Since version 1.0 py.test features the "funcarg" mechanism, which allows a Python test function to receive arguments that are independently provided by factory functions. Factory functions let you encapsulate all setup and fixture glue code in nicely separated objects and provide a natural way of writing Python test functions. Compared to the xUnit style, the new mechanism keeps fixture creation out of the test functions themselves.
If you find issues or have further suggestions for improving the mechanism, you are welcome to check out the contact possibilities page.
Here is a basic, step-wise example for handling application specific test setup. The goal is to have one place for the glue and test support code that bootstraps and configures application objects, allowing test modules and test functions to stay ignorant of the details involved.
Let's write a simple test function living in a test file test_sample.py that uses a mysetup funcarg for accessing test specific setup.
# ./test_sample.py
def test_answer(mysetup):
    app = mysetup.myapp()
    answer = app.question()
    assert answer == 42
To run this test, py.test needs to find and call a factory to obtain the required mysetup function argument. The test function interacts with the provided application specific setup.
To provide the mysetup function argument we define a factory function in a local plugin by putting the following code into a local conftest.py:
# ./conftest.py
from myapp import MyApp

def pytest_funcarg__mysetup(request):
    return MySetup()

class MySetup:
    def myapp(self):
        return MyApp()
To run the example we represent our application by putting a pseudo MyApp object into myapp.py:
# ./myapp.py
class MyApp:
    def question(self):
        return 6 * 9
You can now run the test with py.test test_sample.py, which will show this failure:
========================= test session starts =========================
python: platform linux2 -- Python 2.6.2
test object 1: /home/hpk/hg/py/trunk/example/funcarg/mysetup

test_sample.py F

============================== FAILURES ===============================
_____________________________ test_answer _____________________________

mysetup = <mysetup.conftest.MySetup instance at 0xa020eac>

    def test_answer(mysetup):
        app = mysetup.myapp()
        answer = app.question()
>       assert answer == 42
E       assert 54 == 42

test_sample.py:5: AssertionError
====================== 1 failed in 0.11 seconds =======================
This means that our mysetup object was successfully instantiated: we asked it to provide an application instance, and checking its question method resulted in the wrong answer. If you are confused as to what the concrete question or answer actually means, please see here :) Otherwise proceed to step 2.
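For illustration only, a variant of myapp.py whose question method returns the canonical answer would make the test pass:

# ./myapp.py -- a fixed variant, for illustration
class MyApp:
    def question(self):
        return 6 * 7   # 42, so test_answer succeeds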
If you provide a "funcarg" from a plugin you can easily make its methods depend on command line options or environment settings. We update the conftest.py of the previous example to add a command line option and to offer a new mysetup method:
# ./conftest.py
import py
from myapp import MyApp

def pytest_funcarg__mysetup(request):
    return MySetup(request)

def pytest_addoption(parser):
    parser.addoption("--ssh", action="store", default=None,
        help="specify ssh host to run tests with")

class MySetup:
    def __init__(self, request):
        self.config = request.config

    def myapp(self):
        return MyApp()

    def getsshconnection(self):
        host = self.config.option.ssh
        if host is None:
            py.test.skip("specify ssh host with --ssh")
        return py.execnet.SshGateway(host)
Now any test function can use the mysetup.getsshconnection() method like this:
# ./test_ssh.py
class TestClass:
    def test_function(self, mysetup):
        conn = mysetup.getsshconnection()
        # work with conn
Running py.test test_ssh.py without specifying a command line option will result in a skipped test_function:
========================= test session starts =========================
python: platform linux2 -- Python 2.6.2
test object 1: test_ssh.py

test_ssh.py s

________________________ skipped test summary _________________________
conftest.py:23: [1] Skipped: 'specify ssh host with --ssh'
====================== 1 skipped in 0.11 seconds ======================
Note especially how the test function stays free of any knowledge about how to construct test state values or when and with what message to skip. The test function can concentrate on actual test code while test state factories can interact with the execution of tests.
If you specify the option on the command line, e.g. py.test --ssh=python.org, the test will get un-skipped and actually execute.
The same approach lets you specify and select acceptance tests. The following conftest.py offers an accept funcarg and adds a command line option for running these (slow) tests:

# ./conftest.py
import py

def pytest_addoption(parser):
    group = parser.getgroup("myproject")
    group.addoption("-A", dest="acceptance", action="store_true",
        help="run (slow) acceptance tests")

def pytest_funcarg__accept(request):
    return AcceptFuncarg(request)

class AcceptFuncarg:
    def __init__(self, request):
        if not request.config.option.acceptance:
            py.test.skip("specify -A to run acceptance tests")
        self.tmpdir = request.config.mktemp(request.function.__name__, numbered=True)

    def run(self, cmd):
        """ called by test code to execute an acceptance test. """
        self.tmpdir.chdir()
        return py.process.cmdexec(cmd)
and the actual test function example:
def test_some_acceptance_aspect(accept):
    accept.tmpdir.mkdir("somesub")
    result = accept.run("ls -la")
    assert "somesub" in result
If you run this test without specifying the command line option, the test will be skipped with an appropriate message. Otherwise you can start adding convenience and test support methods to your AcceptFuncarg to drive running of tools or applications and to provide ways of making assertions about their output.
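For instance, such a convenience method might wrap a project-specific command; a minimal sketch, assuming a hypothetical mytool executable:

# ./conftest.py (excerpt)
class AcceptFuncarg:
    # __init__ and run() as shown above

    def run_mytool(self, *args):
        # run the assumed "mytool" command inside the test tempdir
        out = self.run("mytool " + " ".join(args))
        assert out.strip(), "mytool produced no output"
        return out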
For larger scale setups it's sometimes useful to decorate a funcarg just for a particular test module. We can extend the accept example by putting this into our test module:
def pytest_funcarg__accept(request):
    # call the next factory (living in our conftest.py)
    arg = request.getfuncargvalue("accept")
    # create a special layout in our tempdir
    arg.tmpdir.mkdir("special")
    return arg

class TestSpecialAcceptance:
    def test_sometest(self, accept):
        assert accept.tmpdir.join("special").check()
Our module level factory will be invoked first and it can ask its request object to call the next factory and then decorate its result. This mechanism allows us to stay ignorant of how/where the function argument is provided - in our example from a conftest plugin.
Sidenote: the temporary directories used here are instances of the py.path.local class, which provides many of the os.path methods in a convenient way.
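A few py.path.local operations, as a quick illustration (the path shown is just an example):

import py
p = py.path.local("/tmp/example")  # wrap a filesystem path
sub = p.join("special")            # child path, akin to os.path.join
sub.ensure(dir=1)                  # create the directory if needed
assert sub.check(dir=1)            # check() verifies existence and kind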
Test functions can specify one or more arguments ("funcargs"), and a test module or plugin can define factory functions that provide the function argument values. Let's look at a simple self-contained example that you can put into a test module:
# ./test_simplefactory.py
def pytest_funcarg__myfuncarg(request):
    return 42

def test_function(myfuncarg):
    assert myfuncarg == 17
If you run this with py.test test_simplefactory.py you see something like this:
=========================== test session starts ============================
python: platform linux2 -- Python 2.6.2
test object 1: /home/hpk/hg/py/trunk/example/funcarg/test_simplefactory.py

test_simplefactory.py F

================================ FAILURES ==================================
______________________________ test_function _______________________________

myfuncarg = 42

    def test_function(myfuncarg):
>       assert myfuncarg == 17
E       assert 42 == 17

test_simplefactory.py:6: AssertionError
======================== 1 failed in 0.11 seconds ==========================
This means that the test function got executed and the assertion failed. Here is how py.test comes to execute this test function: test_function is collected because of its test_ prefix; it requires a function argument named myfuncarg, so py.test looks up a factory by the name pytest_funcarg__myfuncarg; the factory is called with a request object, and its return value is passed to the test function as the myfuncarg argument.
Note that if you misspell a function argument or request one that isn't available, an error listing the available function arguments is reported.
For more interesting factory functions that make good use of the request object please see the application setup tutorial example.
Request objects are passed to funcarg factories and give access to test configuration, test context, and useful caching and finalization helpers. Here is a list of attributes (see the sketch after the list for example usage):
request.function: the python function object requesting the argument
request.cls: the class object in which the test function is defined, or None
request.module: the module object in which the test function is defined
request.config: access to command line options and general config
request.param: if it exists, it was passed by a previous metafunc.addcall invocation
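Here is a minimal sketch of a factory consulting these attributes; the info funcarg name and the dotted-string format are illustrative choices:

def pytest_funcarg__info(request):
    # build a dotted string identifying the requesting test context
    parts = [request.module.__name__]
    if request.cls is not None:
        parts.append(request.cls.__name__)
    parts.append(request.function.__name__)
    return ".".join(parts)

def test_shows_context(info):
    assert "test_shows_context" in info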
def cached_setup(setup, teardown=None, scope="module", extrakey=None):
    """ cache and return result of calling setup().

    The scope and the ``extrakey`` determine the cache key.
    The scope also determines when teardown(result) will be called.
    Valid scopes are:

    scope == 'function': when the single test function run finishes.
    scope == 'module': when tests in a different module are run.
    scope == 'session': when tests of the session have run.
    """
Calling request.cached_setup() helps you manage fixture objects across several scopes. For example, to create a Database object that is set up only once during a test session you can use the helper like this:
def pytest_funcarg__database(request):
    return request.cached_setup(
        setup=lambda: Database("..."),
        teardown=lambda val: val.close(),
        scope="session",
    )
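The helper works the same way for per-function resources; here is a minimal sketch using only the standard library (the workdir funcarg name is an assumption):

import shutil
import tempfile

def pytest_funcarg__workdir(request):
    # a fresh directory for each test function, removed afterwards
    return request.cached_setup(
        setup=lambda: tempfile.mkdtemp(),
        teardown=lambda path: shutil.rmtree(path),
        scope="function",
    )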
def getfuncargvalue(name):
    """ Lookup and call function argument factory for the given name.

    Each function argument is only created once per function setup.
    """
request.getfuncargvalue(name) calls another funcarg factory. You can use it if you want to decorate a funcarg, i.e. provide the "normal" value but add something extra. If no factory can be found, a request.Error exception is raised.
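Analogous to the accept example earlier, a test module could decorate the session-cached database funcarg from above; the create_test_tables method is a hypothetical extra-setup step:

def pytest_funcarg__database(request):
    # ask for the "normal" value from the next factory (in conftest.py)
    db = request.getfuncargvalue("database")
    db.create_test_tables()  # hypothetical module-specific setup
    return db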
You can parametrize multiple runs of the same test function by adding new test function calls with different function argument values. Let's look at a simple self-contained example:
# ./test_example.py
def pytest_generate_tests(metafunc):
    if "numiter" in metafunc.funcargnames:
        for i in range(10):
            metafunc.addcall(funcargs=dict(numiter=i))

def test_func(numiter):
    assert numiter < 9
If you run this with py.test test_example.py you'll get:
============================= test session starts ==========================
python: platform linux2 -- Python 2.6.2
test object 1: /home/hpk/hg/py/trunk/test_example.py

test_example.py .........F

================================ FAILURES ==================================
__________________________ test_func.test_func[9] __________________________

numiter = 9

    def test_func(numiter):
>       assert numiter < 9
E       assert 9 < 9

/home/hpk/hg/py/trunk/test_example.py:10: AssertionError
Here is what happens in detail: during collection py.test calls the pytest_generate_tests hook for test_func, and each metafunc.addcall() invocation schedules one run of the test function with the given numiter value. Each run is reported under an id, here the stringified counter, which is why the failing run shows up as test_func[9].
metafunc objects are passed to the pytest_generate_tests hook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where the test function is defined (see the sketch after the list below):
metafunc.funcargnames: set of required function arguments for the given function
metafunc.function: the underlying python test function
metafunc.cls: the class object in which the test function is defined, or None
metafunc.module: the module object in which the test function is defined
metafunc.config: access to command line options and general config
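For example, metafunc.module allows a hook to pick up parametrization values defined in the test module itself; a minimal sketch, with the module-level numbers list as an assumed convention:

# ./test_module_params.py
numbers = [1, 2, 3]

def pytest_generate_tests(metafunc):
    if "num" in metafunc.funcargnames:
        # read values from the module that defines the test function
        for n in getattr(metafunc.module, "numbers", []):
            metafunc.addcall(funcargs=dict(num=n))

def test_positive(num):
    assert num > 0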
def addcall(funcargs={}, id=None, param=None):
    """ trigger a new test function call. """
funcargs can be a dictionary of argument names mapped to values; providing values this way is called direct parametrization.
If you provide an id it will be used for reporting and identification purposes. If you don't supply one, the stringified counter of the list of added calls is used. id values need to be unique between all invocations for a given test function.
param, if specified, will be seen by any funcarg factory as a request.param attribute. Setting it is called indirect parametrization.
Indirect parametrization is preferable if test values are expensive to set up or can only be created in certain environments. Test generators, and thus addcall() invocations, run during test collection, which is separate from the actual test setup and run phase. With distributed testing, collection and test setup/run happen in different processes.
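Here is a minimal self-contained sketch of indirect parametrization; the FakeDB class stands in for an expensive real resource:

# ./test_indirect.py
class FakeDB:
    def __init__(self, backend):
        self.backend = backend

def pytest_generate_tests(metafunc):
    if "db" in metafunc.funcargnames:
        # param is not a funcarg value itself; it reaches the factory
        # below as request.param at test setup time
        for backend in ("sqlite", "mysql"):
            metafunc.addcall(id=backend, param=backend)

def pytest_funcarg__db(request):
    # the expensive object is only created during test setup, possibly
    # in a different process under distributed testing
    return FakeDB(request.param)

def test_backend_name(db):
    assert db.backend in ("sqlite", "mysql")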