Marking test cases in the pytest automated testing framework (selective execution, skipping cases, expected failures)

The mark mechanism provided by pytest supports several functions, such as:

Marking test cases, i.e. tagging them so that only selected cases are run
skip and skipif mark cases to be skipped: skip skips the current case unconditionally, while skipif skips it only when a given condition is met
xfail marks a case as an expected failure
Marking test cases
Sometimes we do not need to execute all the test cases in a project, only some of them, i.e. cases of a certain type or scenario, such as only the smoke cases. To do this, tag the cases with @pytest.mark.<tag name> and filter on the tag when running. Tag names must be registered before they can be used.

Registering tags
The official documentation provides three ways to register tags. Here we only cover two of them, pytest.ini and conftest.py; the rest can be found in the docs if you are interested.

Method 1: Create a new pytest.ini in the project root directory, and register and manage tags in it. Examples are as follows:

[pytest]
 
markers =
    smoke: marks test as smoke
    login
    order: order scenario

Three tags are defined here: smoke, login, and order. The tag description after the colon is optional.
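
After registering, you can confirm that the tags took effect by listing all registered markers from the command line (pytest's built-in --markers option):

# List the registered markers, including the three defined above
pytest --markers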

Method 2: Define a hook function in conftest.py for label registration. The format is as follows:

def pytest_configure(config):
    marker_list = [
        "smoke: marks test as smoke",
        "login",
        "order: order scenario"
    ]
    for marker in marker_list:
        config.addinivalue_line("markers", marker)

Method 2 requires attention to the format: the hook function name pytest_configure and its config parameter must not be changed.
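
As an optional safeguard (not required by either registration method above), pytest's --strict-markers option turns any use of an unregistered marker into an error instead of a warning. A minimal pytest.ini sketch:

[pytest]
 
addopts = --strict-markers
 
markers =
    smoke: marks test as smoke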

How to use
import pytest
 
# mark test function
@pytest.mark.smoke
def test_01():
    print("Execute test_01")
 
def test_02():
    print("Execute test_02")
 
    
# Mark test class
@pytest.mark.order
class TestOrder:
    
    def test_order(self):
        print("Place order")
 
    def test_pay(self):
        print("payment")
 
        
# Multiple tags
@pytest.mark.smoke
@pytest.mark.login
def test_login():
    print("Login")

There is another way to label the test class, as follows:

# Mark test class (single tag)
class TestOrder:
    
    # Add order tags to all test methods in the class
    pytestmark = pytest.mark.order
    
    def test_order(self):
        print("Place order")
 
    def test_pay(self):
        print("payment")
    
    
# Mark test class (multiple tags)
class TestOrder:
    
    # Add order and smoke tags to all test methods in the class
    pytestmark = [pytest.mark.order, pytest.mark.smoke]
    
    def test_order(self):
        print("Place order")
 
    def test_pay(self):
        print("payment")

You can also use pytestmark at module level to tag every test class and test function in the module, as follows:

import pytest
 
# All test functions and test classes in the module will be labeled order and smoke.
pytestmark = [pytest.mark.order, pytest.mark.smoke]
 
def test_01():
    print("Execute test_01")
 
def test_02():
    print("Execute test_02")
 
 
class TestOrder:
    def test_order(self):
        print("Place order")
 
    def test_pay(self):
        print("payment")

Execution method

When running, pass the -m parameter followed by the tag name (a marker expression).

Command line:

# Execute use cases marked as smoke
pytest -m smoke
 
# Execute the use case marked as smoke and marked as login
pytest -m "smoke and login"
 
# Execute use cases marked as smoke or login
pytest -m "smoke or login"

Code Execution:

# Execute use cases marked as smoke
pytest.main(["-m", "smoke"])
 
# Execute use cases marked as smoke or order
pytest.main(["-m", "smoke or order"])
 
# Execute the use case marked as smoke and also marked as login
pytest.main(["-m", "smoke and login"])
 
# Execute use cases marked as smoke and not marked as login
pytest.main(["-m", "smoke and not login"])

Note that calling pytest.main() directly inside the test module itself does not filter the tagged cases of that module as you might expect; all test cases in the module are executed. Example:

import pytest
 
# mark test function
@pytest.mark.smoke
def test_01():
    print("Execute test_01")
 
def test_02():
    print("Execute test_02")
 
    
# Mark test class
@pytest.mark.order
class TestOrder:
    
    def test_order(self):
        print("Place order")
 
    def test_pay(self):
        print("payment")
 
        
# Multiple tags
@pytest.mark.smoke
@pytest.mark.login
def test_login():
    print("Login")
    
if __name__ == '__main__':
    pytest.main(['-s', '-m smoke'])

Running the module gives the following results:

The results show that although the code asks to run only the cases marked smoke, all five test cases are executed; the filter does not take effect.

We need to move the execution code into a separate runner module, such as run.py. The code is as follows:

# run.py
 
import pytest
 
if __name__ == '__main__':
    pytest.main(["-s", "-m", "smoke or order"])

The running results are as follows:

As can be seen from the results, only test cases marked smoke or order are run here.

Marking skips
Sometimes we need to skip certain test cases instead of executing them. For example, old cases may no longer need to run after the code is updated, or certain cases should not run in specific scenarios. In these situations we mark the corresponding test cases so that they are skipped.

pytest provides two ways to mark a skip:

Skip unconditionally: @pytest.mark.skip(reason="skip reason"); the reason is optional.
Skip conditionally, i.e. skip only when a condition is met: @pytest.mark.skipif(condition, reason="skip reason"), for example @pytest.mark.skipif(b > 3, reason="skip reason").
Examples are as follows:

import pytest
 
@pytest.mark.skip(reason="No need to execute test_01")
def test_01():
    print("Execute test_01")
 
@pytest.mark.skipif(2 > 1, reason="If 2 is greater than 1, skip and do not execute")
def test_02():
    print("Execute test_02")
    
    
if __name__ == '__main__':
    pytest.main(['-s'])

Running results:

As you can see from the running results, both cases are skipped. To skip a test class or test module, the approach is the same as tagging a test class or module shown above, so it is not repeated here; see the sketch below.
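
A minimal sketch of both options (the class name and reason strings below are just for illustration):

import sys
import pytest
 
# Module-level: skip every test in this module when the condition holds
pytestmark = pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows")
 
# Class-level: skip every test method in the class unconditionally
@pytest.mark.skip(reason="The order feature is temporarily offline")
class TestOrder:
 
    def test_order(self):
        print("Place order")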

xfail (marking expected failures)
In some scenarios, a test case needs to be marked as an expected failure, for example when testing a function that has not yet been implemented or a bug that has not been fixed. Use @pytest.mark.xfail to mark such cases.

pytest.mark.xfail(condition=None, reason=None, raises=None, run=True, strict=False), with the parameters described as follows:

condition: the condition for the expected failure; the default is None. The case is marked as an expected failure only when the condition is met.

reason: the reason for the expected failure; the default is None. It documents why the case is marked.

strict: keyword parameter, default False.

When strict=False, if the case fails, the result is marked xfail (an expected failure); if the case passes, the result is marked xpass (an unexpected pass).

When strict=True, if the case passes, the result is marked failed.

raises: keyword parameter, default None. One or more expected exception types can be given; if the case fails for a reason other than the expected exception(s), pytest marks the result as failed.

run: keyword parameter, default True. When run=False, pytest does not execute the case at all and marks the result as xfail directly.

Examples of commonly used parameters are as follows:

import pytest
 
# run and strict are default; the assertion fails, so the result is marked xfail
@pytest.mark.xfail(reason="bug to be fixed")
def test_01():
    print("Execute test_01")
    a = "hello"
    b = "hi"
    assert a == b
 
# run and strict are default; the condition holds and the case passes, so the result is marked xpass
@pytest.mark.xfail(condition=2 > 1, reason="bug to be fixed")
def test_02():
    print("Execute test_02")
    a = "hello"
    b = "hi"
    assert a != b
 
# run=False, this use case is not executed and the result is directly marked as xfail
@pytest.mark.xfail(reason="The function has not been developed yet", run=False)
def test_03():
    print("Execute test_03")
    a = "hello"
    b = "hi"
    assert a == b
 
# strict=True; the case passes, so the result is marked failed
@pytest.mark.xfail(reason="The function has not been developed yet", strict=True)
def test_04():
    print("Execute test_04")
    a = "hello"
    b = "he"
    assert b in a
    
if __name__ == '__main__':
    pytest.main(['-s'])

Running results:

As can be seen from the results, test_01 is reported as xfail, test_02 as xpass, test_03 is not executed and is reported directly as xfail, and test_04 is reported as failed.
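
The raises parameter is not covered in the example above. A minimal sketch, assuming a hypothetical test_05 whose failure is the expected ZeroDivisionError:

import pytest
 
# The failure is the expected ZeroDivisionError, so the result is marked xfail;
# any other kind of failure would be reported as failed instead
@pytest.mark.xfail(raises=ZeroDivisionError, reason="division-by-zero bug not fixed yet")
def test_05():
    print("Execute test_05")
    assert 1 / 0 == 1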

Summary
The examples above only illustrate how to use the features provided by @pytest.mark; in real automation work, choose among them flexibly according to the situation.

In a typical automated testing workflow, tagging cases for a particular scenario is the most common usage, for example marking smoke cases for smoke testing. Skipping or conditionally skipping cases is also used frequently, while marking cases as expected failures is needed less often.
