testplan.testing package

Subpackages

Submodules

testplan.testing.base module

Base classes for all Tests

class testplan.testing.base.ProcessRunnerTest(**options)[source]

Bases: Test

A test runner that runs the tests in a separate subprocess. This is useful for running 3rd party testing frameworks (e.g. JUnit, GTest)

The test report is populated by parsing the generated report output file (report.xml by default).

Parameters:
  • name – Test instance name, often used as uid of test entity.

  • binary – Path to the application binary or script.

  • description – Description of test instance.

  • proc_env – Environment overrides for subprocess.Popen; context values (when referring to another driver) and jinja2 templates (when referring to self) will be resolved.

  • proc_cwd – Directory override for subprocess.Popen.

  • timeout

    Optional timeout for the subprocess. If the process runs longer than this limit, it will be killed and the test will be marked as ERROR.

    The duration can be given in seconds or as a string representation (e.g. 10, 2.3, '1m 30s', '1h 15m').

  • ignore_exit_codes – When the test process exits with a nonzero status code, the test will be marked as ERROR. This can be disabled by providing a list of exit codes to ignore.

  • pre_args – List of arguments to be prepended to the arguments of the test runnable.

  • post_args – List of arguments to be appended after the arguments of the test runnable.

Also inherits all Test options.
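The timeout string format above ('1m 30s', '1h 15m') can be illustrated with a small standalone parser. This is a sketch only, not Testplan's actual implementation, and the function name parse_timeout is hypothetical:

```python
import re

def parse_timeout(value):
    """Convert a timeout such as 10, 2.3, '1m 30s' or '1h 15m' to seconds.

    Illustrative sketch only; Testplan's real parsing logic may differ.
    """
    if isinstance(value, (int, float)):
        return float(value)
    units = {"h": 3600, "m": 60, "s": 1}
    total = 0.0
    # Collect every "<number><unit>" pair, e.g. '1m 30s' -> [('1', 'm'), ('30', 's')]
    for number, unit in re.findall(r"(\d+(?:\.\d+)?)\s*([hms])", value):
        total += float(number) * units[unit]
    return total
```

For example, parse_timeout('1m 30s') yields 90.0 and parse_timeout('1h 15m') yields 4500.0.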

CONFIG

alias of ProcessRunnerTestConfig

aborting() None[source]

Aborting logic for self.

add_main_batch_steps() None[source]

Runnable steps to be executed while environment is running.

add_post_resource_steps() None[source]

Runnable steps to run after environment stopped.

add_pre_resource_steps() None[source]

Runnable steps to be executed before environment starts.

apply_xfail_tests() None[source]

Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…).

get_proc_env() Dict[source]

Fabricate the environment variables for the subprocess. Precedence: user-specified > hardcoded > system env
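The documented precedence amounts to a layered dictionary merge, lowest priority first. A minimal sketch (build_proc_env is a hypothetical name, not Testplan's internals):

```python
import os

def build_proc_env(hardcoded, user_specified):
    """Merge environment layers; later updates win.

    Precedence (lowest to highest): system env < hardcoded < user-specified.
    """
    env = dict(os.environ)      # lowest priority: inherited system environment
    env.update(hardcoded)       # overridden by framework-hardcoded values
    env.update(user_specified)  # highest priority: the user's proc_env overrides
    return env
```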

get_process_check_report(retcode: int, stdout: str, stderr: str) TestGroupReport[source]

When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions. Logs of stdout & stderr can be saved as attachments.

get_test_context(list_cmd=None)[source]

Run the shell command generated by list_command in a subprocess, then parse and return the generated stdout via parse_test_context.

Parameters:

list_cmd (str) – Command to list all test suites and testcases

Returns:

Result returned by parse_test_context.

Return type:

list of list

list_command() List[str] | None[source]

List custom arguments before and after the executable if they are defined.

Returns:

List of commands to run before and after the test process, as well as the test executable itself.

list_command_filter(testsuite_pattern: str, testcase_pattern: str)[source]

Return the base list command with additional filtering to list a specific set of testcases. To be implemented by concrete subclasses.

parse_test_context(test_list_output: bytes) List[List][source]

Override this to generate a nested list of test suite and test case context. Only required if list_command is overridden to return a command.

The result will later on be used by test listers to generate the test context output for this test instance.

Sample output:

[
    ['SuiteAlpha', ['testcase_one', 'testcase_two']],
    ['SuiteBeta', ['testcase_one', 'testcase_two']],
]
Parameters:

test_list_output – stdout from the list command

Returns:

Parsed test context from command line output of the 3rd party testing library.
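A hypothetical override for a tool that lists tests as Suite::testcase lines might look like the following sketch. The listing format here is an assumption for illustration; a real subclass must match its own tool's output:

```python
def parse_test_context(test_list_output):
    """Parse 'Suite::testcase' lines into [['Suite', [cases...]], ...].

    Hypothetical listing format; not tied to any specific testing tool.
    """
    suites = {}
    for line in test_list_output.decode().splitlines():
        line = line.strip()
        if "::" not in line:
            continue  # skip banners, blank lines, etc.
        suite, case = line.split("::", 1)
        suites.setdefault(suite, []).append(case)
    # Nested-list shape expected by test listers.
    return [[suite, cases] for suite, cases in suites.items()]
```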

prepare_binary() str[source]

Resolve the real binary path to run.

process_test_data(test_data)[source]

Process raw test data that was collected and return a list of entries (e.g. TestGroupReport, TestCaseReport) that will be appended to the current test instance’s report as children.

Parameters:

test_data (xml.etree.Element) – Root node of parsed raw test data

Returns:

List of sub reports

Return type:

list of TestGroupReport / TestCaseReport

read_test_data()[source]

Parse the output generated by the 3rd-party testing tool; the parsed content will then be handled by process_test_data.

You should override this function with custom logic to parse the contents of the generated file.

property report_path: str | None
property resolved_bin: str
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict = None) Generator[source]

Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.

For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases, though it means that testcase reports will not be generated until all testcases have finished running.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names

  • testcase_pattern – pattern to match for testcase names

  • shallow_report – shallow report entry

Returns:

generator yielding testcase reports and UIDs for merge step

run_tests() None[source]

Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.

Raises:

ValueError – upon invalid test command

property stderr: str | None
property stdout: str | None
test_command() List[str][source]

Add custom arguments before and after the executable if they are defined.

Returns:

List of commands to run before and after the test process, as well as the test executable itself.

test_command_filter(testsuite_pattern: str, testcase_pattern: str)[source]

Return the base test command with additional filtering to run a specific set of testcases. To be implemented by concrete subclasses.

timeout_callback()[source]

Callback function that will be called by the daemon thread if a timeout occurs (e.g. process runs longer than specified timeout value).

Raises:

RuntimeError

property timeout_log: str | None
update_test_report() None[source]

Update current instance’s test report with generated sub reports from raw test data. Skip report updates if the process was killed.

Raises:

ValueError – in case the test report already has children

class testplan.testing.base.ProcessRunnerTestConfig(**options)[source]

Bases: TestConfig

Configuration object for ProcessRunnerTest.

classmethod get_options()[source]

Runnable specific config options.

class testplan.testing.base.ResourceHooks(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: str, Enum

AFTER_START = 'After Start'
AFTER_STOP = 'After Stop'
BEFORE_START = 'Before Start'
BEFORE_STOP = 'Before Stop'
ENVIRONMENT_START = 'Environment Start'
ENVIRONMENT_STOP = 'Environment Stop'
ERROR_HANDLER = 'Error Handler'
STARTING = 'Starting'
STOPPING = 'Stopping'
class testplan.testing.base.Test(name: str, description: str = None, environment: list | ~typing.Callable = None, dependencies: dict | ~typing.Callable = None, initial_context: dict | ~typing.Callable = None, before_start: callable = None, after_start: callable = None, before_stop: callable = None, after_stop: callable = None, error_handler: callable = None, test_filter: ~testplan.testing.filtering.BaseFilter = None, test_sorter: ~testplan.testing.ordering.BaseSorter = None, stdout_style: ~testplan.report.testing.styles.Style = None, tags: str | ~typing.Iterable[str] = None, result: ~typing.Type[~testplan.testing.result.Result] = <class 'testplan.testing.result.Result'>, **options)[source]

Bases: Runnable

Base test instance class. Any runnable that runs a test can inherit from this class and override certain methods to customize functionality.

Parameters:
  • name – Test instance name, often used as uid of test entity.

  • description – Description of test instance.

  • environment – List of drivers to be started and made available on tests execution. Can also take a callable that returns the list of drivers.

  • dependencies – driver start-up dependencies as a directed graph, e.g. {server1: (client1, client2)} indicates server1 shall start before client1 and client2. Can also take a callable that returns a dict.

  • initial_context – key: value pairs that will be made available as context for drivers in environment. Can also take a callable that returns a dict.

  • test_filter – Class with test filtering logic.

  • test_sorter – Class with tests sorting logic.

  • before_start – Callable to execute before starting the environment.

  • after_start – Callable to execute after starting the environment.

  • before_stop – Callable to execute before stopping the environment.

  • after_stop – Callable to execute after stopping the environment.

  • error_handler – Callable to execute when a step hits an exception.

  • stdout_style – Console output style.

  • tags – User defined tag value.

  • result – Result class definition for result object made available from within the testcases.

Also inherits all Runnable options.
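The dependencies graph above describes a start order: each key must start before the drivers in its value. Its effect can be sketched with the standard library's graphlib (driver names are plain strings here for illustration; in Testplan they would be driver instances):

```python
from graphlib import TopologicalSorter

# Hypothetical driver names: server1 must start before client1 and client2.
dependencies = {"server1": ("client1", "client2")}

# Convert {before: afters} into graphlib's {node: predecessors} form.
graph = {}
for before, afters in dependencies.items():
    graph.setdefault(before, set())
    for after in afters:
        graph.setdefault(after, set()).add(before)

# A valid start order always places server1 before both clients.
start_order = list(TopologicalSorter(graph).static_order())
```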

CONFIG

alias of TestConfig

ENVIRONMENT

alias of TestEnvironment

RESULT

alias of TestResult

add_post_main_steps() None[source]

Runnable steps to run before environment stopped.

add_post_resource_steps() None[source]

Runnable steps to run after environment stopped.

add_pre_main_steps() None[source]

Runnable steps to run after environment started.

add_pre_resource_steps() None[source]

Runnable steps to be executed before environment starts.

add_start_resource_steps() None[source]

Runnable steps to start environment

add_stop_resource_steps() None[source]

Runnable steps to stop environment

property collect_code_context: bool

Collecting the file path, line number and code context of the assertions if enabled.

property description: str
property driver_info: bool
dry_run() RunnableResult[source]

Return an empty report skeleton for this test, including the full testsuite / testcase hierarchy. Does not run any tests.

filter_levels = [FilterLevel.TEST]
get_filter_levels() List[FilterLevel][source]
get_metadata() TestMetadata[source]
get_stdout_style(passed: bool)[source]

Stdout style for status.

get_tags_index() str | Iterable[str] | Dict[source]

Return the tag index that will be used for filtering. By default, this is equal to the native tags for this object.

However, subclasses may build larger tag indices by collecting tags from their children for example.

get_test_context()[source]
log_test_results(top_down: bool = True)[source]

Log test results, e.g. for ProcessRunnerTest or PyTest.

Parameters:

top_down – Whether to log test results using a top-down or a bottom-up approach.

property name: str

Instance name.

propagate_tag_indices() None[source]

Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.

property report: TestGroupReport

Shortcut for the test report.

reset_context() None[source]
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*') None[source]

For a Test to be run interactively, it must implement this method.

It is expected to run tests iteratively and yield a tuple containing a testcase report and the list of parent UIDs required to merge the testcase report into the main report tree.

If it is not possible or very inefficient to run individual testcases in an iterative manner, this method may instead run all the testcases in a batch and then return an iterator for the testcase reports and parent UIDs.

Parameters:
  • testsuite_pattern – Filter pattern for testsuite level.

  • testcase_pattern – Filter pattern for testcase level.

Yield:

generate tuples containing testcase reports and a list of the UIDs required to merge this into the main report tree, starting with the UID of this test.
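The shape of the yielded (report, parent_uids) tuples can be sketched as follows. The reports are plain dicts and the results data is fabricated for illustration; a real Test yields TestCaseReport objects:

```python
import fnmatch

def run_testcases_iter(testsuite_pattern="*", testcase_pattern="*"):
    """Yield (testcase_report, parent_uids) tuples, filtered by pattern.

    Sketch only: reports are dicts and results are hardcoded fake data.
    """
    # Hypothetical pre-collected results: {suite: {case: passed}}
    results = {"SuiteAlpha": {"testcase_one": True, "testcase_two": False}}
    for suite, cases in results.items():
        if not fnmatch.fnmatch(suite, testsuite_pattern):
            continue
        for case, passed in cases.items():
            if not fnmatch.fnmatch(case, testcase_pattern):
                continue
            report = {"name": case, "passed": passed}
            # Parent UIDs start with this test's own UID, then the suite.
            yield report, ["MyTest", suite]
```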

set_discover_path(path: str) None[source]

If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.

Parameters:

path – the absolute path where the task has been discovered

should_log_test_result(depth: int, test_obj, style) Tuple[bool, int][source]

Whether to log test result and if yes, then with what indent.

Returns:

whether to log test results (Suite report, Testcase report, or result of assertions) and the indent that should be kept at the start of lines

Raises:
  • ValueError – if met with an unexpected test group category

  • TypeError – if met with an unsupported test object

should_run() bool[source]

Determines if current object should run.

start_test_resources() None[source]

Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.

property stdout_style

Stdout style input.

stop_test_resources() None[source]

Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by sub-classes.

property test_context
uid() str[source]

Instance name uid.

class testplan.testing.base.TestConfig(**options)[source]

Bases: RunnableConfig

Configuration object for Test.

classmethod get_options()[source]

Runnable specific config options.

class testplan.testing.base.TestResult[source]

Bases: RunnableResult

Result object for Test runnable test execution framework base class and all sub classes.

Contains a test report object.

testplan.testing.filtering module

Filtering logic for Multitest, Suites and testcase methods (of Suites)

class testplan.testing.filtering.And(*filters)[source]

Bases: MetaFilter

Meta filter that returns True if ALL of the child filters return True.

composed_filter(test, suite, case)[source]
operator_str = '&'
class testplan.testing.filtering.BaseFilter[source]

Bases: object

Base class for filters, supports bitwise operators for composing multiple filters.

e.g. (FilterA(…) & FilterB(…)) | ~FilterC(…)

filter(test, suite, case) bool[source]
map(f)[source]
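The bitwise composition shown above relies on standard Python operator overloading (__and__, __or__, __invert__). A minimal standalone sketch of the mechanism, not Testplan's actual code:

```python
class SketchFilter:
    """Minimal stand-in showing how &, | and ~ compose filters."""

    def __init__(self, func):
        self.func = func  # callable (test, suite, case) -> bool

    def filter(self, test, suite, case):
        return self.func(test, suite, case)

    def __and__(self, other):
        # Passes only when both child filters pass.
        return SketchFilter(
            lambda t, s, c: self.filter(t, s, c) and other.filter(t, s, c)
        )

    def __or__(self, other):
        # Passes when either child filter passes.
        return SketchFilter(
            lambda t, s, c: self.filter(t, s, c) or other.filter(t, s, c)
        )

    def __invert__(self):
        # Inverts the child filter's result.
        return SketchFilter(lambda t, s, c: not self.filter(t, s, c))

name_is_smoke = SketchFilter(lambda t, s, c: t == "smoke")
always = SketchFilter(lambda t, s, c: True)
combined = (name_is_smoke & always) | ~always
```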
class testplan.testing.filtering.BaseTagFilter(tags)[source]

Bases: Filter

Base filter class for tag based filtering.

category = 3
filter_case(case)[source]
filter_suite(suite)[source]
filter_test(test)[source]
get_match_func() Callable[source]
class testplan.testing.filtering.Filter[source]

Bases: BaseFilter

Noop filter class, users can inherit from this to implement their own filters.

Returns True by default for all filtering operations; implicitly checks the test instance's filter_levels declaration to apply the filtering logic.

category = 1
filter(test, suite, case)[source]
filter_case(case) bool[source]
filter_suite(suite) bool[source]
filter_test(test) bool[source]
map(f)[source]
class testplan.testing.filtering.FilterCategory(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: IntEnum

COMMON = 1
PATTERN = 2
TAG = 3
class testplan.testing.filtering.FilterLevel(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

This enum is used by test classes (e.g. testplan.testing.base.Test) to declare the depth of filtering logic while the filter method is run.

By default only test (e.g. top) level filtering is used.

TEST = 'test'
TESTCASE = 'testcase'
TESTSUITE = 'testsuite'
class testplan.testing.filtering.MetaFilter(*filters)[source]

Bases: BaseFilter

Higher level filter that allows composition of other filters.

composed_filter(_test, _suite, _case) bool[source]
filter(test, suite, case)[source]
map(f)[source]
operator_str = None
class testplan.testing.filtering.Not(filter_obj)[source]

Bases: BaseFilter

Meta filter that returns the inverse of the original filter result.

filter(test, suite, case)[source]
map(f)[source]
class testplan.testing.filtering.Or(*filters)[source]

Bases: MetaFilter

Meta filter that returns True if ANY of the child filters return True.

composed_filter(test, suite, case)[source]
operator_str = '|'
class testplan.testing.filtering.Pattern(