testplan.testing package

Subpackages

Submodules

testplan.testing.base module

Base classes for all Tests

class testplan.testing.base.ProcessRunnerTest(**options)[source]

Bases: testplan.testing.base.Test

A test runner that runs the tests in a separate subprocess. This is useful for running 3rd party testing frameworks (e.g. JUnit, GTest).

The test report will be populated by parsing the generated report output file (report.xml by default).

Parameters:
  • name – Test instance name, often used as uid of test entity.
  • binary – Path to the application binary or script.
  • description – Description of test instance.
  • proc_env – Environment overrides for subprocess.Popen; context values (when referring to another driver) and jinja2 templates (when referring to self) will be resolved.
  • proc_cwd – Directory override for subprocess.Popen.
  • timeout

    Optional timeout for the subprocess. If the process runs longer than this limit, it will be killed and the test will be marked as ERROR.

    Accepts a duration in seconds or a string representation (e.g. 10, 2.3, '1m 30s', '1h 15m').

  • ignore_exit_codes – When the test process exits with a nonzero status code, the test will be marked as ERROR. This can be disabled by providing a list of exit codes to ignore.
  • pre_args – List of arguments to be prepended before the arguments of the test runnable.
  • post_args – List of arguments to be appended after the arguments of the test runnable.

Also inherits all Test options.
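
As a concrete illustration, a ProcessRunnerTest subclass such as GTest (one of the 3rd party framework runners mentioned above) can be scheduled as sketched below. This is a minimal sketch, assuming a Testplan instance named plan created via the @test_plan decorator and the usual testplan.testing.cpp import path; the binary path and option values are placeholders.

from testplan.testing.cpp import GTest

plan.add(
    GTest(
        name="My GTest",
        binary="/path/to/gtest_binary",  # placeholder path
        timeout="1h 15m",                # string durations are accepted
        ignore_exit_codes=[1],           # treat exit code 1 as non-fatal
    )
)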

CONFIG

alias of ProcessRunnerTestConfig

aborting() → None[source]

Aborting logic for self.

add_main_batch_steps() → None[source]

Runnable steps to be executed while environment is running.

add_post_resource_steps() → None[source]

Runnable steps to run after environment stopped.

add_pre_resource_steps() → None[source]

Runnable steps to be executed before environment starts.

apply_xfail_tests() → None[source]

Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…).

get_proc_env() → Dict[KT, VT][source]

Fabricate the environment variables for the subprocess. Precedence: user-specified > hardcoded > system env.

get_process_check_report(retcode: int, stdout: str, stderr: str) → testplan.report.testing.base.TestGroupReport[source]

When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions. Logs of stdout & stderr can be saved as attachments.

get_test_context(list_cmd=None)[source]

Run the shell command generated by list_command in a subprocess, then parse the resulting stdout via parse_test_context and return the result.

Parameters:list_cmd (str) – Command to list all test suites and testcases
Returns:Result returned by parse_test_context.
Return type:list of list
list_command() → Optional[List[str]][source]

List custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
list_command_filter(testsuite_pattern: str, testcase_pattern: str)[source]

Return the base list command with additional filtering to list a specific set of testcases. To be implemented by concrete subclasses.

parse_test_context(test_list_output: bytes) → List[List[T]][source]

Override this to generate a nested list of test suite and test case context. Only required if list_command is overridden to return a command.

The result will later on be used by test listers to generate the test context output for this test instance.

Sample output:

[
    ['SuiteAlpha', ['testcase_one', 'testcase_two']],
    ['SuiteBeta', ['testcase_one', 'testcase_two']],
]
Parameters:test_list_output – stdout from the list command
Returns:Parsed test context from command line output of the 3rd party testing library.
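
A minimal sketch of such an override is shown below; the subclass name and the assumption that the hypothetical list command prints one "SuiteName.testcase_name" entry per line are illustrative only.

from typing import List

from testplan.testing.base import ProcessRunnerTest


class MyToolTest(ProcessRunnerTest):  # hypothetical subclass
    def parse_test_context(self, test_list_output: bytes) -> List[list]:
        # Assumes the list command prints "Suite.testcase" per line.
        suites = {}
        for line in test_list_output.decode().splitlines():
            line = line.strip()
            if not line:
                continue
            suite, _, case = line.partition(".")
            suites.setdefault(suite, []).append(case)
        # Nested [suite, [testcases]] structure as in the sample output above.
        return [[suite, cases] for suite, cases in suites.items()]
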
prepare_binary() → str[source]

Resolve the real binary path to run

process_test_data(test_data)[source]

Process raw test data that was collected and return a list of entries (e.g. TestGroupReport, TestCaseReport) that will be appended to the current test instance’s report as children.

Parameters:test_data (xml.etree.Element) – Root node of parsed raw test data
Returns:List of sub reports
Return type:list of TestGroupReport / TestCaseReport
read_test_data()[source]

Parse the output generated by the 3rd party testing tool; the parsed content will then be handled by process_test_data.

You should override this function with custom logic to parse the contents of the generated file.

report_path
resolved_bin
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]

Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.

For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases; however, it means that testcase reports will not be generated until all testcases have finished running.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge step

run_tests() → None[source]

Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.

Raises:ValueError – upon invalid test command
stderr
stdout
test_command() → List[str][source]

Add custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
test_command_filter(testsuite_pattern: str, testcase_pattern: str)[source]

Return the base test command with additional filtering to run a specific set of testcases. To be implemented by concrete subclasses.

timeout_callback()[source]

Callback function that will be called by the daemon thread if a timeout occurs (e.g. process runs longer than specified timeout value).

Raises:RuntimeError
timeout_log
update_test_report() → None[source]

Update current instance’s test report with generated sub reports from raw test data. Skip report updates if the process was killed.

Raises:ValueError – in case the test report already has children
class testplan.testing.base.ProcessRunnerTestConfig(**options)[source]

Bases: testplan.testing.base.TestConfig

Configuration object for ProcessRunnerTest.

classmethod get_options()[source]

Runnable specific config options.

class testplan.testing.base.ResourceHooks[source]

Bases: enum.Enum

An enumeration.

after_start = 'After Start'
after_stop = 'After Stop'
before_start = 'Before Start'
before_stop = 'Before Stop'
class testplan.testing.base.Test(name: str, description: str = None, environment: Union[list, Callable] = None, dependencies: Union[dict, Callable] = None, initial_context: Union[dict, Callable] = None, before_start: callable = None, after_start: callable = None, before_stop: callable = None, after_stop: callable = None, error_handler: callable = None, test_filter: testplan.testing.filtering.BaseFilter = None, test_sorter: testplan.testing.ordering.BaseSorter = None, stdout_style: testplan.report.testing.styles.Style = None, tags: Union[str, Iterable[str]] = None, result: Type[testplan.testing.result.Result] = <class 'testplan.testing.result.Result'>, **options)[source]

Bases: testplan.common.entity.base.Runnable

Base test instance class. Any runnable that runs a test can inherit from this class and override certain methods to customize functionality.

Parameters:
  • name – Test instance name, often used as uid of test entity.
  • description – Description of test instance.
  • environment – List of drivers to be started and made available on tests execution. Can also take a callable that returns the list of drivers.
  • dependencies – Driver start-up dependencies as a directed graph, e.g. {server1: (client1, client2)} indicates server1 shall start before client1 and client2. Can also take a callable that returns a dict.
  • initial_context – key: value pairs that will be made available as context for drivers in environment. Can also take a callable that returns a dict.
  • test_filter – Class with test filtering logic.
  • test_sorter – Class with tests sorting logic.
  • before_start – Callable to execute before starting the environment.
  • after_start – Callable to execute after starting the environment.
  • before_stop – Callable to execute before stopping the environment.
  • after_stop – Callable to execute after stopping the environment.
  • error_handler – Callable to execute when a step hits an exception.
  • stdout_style – Console output style.
  • tags – User defined tag value.
  • result – Result class definition for result object made available from within the testcases.

Also inherits all Runnable options.
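
The environment, dependencies and hook options are easiest to see on a concrete Test subclass. Below is a minimal sketch using MultiTest and the TCP drivers commonly shipped with Testplan; the suite, driver names, hook body and import paths are assumptions for illustration.

from testplan.common.utils.context import context
from testplan.testing.multitest import MultiTest, testcase, testsuite
from testplan.testing.multitest.driver.tcp import TCPClient, TCPServer


@testsuite
class TCPSuite:
    @testcase
    def connection_up(self, env, result):
        result.log(
            "server listening on {}:{}".format(env.server.host, env.server.port)
        )


def after_start(env):
    # Hook executed once the environment is up: accept the client connection.
    env.server.accept_connection()


server = TCPServer(name="server")
client = TCPClient(
    name="client",
    host=context("server", "{{host}}"),  # resolved from the server driver
    port=context("server", "{{port}}"),
)

test = MultiTest(
    name="TCPTest",
    suites=[TCPSuite()],
    environment=[server, client],
    dependencies={server: (client,)},  # server starts before the client
    after_start=after_start,
)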

CONFIG

alias of TestConfig

ENVIRONMENT

alias of testplan.testing.environment.base.TestEnvironment

RESULT

alias of TestResult

add_post_main_steps() → None[source]

Runnable steps to run before environment stopped.

add_post_resource_steps() → None[source]

Runnable steps to run after environment stopped.

add_pre_main_steps() → None[source]

Runnable steps to run after environment started.

add_pre_resource_steps() → None[source]

Runnable steps to be executed before environment starts.

add_start_resource_steps() → None[source]

Runnable steps to start environment

add_stop_resource_steps() → None[source]

Runnable steps to stop environment

description
dry_run() → None[source]

Return an empty report skeleton for this test, including the full hierarchy of testsuites, testcases etc. Does not run any tests.

filter_levels = [<FilterLevel.TEST: 'test'>]
get_filter_levels() → List[testplan.testing.filtering.FilterLevel][source]
get_metadata() → testplan.testing.multitest.test_metadata.TestMetadata[source]
get_stdout_style(passed: bool)[source]

Stdout style for status.

get_tags_index() → Union[str, Iterable[str], Dict[KT, VT]][source]

Return the tag index that will be used for filtering. By default this is equal to the native tags for this object.

However subclasses may build larger tag indices by collecting tags from their children for example.

get_test_context()[source]
log_test_results(top_down: bool = True)[source]

Log test results for this test instance (e.g. ProcessRunnerTest or PyTest).

Parameters:top_down – Whether to log test results using a top-down approach (as opposed to bottom-up).
name

Instance name.

propagate_tag_indices() → None[source]

Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.

report

Shortcut for the test report.

reset_context() → None[source]
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*') → None[source]

For a Test to be run interactively, it must implement this method.

It is expected to run tests iteratively and yield a tuple containing a testcase report and the list of parent UIDs required to merge the testcase report into the main report tree.

If it is not possible or very inefficient to run individual testcases in an iterative manner, this method may instead run all the testcases in a batch and then return an iterator for the testcase reports and parent UIDs.

Parameters:
  • testsuite_pattern – Filter pattern for testsuite level.
  • testcase_pattern – Filter pattern for testcase level.
Yield:

generate tuples containing testcase reports and a list of the UIDs required to merge this into the main report tree, starting with the UID of this test.

set_discover_path(path: str) → None[source]

If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.

Parameters:path – The absolute path where the task has been discovered.

should_log_test_result(depth: int, test_obj, style) → Tuple[bool, int][source]

Whether to log test result and if yes, then with what indent.

Returns:

whether to log test results (Suite report, Testcase report, or result of assertions) and the indent that should be kept at start of lines

Raises:
  • ValueError – if met with an unexpected test group category
  • TypeError – if met with an unsupported test object
should_run() → bool[source]

Determines if current object should run.

start_test_resources() → None[source]

Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.

stdout_style

Stdout style input.

stop_test_resources() → None[source]

Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by sub-classes.

test_context
uid() → str[source]

Instance name uid.

class testplan.testing.base.TestConfig(**options)[source]

Bases: testplan.common.entity.base.RunnableConfig

Configuration object for Test.

classmethod get_options()[source]

Runnable specific config options.

class testplan.testing.base.TestResult[source]

Bases: testplan.common.entity.base.RunnableResult

Result object for the Test runnable (the test execution framework base class) and all its subclasses.

Contains a test report object.

testplan.testing.filtering module

Filtering logic for Multitest, Suites and testcase methods (of Suites)

class testplan.testing.filtering.And(*filters)[source]

Bases: testplan.testing.filtering.MetaFilter

Meta filter that returns True if ALL of the child filters return True.

composed_filter(test, suite, case)[source]
operator_str = '&'
class testplan.testing.filtering.BaseFilter[source]

Bases: object

Base class for filters, supports bitwise operators for composing multiple filters.

e.g. (FilterA(…) & FilterB(…)) | ~FilterC(…)

filter(test, suite, case) → bool[source]
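
A short sketch of such a composition, using the Pattern and Tags filters defined later in this module (pattern and tag values are placeholders):

from testplan.testing.filtering import Pattern, Tags

# Run testcases of SuiteA under MTest that carry the color=red tag,
# excluding anything tagged "slow".
composed = Pattern("MTest:SuiteA:*") & Tags({"color": "red"}) & ~Tags("slow")

# The composed filter can then be passed e.g. as test_filter=composed
# to a Test instance or to the test plan.
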
class testplan.testing.filtering.BaseTagFilter(tags)[source]

Bases: testplan.testing.filtering.Filter

Base filter class for tag based filtering.

category = 3
filter_case(case)[source]
filter_suite(suite)[source]
filter_test(test)[source]
get_match_func() → Callable[source]
class testplan.testing.filtering.Filter[source]

Bases: testplan.testing.filtering.BaseFilter

Noop filter class, users can inherit from this to implement their own filters.

Returns True by default for all filtering operations; implicitly checks the test instance's filter_levels declaration to apply the filtering logic.

category = 1
filter(test, suite, case)[source]
filter_case(case) → bool[source]
filter_suite(suite) → bool[source]
filter_test(test) → bool[source]
class testplan.testing.filtering.FilterCategory[source]

Bases: enum.IntEnum

An enumeration.

COMMON = 1
PATTERN = 2
TAG = 3
class testplan.testing.filtering.FilterLevel[source]

Bases: enum.Enum

This enum is used by test classes (e.g. testplan.testing.base.Test) to declare the depth of filtering logic while the filter method is run.

By default only test (e.g. top) level filtering is used.

TEST = 'test'
TESTCASE = 'testcase'
TESTSUITE = 'testsuite'
class testplan.testing.filtering.MetaFilter(*filters)[source]

Bases: testplan.testing.filtering.BaseFilter

Higher level filter that allows composition of other filters.

composed_filter(_test, _suite, _case) → bool[source]
filter(test, suite, case)[source]
operator_str = None
class testplan.testing.filtering.Not(filter_obj)[source]

Bases: testplan.testing.filtering.BaseFilter

Meta filter that returns the inverse of the original filter result.

filter(test, suite, case)[source]
class testplan.testing.filtering.Or(*filters)[source]

Bases: testplan.testing.filtering.MetaFilter

Meta filter that returns True if ANY of the child filters return True.

composed_filter(test, suite, case)[source]
operator_str = '|'
class testplan.testing.filtering.Pattern(pattern, match_uid=False)[source]

Bases: testplan.testing.filtering.Filter

Base class for name based, glob style filtering.

https://docs.python.org/3.4/library/fnmatch.html

Examples:

<Multitest name>:<suite name>:<testcase name>
<Multitest name>::<testcase name>
*:<suite name>:
ALL_MATCH = '*'
MAX_LEVEL = 3
classmethod any(*patterns)[source]

Shortcut for filtering against multiple patterns.

e.g. Pattern.any(<pattern 1>, <pattern 2>…)

category = 2
filter_case(case)[source]
filter_suite(suite)[source]
filter_test(test: Test)[source]
parse_pattern(pattern: str) → List[str][source]
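
A brief sketch of Pattern usage following the forms listed above (names are placeholders):

from testplan.testing.filtering import Pattern

# Match every testcase of SuiteOne under MultitestAlpha.
only_suite_one = Pattern("MultitestAlpha:SuiteOne:*")

# Shortcut for matching any of several patterns.
either = Pattern.any(
    "MultitestAlpha:SuiteOne:*",
    "MultitestBeta::testcase_baz",
)
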
class testplan.testing.filtering.PatternAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: argparse.Action

Parser action for generating Pattern filters. Returns a list of Pattern filter objects.

In:

--patterns foo bar --patterns baz

Out:

[Pattern('foo'), Pattern('bar'), Pattern('baz')]
class testplan.testing.filtering.Tags(tags)[source]

Bases: testplan.testing.filtering.BaseTagFilter

Tag filter that returns True if ANY of the given tags match.

get_match_func()[source]
class testplan.testing.filtering.TagsAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: argparse.Action

Parser action for generating tags (any) filters.

In:

--tags foo bar hello=world --tags baz hello=mars

Out:

[
    Tags({
        'simple': {'foo', 'bar'},
        'hello': {'world'},
    }),
    Tags({
        'simple': {'baz'},
        'hello': {'mars'},
    })
]
filter_class

alias of Tags

class testplan.testing.filtering.TagsAll(tags)[source]

Bases: testplan.testing.filtering.BaseTagFilter

Tag filter that returns True if ALL of the given tags match.

get_match_func()[source]
class testplan.testing.filtering.TagsAllAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: testplan.testing.filtering.TagsAction

Parser action for generating tags (all) filters.

In:

--tags-all foo bar hello=world --tags-all baz hello=mars

Out:

[
    TagsAll({
        'simple': {'foo', 'bar'},
        'hello': {'world'},
    }),
    TagsAll({
        'simple': {'baz'},
        'hello': {'mars'},
    })
]
filter_class

alias of TagsAll

testplan.testing.filtering.flatten_filters(metafilter_kls: Type[MetaFilter], filters: List[Filter]) → List[testplan.testing.filtering.Filter][source]

This is used for flattening nested filters of same type

So when we have something like:

Or(filter-1, filter-2) | Or(filter-3, filter-4)

We end up with:

Or(filter-1, filter-2, filter-3, filter-4)

Instead of:

Or(Or(filter-1, filter-2), Or(filter-3, filter-4))
testplan.testing.filtering.parse_filter_args(parsed_args, arg_names)[source]

Utility function that’s used for grouping filters of the same category together. Will be used while parsing command line arguments for test filters.

Filters that belong to the same category will be grouped under Or whereas filters of different categories will be grouped under And.

In:

--patterns my_pattern --tags foo --tags-all bar baz

Out:

And(
    Pattern('my_pattern'),
    Or(
        Tags({'simple': {'foo'}}),
        TagsAll({'simple': {'bar', 'baz'}}),
    )
)

testplan.testing.listing module

This module contains logic for listing the test context of a plan.

class testplan.testing.listing.BaseLister[source]

Bases: testplan.testing.listing.Listertype

Base of all listers. Implement get_output(), give the lister a name in NAME and a description in DESCRIPTION (or alternatively override name() and/or description()), and it is ready to be added to listing_registry.

get_output(instance)[source]
log_test_info(instance)[source]
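
A minimal sketch of a custom lister following this recipe; the name, description and output format are illustrative, and get_output is assumed to return the text displayed for a test instance:

from testplan.testing.listing import BaseLister, listing_registry


class SuiteCountLister(BaseLister):
    NAME = "SUITE_COUNT"
    DESCRIPTION = "Lists each test instance with its number of suites."

    def get_output(self, instance):
        suites = instance.test_context or []  # (suite, testcases) pairs
        return "{} - {} suite(s)".format(instance.name, len(suites))


# Register so the lister becomes available to the commandline parser.
listing_registry.add_lister(SuiteCountLister())
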
class testplan.testing.listing.CountLister[source]

Bases: testplan.testing.listing.BaseLister

Displays the number of suites and total testcases per test instance.

DESCRIPTION = 'Lists top level instances and total number of suites & testcases per instance.'
NAME = 'COUNT'
get_output(instance)[source]
class testplan.testing.listing.ExpandedNameLister[source]

Bases: testplan.testing.listing.BaseLister

Lists names of the items within the test context:

Sample output:

MultitestAlpha
  SuiteOne
    testcase_foo
    testcase_bar
  SuiteTwo
    testcase_baz
MultitestBeta
DESCRIPTION = 'List tests in readable format.'
NAME = 'NAME_FULL'
format_instance(instance)[source]
format_suite(instance, suite)[source]
format_testcase(instance, suite, testcase)[source]
get_output(instance)[source]
get_testcase_outputs(instance, suite, testcases)[source]
class testplan.testing.listing.ExpandedPatternLister[source]

Bases: testplan.testing.listing.ExpandedNameLister

Lists the items in the test context in a copy-paste friendly format compatible with the --patterns and --tags arguments.

Example:

MultitestAlpha
  MultitestAlpha:SuiteOne  --tags color=red
    MultitestAlpha:SuiteOne:testcase_foo
    MultitestAlpha:SuiteOne:testcase_bar  --tags color=blue
  MultitestAlpha:SuiteTwo
    MultitestAlpha:SuiteTwo:testcase_baz
MultitestBeta
DESCRIPTION = 'List tests in `--patterns` / `--tags` compatible format.'
NAME = 'PATTERN_FULL'
apply_tag_label(pattern, obj)[source]
format_instance(instance)[source]
format_suite(instance, suite)[source]
format_testcase(instance, suite, testcase)[source]
class testplan.testing.listing.Listertype[source]

Bases: object

DESCRIPTION = None
NAME = None
description()[source]
metadata_based = False
name()[source]
class testplan.testing.listing.ListingArgMixin[source]

Bases: testplan.common.utils.parser.ArgMixin

classmethod get_descriptions()[source]

Override this method to return a dictionary with Enums as keys and description strings as values.

This will later on be rendered via the --help command.

classmethod get_parser_context(default=None, **kwargs)[source]

Shortcut method for populating Argparse.parser.add_argument params.

classmethod parse(arg)[source]

Get the enum for given cmdline arg in string form, display all available options when an invalid value is parsed.

class testplan.testing.listing.ListingRegistry[source]

Bases: object

A registry to store listers; add listers to the listing_registry instance, which is used to create the commandline parser.

add_lister(lister)[source]
static get_arg_name(lister)[source]
to_arg()[source]
class testplan.testing.listing.MetadataBasedLister[source]

Bases: testplan.testing.listing.Listertype

Base of all metadata based listers. Implement get_output(), give the lister a name in NAME and a description in DESCRIPTION (or alternatively override name() and/or description()), and it is ready to be added to listing_registry.

get_output(metadata: testplan.testing.multitest.test_metadata.TestPlanMetadata)[source]
log_test_info(metadata: testplan.testing.multitest.test_metadata.TestPlanMetadata)[source]
metadata_based = True
class testplan.testing.listing.NameLister[source]

Bases: testplan.testing.listing.TrimMixin, testplan.testing.listing.ExpandedNameLister

Trimmed version of ExpandedNameLister

DESCRIPTION = 'List tests in readable format.\n\tMax 25 testcases per suite will be displayed'
NAME = 'NAME'
class testplan.testing.listing.PatternLister[source]

Bases: testplan.testing.listing.TrimMixin, testplan.testing.listing.ExpandedPatternLister

Like ExpandedPatternLister, but trims the list of testcases if it exceeds <MAX_TESTCASES>.

This is useful if the user has generated hundreds of testcases via parametrization.

DESCRIPTION = 'List tests in `--patterns` / `--tags` compatible format.\n\tMax 25 testcases per suite will be displayed'
NAME = 'PATTERN'
class testplan.testing.listing.SimpleJsonLister[source]

Bases: testplan.testing.listing.MetadataBasedLister

DESCRIPTION = 'Dump test information in json. Can take json:/path/to/output.json as well, then the result is dumped to the file'
NAME = 'JSON'
get_output(metadata: testplan.testing.multitest.test_metadata.TestPlanMetadata)[source]
class testplan.testing.listing.TrimMixin[source]

Bases: object

DESCRIPTION = '\tMax 25 testcases per suite will be displayed'
get_testcase_outputs(instance, suite, testcases)[source]
testplan.testing.listing.listing_registry = <testplan.testing.listing.ListingRegistry object>

Registry instance that will be used to create the commandline parser; it can be extended with new listers.

class testplan.testing.listing.store_lister_and_path(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]

Bases: argparse.Action

testplan.testing.listing.test_pattern(test_instance: Test) → str[source]

testplan.testing.ordering module

Classes for sorting test context before a test run.

Warning: sort_instances functionality is not supported yet, but the API is available for future compatibility.

class testplan.testing.ordering.AlphanumericSorter(sort_type=<SortType.ALL: 'all'>)[source]

Bases: testplan.testing.ordering.TypedSorter

Sorter that uses basic alphanumeric ordering.

sort_instances(instances)[source]
sort_testcases(testcases, param_groups=None)[source]
sort_testsuites(testsuites)[source]
class testplan.testing.ordering.BaseSorter[source]

Bases: object

Base sorter class

should_sort_instances()[source]
should_sort_testcases()[source]
should_sort_testsuites()[source]
sort_instances(instances)[source]
sort_testcases(testcases, param_groups=None)[source]
sort_testsuites(testsuites)[source]
sorted_instances(instances)[source]
sorted_testcases(testsuite, testcases)[source]
sorted_testsuites(testsuites)[source]
class testplan.testing.ordering.NoopSorter[source]

Bases: testplan.testing.ordering.BaseSorter

Sorter that returns the original ordering.

should_sort_instances()[source]
should_sort_testcases()[source]
should_sort_testsuites()[source]
class testplan.testing.ordering.ShuffleSorter(shuffle_type=<SortType.ALL: 'all'>, seed=None)[source]

Bases: testplan.testing.ordering.TypedSorter

Sorter that shuffles the ordering. It is deterministic in the sense that it will return the same ordering for the same seed and the same list.

randomizer
shuffle(items)[source]
sort_instances(instances)[source]
sort_testcases(testcases, param_groups=None)[source]
sort_testsuites(testsuites)[source]
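
A short usage sketch (values are placeholders; the string form of the shuffle type, mirroring the SortType values, is assumed to be accepted):

from testplan.testing.ordering import ShuffleSorter

# The same seed yields the same shuffled testcase order on every run.
sorter = ShuffleSorter(shuffle_type="testcases", seed=12345)

# Typically passed to a Test instance or test plan as test_sorter=sorter.
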
class testplan.testing.ordering.SortType[source]

Bases: enum.Enum

Helper enum used by sorter classes.

ALL = 'all'
INSTANCES = 'instances'
SUITES = 'suites'
TEST_CASES = 'testcases'
validate = <bound method SortType.validate of <enum 'SortType'>>[source]
class testplan.testing.ordering.TypedSorter(sort_type=<SortType.ALL: 'all'>)[source]

Bases: testplan.testing.ordering.BaseSorter

Base sorter that allows configuration of sort levels via sort_type argument.

check_sort_type(sort_type)[source]
should_sort_instances()[source]
should_sort_testcases()[source]
should_sort_testsuites()[source]

testplan.testing.tagging module

Generic Tagging logic.

testplan.testing.tagging.check_all_matching_tags(tag_arg_dict, target_tag_dict)[source]

Return True if every tag set in tag_arg_dict is a subset of the matching category in target_tag_dict.

testplan.testing.tagging.check_any_matching_tags(tag_arg_dict, target_tag_dict)[source]

Return True if there is at least one match for a category.
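
A hedged sketch of both helpers on hand-written tag dicts (the dict-of-sets form produced by validate_tag_value / parse_tag_arguments; values are placeholders, expected results follow the descriptions above):

from testplan.testing.tagging import (
    check_all_matching_tags,
    check_any_matching_tags,
)

tag_arg = {"simple": {"foo"}, "color": {"red", "blue"}}
target = {"simple": {"foo", "bar"}, "color": {"red"}}

check_any_matching_tags(tag_arg, target)  # True: "foo" matches the target
check_all_matching_tags(tag_arg, target)  # False: {"red", "blue"} is not a subset of {"red"}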

testplan.testing.tagging.merge_tag_dicts(*tag_dicts)[source]

Utility function for merging tag dicts for easy comparisons.

testplan.testing.tagging.parse_tag_arguments(*tag_arguments)[source]

Parse command line tag arguments into a dictionary of sets.

For the call below:

--tags foo bar named-tag=one,two named-tag=three hello=world

We will get:

[
  {'simple': {'foo'}},
  {'simple': {'bar'}},
  {'named_tag': {'one', 'two'}},
  {'named_tag': {'three'}},
  {'hello': {'world'}},
]

The repeated tag values will later on be grouped together via TagsAction.

testplan.testing.tagging.tag_label(tag_dict)[source]

Return tag data in readable format.

>>> tag_dict = {
  'simple': set(['foo', 'bar']),
  'tag_group_1': set(['some-value']),
  'other_group': set(['one', 'two', 'three'])
}
>>> tag_label(tag_dict)
Tags: foo bar tag_group_1=some-value other_group=one,two,three
testplan.testing.tagging.validate_tag_value(tag_value)[source]

Validate a tag value, make sure it is of correct type. Return a tag dict for internal representation.

Sample input / output:

'foo' -> {'simple': {'foo'}}
('foo', 'bar') -> {'simple': {'foo', 'bar'}}
{'color': 'red'} -> {'color': {'red'}}
{'color': ('red', 'blue')} -> {'color': {'red', 'blue'}}

Parameters:tag_value (string, iterable of string or a dict with string keys and string or iterable of string as values.) – User defined tag value.
Returns:Internal representation of the tag context.
Return type:dict of set

testplan.testing.py_test module

PyTest test runner.

class testplan.testing.py_test.PyTest(name, target, description=None, select='', extra_args=None, result=<class 'testplan.testing.result.Result'>, **options)[source]

Bases: testplan.testing.base.Test

PyTest plugin for Testplan. Allows tests written for PyTest to be run from Testplan, with the test results logged and included in the Testplan report.

Parameters:
  • name (str) – Test instance name, often used as uid of test entity.
  • target (str or list of str) – Target of PyTest configuration.
  • description (str) – Description of test instance.
  • select (str) – Selection of PyTest configuration.
  • extra_args (NoneType or list of str) – Extra arguments passed to pytest.
  • result (Result) – Result that contains assertion entries.

Also inherits all Test options.
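
A minimal usage sketch; the target path and extra arguments are placeholders and plan is assumed to be a Testplan instance created via @test_plan.

from testplan.testing.py_test import PyTest

plan.add(
    PyTest(
        name="My PyTest",
        target=["tests/test_example.py"],  # file(s) or directory for pytest
        extra_args=["-k", "smoke"],        # forwarded to the pytest CLI
        description="pytest tests run through Testplan",
    )
)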

CONFIG

alias of PyTestConfig

add_main_batch_steps()[source]

Specify the test steps: run the tests, then log the results.

get_test_context()[source]

Inspect the test suites and cases by running PyTest with the --collect-only flag and passing in our collection plugin.

Returns:List containing pairs of suite name and testcase names.
Return type:List[Tuple[str, List[str]]]
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]

Run all testcases and yield testcase reports.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge step

run_tests()[source]

Run pytest and wait for it to terminate.

setup()[source]

Setup the PyTest plugin for the suite.

class testplan.testing.py_test.PyTestConfig(**options)[source]

Bases: testplan.testing.base.TestConfig

Configuration object for PyTest test runner.

classmethod get_options()[source]

Runnable specific config options.

testplan.testing.pyunit module

PyUnit test runner.

class testplan.testing.pyunit.PyUnit(name, testcases, description=None, **kwargs)[source]

Bases: testplan.testing.base.Test

Test runner for PyUnit unit tests.

Parameters:
  • name (str) – Test instance name, often used as uid of test entity.
  • testcases (TestCase) – PyUnit testcases.
  • description (str) – Description of test instance.

Also inherits all Test options.
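
A minimal usage sketch; the unittest case is a placeholder and plan is assumed to be a Testplan instance created via @test_plan.

import unittest

from testplan.testing.pyunit import PyUnit


class BasicCase(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


plan.add(
    PyUnit(
        name="My PyUnit",
        testcases=[BasicCase],
        description="unittest cases run through Testplan",
    )
)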

CONFIG

alias of PyUnitConfig

add_main_batch_steps()[source]

Specify the test steps: run the tests, then log the results.

get_test_context()[source]

Currently we do not inspect individual PyUnit testcases - only allow the whole suite to be run.

run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]

Run all testcases and yield testcase reports.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge step

run_tests()[source]

Run PyUnit and wait for it to terminate.

class testplan.testing.pyunit.PyUnitConfig(**options)[source]

Bases: testplan.testing.base.TestConfig

Configuration object for the PyUnit test runner.

classmethod get_options()[source]

Runnable specific config options.

testplan.testing.junit module

JUnit test runner.

class testplan.testing.junit.JUnit(name, binary, results_dir, junit_args=None, junit_filter=None, **options)[source]

Bases: testplan.testing.base.ProcessRunnerTest

Subprocess test runner for JUnit: https://junit.org/junit5/docs/current/user-guide/

Please note that the test (either a native binary or a script) should generate an XML format report so that Testplan is able to parse the result.

gradle test
Parameters:
  • name (str) – Test instance name, often used as uid of test entity.
  • binary (str) – Path to the gradle binary or script.
  • description (str) – Description of test instance.
  • junit_args (NoneType or list) – Customized command line arguments for the JUnit test.
  • results_dir (str) – Directory where the test XML reports are saved.
  • junit_filter (NoneType or list) – Customized command line arguments for filtering testcases.

Also inherits all ProcessRunnerTest options.
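
A minimal usage sketch; the gradle wrapper path, arguments and results directory are placeholders and plan is assumed to be a Testplan instance created via @test_plan.

from testplan.testing.junit import JUnit

plan.add(
    JUnit(
        name="My JUnit",
        binary="./gradlew",                     # placeholder gradle wrapper
        junit_args=["test"],                    # e.g. the gradle task to run
        results_dir="build/test-results/test",  # where the XML reports are written
    )
)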

CONFIG

alias of JUnitConfig

list_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base list command with additional filtering to list a specific set of testcases.

process_test_data(test_data)[source]

Convert JUnit output into a list of report entries.

Parameters:test_data (list) – JUnit test output.
Returns:list of sub reports.
Return type:list of (TestGroupReport or TestCaseReport)
read_test_data()[source]

Read JUnit xml report.

Returns:JUnit test output.
Return type:list
test_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base test command with additional filtering to run a specific set of testcases.

class testplan.testing.junit.JUnitConfig(**options)[source]

Bases: testplan.testing.base.ProcessRunnerTestConfig

Configuration object for the JUnit test runner.

classmethod get_options()[source]

Runnable specific config options.

Module contents