testplan.testing package¶
Subpackages¶
- testplan.testing.multitest package
  - Subpackages
    - testplan.testing.multitest.driver package
    - testplan.testing.multitest.entries package
  - Submodules
  - Module contents
- testplan.testing.cpp package
- testplan.testing.bdd package
Submodules¶
testplan.testing.base module¶
Base classes for all Tests
-
class
testplan.testing.base.
ProcessRunnerTest
(**options)[source]¶ Bases:
testplan.testing.base.Test
A test runner that runs the tests in a separate subprocess. This is useful for running 3rd party testing frameworks (e.g. JUnit, GTest).
The test report will be populated by parsing the generated report output file (report.xml by default).
Parameters: - name – Test instance name, often used as uid of test entity.
- binary – Path to the application binary or script.
- description – Description of test instance.
- proc_env – Environment overrides for
subprocess.Popen
; context values (when referring to other drivers) and jinja2 templates (when referring to self) will be resolved. - proc_cwd – Directory override for
subprocess.Popen
. - timeout –
Optional timeout for the subprocess. If a process runs longer than this limit, it will be killed and test will be marked as
ERROR
. String representations can be used as well as durations in seconds (e.g. 10, 2.3, '1m 30s', '1h 15m').
- ignore_exit_codes – When the test process exits with nonzero status
code, the test will be marked as
ERROR
. This can be disabled by providing a list of exit codes to ignore. - pre_args – List of arguments to be prepended before the arguments of the test runnable.
- post_args – List of arguments to be appended after the arguments of the test runnable.
Also inherits all
Test
options.-
CONFIG
¶ alias of
ProcessRunnerTestConfig
-
apply_xfail_tests
() → None[source]¶ Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…).
-
get_proc_env
() → Dict[KT, VT][source]¶ Fabricate the environment variables for the subprocess. Precedence: user-specified > hardcoded > system env.
-
get_process_check_report
(retcode: int, stdout: str, stderr: str) → testplan.report.testing.base.TestGroupReport[source]¶ When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions. Logs of stdout & stderr can be saved as attachments.
-
get_test_context
(list_cmd=None)[source]¶ Run the shell command generated by list_command in a subprocess, parse and return the stdout generated via parse_test_context.
Parameters: list_cmd (str) – Command to list all test suites and testcases. Returns: Result returned by parse_test_context. Return type: list of list
-
list_command
() → Optional[List[str]][source]¶ List custom arguments before and after the executable if they are defined. Returns: List of commands to run before and after the test process, as well as the test executable itself.
-
list_command_filter
(testsuite_pattern: str, testcase_pattern: str)[source]¶ Return the base list command with additional filtering to list a specific set of testcases. To be implemented by concrete subclasses.
-
parse_test_context
(test_list_output: bytes) → List[List[T]][source]¶ Override this to generate a nested list of test suite and test case context. Only required if list_command is overridden to return a command.
The result will later on be used by test listers to generate the test context output for this test instance.
Sample output:
[
    ['SuiteAlpha', ['testcase_one', 'testcase_two']],
    ['SuiteBeta', ['testcase_one', 'testcase_two']],
]
Parameters: test_list_output – stdout from the list command Returns: Parsed test context from command line output of the 3rd party testing library.
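The override contract can be illustrated with plain Python. The sketch below assumes a hypothetical 3rd party tool whose list command prints one `SuiteName.testcase_name` per line; a real subclass must match its own tool's output format.

```python
from typing import Dict, List


def parse_test_context(test_list_output: bytes) -> List[list]:
    """Parse `Suite.testcase` lines into [[suite, [testcases]], ...].

    Hypothetical list-command output format, for illustration only.
    """
    suites: Dict[str, list] = {}
    for line in test_list_output.decode().splitlines():
        line = line.strip()
        if not line or "." not in line:
            continue  # skip blank or malformed lines
        suite, _, testcase = line.partition(".")
        suites.setdefault(suite, []).append(testcase)
    # Nested [suite, [testcases]] pairs, the shape shown in the sample output.
    return [[suite, cases] for suite, cases in suites.items()]


output = b"SuiteAlpha.testcase_one\nSuiteAlpha.testcase_two\nSuiteBeta.testcase_one\n"
print(parse_test_context(output))
# [['SuiteAlpha', ['testcase_one', 'testcase_two']], ['SuiteBeta', ['testcase_one']]]
```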
-
process_test_data
(test_data)[source]¶ Process raw test data that was collected and return a list of entries (e.g. TestGroupReport, TestCaseReport) that will be appended to the current test instance’s report as children.
Parameters: test_data (xml.etree.Element) – Root node of parsed raw test data. Returns: List of sub reports. Return type: list of TestGroupReport / TestCaseReport
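Since test_data arrives as a parsed xml.etree root node (typically from a JUnit-style report.xml), the processing step boils down to walking that tree. The stdlib sketch below illustrates the traversal only; a real implementation would build TestGroupReport / TestCaseReport objects instead of tuples.

```python
import xml.etree.ElementTree as ET

# Minimal JUnit-style report, as a 3rd party tool might emit it.
SAMPLE_REPORT = """\
<testsuites>
  <testsuite name="SuiteAlpha">
    <testcase name="testcase_one"/>
    <testcase name="testcase_two">
      <failure message="assertion failed"/>
    </testcase>
  </testsuite>
</testsuites>
"""


def walk_report(root: ET.Element):
    """Yield (suite, testcase, passed) triples from a JUnit-style tree."""
    for suite in root.iter("testsuite"):
        for case in suite.iter("testcase"):
            passed = case.find("failure") is None and case.find("error") is None
            yield suite.get("name"), case.get("name"), passed


root = ET.fromstring(SAMPLE_REPORT)
for entry in walk_report(root):
    print(entry)
# ('SuiteAlpha', 'testcase_one', True)
# ('SuiteAlpha', 'testcase_two', False)
```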
-
read_test_data
()[source]¶ Parse output generated by the 3rd party testing tool; the parsed content will then be handled by process_test_data. You should override this function with custom logic to parse the contents of the generated file.
-
report_path
¶
-
resolved_bin
¶
-
run_testcases_iter
(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]¶ Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.
For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases; however, it means that testcase reports will not be generated until all testcases have finished running.
Parameters: - testsuite_pattern – pattern to match for testsuite names
- testcase_pattern – pattern to match for testcase names
- shallow_report – shallow report entry
Returns: generator yielding testcase reports and UIDs for merge step
-
run_tests
() → None[source]¶ Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.
Raises: ValueError – upon invalid test command
-
stderr
¶
-
stdout
¶
-
test_command
() → List[str][source]¶ Add custom arguments before and after the executable if they are defined. Returns: List of commands to run before and after the test process, as well as the test executable itself.
-
test_command_filter
(testsuite_pattern: str, testcase_pattern: str)[source]¶ Return the base test command with additional filtering to run a specific set of testcases. To be implemented by concrete subclasses.
-
timeout_callback
()[source]¶ Callback function that will be called by the daemon thread if a timeout occurs (e.g. process runs longer than specified timeout value).
Raises: RuntimeError –
-
timeout_log
¶
-
class
testplan.testing.base.
ProcessRunnerTestConfig
(**options)[source]¶ Bases:
testplan.testing.base.TestConfig
Configuration object for
ProcessRunnerTest
.
-
class
testplan.testing.base.
ResourceHooks
[source]¶ Bases:
enum.Enum
An enumeration.
-
after_start
= 'After Start'¶
-
after_stop
= 'After Stop'¶
-
before_start
= 'Before Start'¶
-
before_stop
= 'Before Stop'¶
-
-
class
testplan.testing.base.
Test
(name: str, description: str = None, environment: Union[list, Callable] = None, dependencies: Union[dict, Callable] = None, initial_context: Union[dict, Callable] = None, before_start: callable = None, after_start: callable = None, before_stop: callable = None, after_stop: callable = None, error_handler: callable = None, test_filter: testplan.testing.filtering.BaseFilter = None, test_sorter: testplan.testing.ordering.BaseSorter = None, stdout_style: testplan.report.testing.styles.Style = None, tags: Union[str, Iterable[str]] = None, result: Type[testplan.testing.result.Result] = <class 'testplan.testing.result.Result'>, **options)[source]¶ Bases:
testplan.common.entity.base.Runnable
Base test instance class. Any runnable that runs a test can inherit from this class and override certain methods to customize functionality.
Parameters: - name – Test instance name, often used as uid of test entity.
- description – Description of test instance.
- environment – List of
drivers
to be started and made available on tests execution. Can also take a callable that returns the list of drivers. - dependencies – driver start-up dependencies as a directed graph, e.g. {server1: (client1, client2)} indicates server1 shall start before client1 and client2. Can also take a callable that returns a dict.
- initial_context – key: value pairs that will be made available as context for drivers in environment. Can also take a callable that returns a dict.
- test_filter – Class with test filtering logic.
- test_sorter – Class with tests sorting logic.
- before_start – Callable to execute before starting the environment.
- after_start – Callable to execute after starting the environment.
- before_stop – Callable to execute before stopping the environment.
- after_stop – Callable to execute after stopping the environment.
- error_handler – Callable to execute when a step hits an exception.
- stdout_style – Console output style.
- tags – User defined tag value.
- result – Result class definition for result object made available from within the testcases.
Also inherits all
Runnable
options.-
CONFIG
¶ alias of
TestConfig
-
ENVIRONMENT
¶ alias of
testplan.testing.environment.base.TestEnvironment
-
RESULT
¶ alias of
TestResult
-
description
¶
-
dry_run
() → None[source]¶ Return an empty report skeleton for this test including all testsuites, testcases etc. hierarchy. Does not run any tests.
-
filter_levels
= [<FilterLevel.TEST: 'test'>]¶
Return the tag index that will be used for filtering. By default this is equal to the native tags for this object.
However subclasses may build larger tag indices by collecting tags from their children for example.
-
log_test_results
(top_down: bool = True)[source]¶ Log test results. i.e. ProcessRunnerTest or PyTest.
Parameters: top_down – Whether to log test results using a top-down approach or a bottom-up approach.
-
name
¶ Instance name.
-
propagate_tag_indices
() → None[source]¶ Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.
-
report
¶ Shortcut for the test report.
-
run_testcases_iter
(testsuite_pattern: str = '*', testcase_pattern: str = '*') → None[source]¶ For a Test to be run interactively, it must implement this method.
It is expected to run tests iteratively and yield a tuple containing a testcase report and the list of parent UIDs required to merge the testcase report into the main report tree.
If it is not possible or very inefficient to run individual testcases in an iterative manner, this method may instead run all the testcases in a batch and then return an iterator for the testcase reports and parent UIDs.
Parameters: - testsuite_pattern – Filter pattern for testsuite level.
- testcase_pattern – Filter pattern for testcase level.
Yield: generate tuples containing testcase reports and a list of the UIDs required to merge this into the main report tree, starting with the UID of this test.
-
set_discover_path
(path: str) → None[source]¶ If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly. Parameters: path – the absolute path where the task has been discovered.
-
should_log_test_result
(depth: int, test_obj, style) → Tuple[bool, int][source]¶ Whether to log test result and if yes, then with what indent.
Returns: whether to log test results (Suite report, Testcase report, or result of assertions) and the indent that should be kept at start of lines
Raises: - ValueError – if met with an unexpected test group category
- TypeError – if met with an unsupported test object
-
start_test_resources
() → None[source]¶ Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in sub-classes to run additional setup pre- and post-environment start.
-
stdout_style
¶ Stdout style input.
-
stop_test_resources
() → None[source]¶ Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by sub-classes.
-
test_context
¶
-
class
testplan.testing.base.
TestConfig
(**options)[source]¶ Bases:
testplan.common.entity.base.RunnableConfig
Configuration object for
Test
.
-
class
testplan.testing.base.
TestResult
[source]¶ Bases:
testplan.common.entity.base.RunnableResult
Result object for
Test
runnable test execution framework base class and all sub-classes. Contains a test
report
object.
testplan.testing.filtering module¶
Filtering logic for Multitest, Suites and testcase methods (of Suites)
-
class
testplan.testing.filtering.
And
(*filters)[source]¶ Bases:
testplan.testing.filtering.MetaFilter
Meta filter that returns True if ALL of the child filters return True.
-
operator_str
= '&'¶
-
-
class
testplan.testing.filtering.
BaseFilter
[source]¶ Bases:
object
Base class for filters, supports bitwise operators for composing multiple filters.
e.g. (FilterA(…) & FilterB(…)) | ~FilterC(…)
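The composition mechanism can be sketched in a few lines of plain Python. These classes are minimal stand-ins mirroring the documented And / Or / Not semantics, not Testplan's real implementation; NameStartsWith is a hypothetical stand-in for concrete filters such as Pattern or Tags.

```python
class BaseFilter:
    """Supports &, | and ~ for composing filters, as documented."""
    def filter(self, obj) -> bool:
        return True
    def __and__(self, other):
        return And(self, other)
    def __or__(self, other):
        return Or(self, other)
    def __invert__(self):
        return Not(self)


class And(BaseFilter):
    def __init__(self, *filters):
        self.filters = filters
    def filter(self, obj):
        return all(f.filter(obj) for f in self.filters)  # ALL children pass


class Or(BaseFilter):
    def __init__(self, *filters):
        self.filters = filters
    def filter(self, obj):
        return any(f.filter(obj) for f in self.filters)  # ANY child passes


class Not(BaseFilter):
    def __init__(self, filter_obj):
        self.filter_obj = filter_obj
    def filter(self, obj):
        return not self.filter_obj.filter(obj)  # inverse of the original


class NameStartsWith(BaseFilter):  # hypothetical stand-in for Pattern/Tags
    def __init__(self, prefix):
        self.prefix = prefix
    def filter(self, obj):
        return obj.startswith(self.prefix)


composed = (NameStartsWith("Suite") & ~NameStartsWith("SuiteBeta")) | NameStartsWith("Smoke")
print(composed.filter("SuiteAlpha"))  # True
print(composed.filter("SuiteBeta"))   # False
```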
-
class
testplan.testing.filtering.
BaseTagFilter
(tags)[source]¶ Bases:
testplan.testing.filtering.Filter
Base filter class for tag based filtering.
-
category
= 3¶
-
-
class
testplan.testing.filtering.
Filter
[source]¶ Bases:
testplan.testing.filtering.BaseFilter
Noop filter class, users can inherit from this to implement their own filters.
Returns True by default for all filtering operations, implicitly checks for test instances
filter_levels
declaration to apply the filtering logic.-
category
= 1¶
-
-
class
testplan.testing.filtering.
FilterCategory
[source]¶ Bases:
enum.IntEnum
An enumeration.
-
COMMON
= 1¶
-
PATTERN
= 2¶
-
TAG
= 3¶
-
-
class
testplan.testing.filtering.
FilterLevel
[source]¶ Bases:
enum.Enum
This enum is used by test classes (e.g. testplan.testing.base.Test) to declare the depth of filtering logic while
filter
method is run. By default only
test
(e.g. top) level filtering is used.-
TEST
= 'test'¶
-
TESTCASE
= 'testcase'¶
-
TESTSUITE
= 'testsuite'¶
-
-
class
testplan.testing.filtering.
MetaFilter
(*filters)[source]¶ Bases:
testplan.testing.filtering.BaseFilter
Higher level filter that allow composition of other filters.
-
operator_str
= None¶
-
-
class
testplan.testing.filtering.
Not
(filter_obj)[source]¶ Bases:
testplan.testing.filtering.BaseFilter
Meta filter that returns the inverse of the original filter result.
-
class
testplan.testing.filtering.
Or
(*filters)[source]¶ Bases:
testplan.testing.filtering.MetaFilter
Meta filter that returns True if ANY of the child filters return True.
-
operator_str
= '|'¶
-
-
class
testplan.testing.filtering.
Pattern
(pattern, match_uid=False)[source]¶ Bases:
testplan.testing.filtering.Filter
Base class for name based, glob style filtering.
https://docs.python.org/3.4/library/fnmatch.html
Examples:
<Multitest name>:<suite name>:<testcase name>
<Multitest name>::<testcase name>
*:<suite name>
-
ALL_MATCH
= '*'¶
-
MAX_LEVEL
= 3¶
-
classmethod
any
(*patterns)[source]¶ Shortcut for filtering against multiple patterns.
e.g. Pattern.any(<pattern 1>, <pattern 2>…)
-
category
= 2¶
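Pattern matching follows Python's fnmatch glob rules (linked above), applied per colon-separated level up to MAX_LEVEL = 3. A small stdlib demonstration of the matching idea (an illustrative sketch, not Testplan's actual implementation):

```python
from fnmatch import fnmatch


def matches(pattern: str, test_path: str) -> bool:
    """Glob-match each colon-separated level of 'MTest:Suite:testcase'.

    Missing pattern levels default to '*', mirroring ALL_MATCH, so a bare
    multitest name matches all of its suites and testcases.
    """
    pattern_levels = pattern.split(":")
    path_levels = test_path.split(":")
    # Pad the pattern so 'MTest' also matches 'MTest:SuiteOne:testcase_foo'.
    pattern_levels += ["*"] * (len(path_levels) - len(pattern_levels))
    return all(fnmatch(value, pat) for value, pat in zip(path_levels, pattern_levels))


print(matches("MTest:SuiteOne:testcase_*", "MTest:SuiteOne:testcase_foo"))  # True
print(matches("*:SuiteTwo", "MTest:SuiteOne:testcase_foo"))                 # False
```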
-
-
class
testplan.testing.filtering.
PatternAction
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶ Bases:
argparse.Action
Parser action for generating Pattern filters. Returns a list of Pattern filter objects.
In:
--patterns foo bar --patterns baz
Out:
[Pattern('foo'), Pattern('bar'), Pattern('baz')]
-
class
testplan.testing.filtering.
Tags
(tags)[source]¶ Bases:
testplan.testing.filtering.BaseTagFilter
Tag filter that returns True if ANY of the given tags match.
-
class
testplan.testing.filtering.
TagsAction
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶ Bases:
argparse.Action
Parser action for generating tags (any) filters.
In:
--tags foo bar hello=world --tags baz hello=mars
Out:
[
    Tags({'simple': {'foo', 'bar'}, 'hello': {'world'}}),
    Tags({'simple': {'baz'}, 'hello': {'mars'}}),
]
-
class
testplan.testing.filtering.
TagsAll
(tags)[source]¶ Bases:
testplan.testing.filtering.BaseTagFilter
Tag filter that returns True if ALL of the given tags match.
-
class
testplan.testing.filtering.
TagsAllAction
(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶ Bases:
testplan.testing.filtering.TagsAction
Parser action for generating tags (all) filters.
In:
--tags-all foo bar hello=world --tags-all baz hello=mars
Out:
[
    TagsAll({'simple': {'foo', 'bar'}, 'hello': {'world'}}),
    TagsAll({'simple': {'baz'}, 'hello': {'mars'}}),
]
-
testplan.testing.filtering.
flatten_filters
(metafilter_kls: Type[MetaFilter], filters: List[Filter]) → List[testplan.testing.filtering.Filter][source]¶ This is used for flattening nested filters of same type
So when we have something like:
Or(filter-1, filter-2) | Or(filter-3, filter-4)
We end up with:
Or(filter-1, filter-2, filter-3, filter-4)
Instead of:
Or(Or(filter-1, filter-2), Or(filter-3, filter-4))
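The flattening above can be sketched with plain Python over a tiny stand-in class (not Testplan's implementation):

```python
from typing import List, Type


class MetaFilter:
    """Minimal stand-in for a composable meta filter."""
    def __init__(self, *filters):
        self.filters = list(filters)


class Or(MetaFilter):
    pass


def flatten_filters(metafilter_kls: Type[MetaFilter], filters: List) -> List:
    """Recursively splice children of nested meta filters of the same type."""
    result = []
    for f in filters:
        if isinstance(f, metafilter_kls):
            # Same meta filter type: hoist its children up one level.
            result.extend(flatten_filters(metafilter_kls, f.filters))
        else:
            result.append(f)
    return result


nested = [Or("filter-1", "filter-2"), Or("filter-3", "filter-4")]
print(flatten_filters(Or, nested))
# ['filter-1', 'filter-2', 'filter-3', 'filter-4']
```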
-
testplan.testing.filtering.
parse_filter_args
(parsed_args, arg_names)[source]¶ Utility function that’s used for grouping filters of the same category together. Will be used while parsing command line arguments for test filters.
Filters that belong to the same category will be grouped under Or whereas filters of different categories will be grouped under And.
In:
--patterns my_pattern --tags foo --tags-all bar baz
Out:
And(
    Pattern('my_pattern'),
    Or(
        Tags({'simple': {'foo'}}),
        TagsAll({'simple': {'bar', 'baz'}}),
    ),
)
testplan.testing.listing module¶
This module contains logic for listing / representing the test context of a plan.
-
class
testplan.testing.listing.
BaseLister
[source]¶ Bases:
testplan.testing.listing.Listertype
Base of all listers: implement get_output(), give the lister a name in NAME and a description in DESCRIPTION (or alternatively override name() and/or description()), and it is ready to be added to listing_registry.
-
class
testplan.testing.listing.
CountLister
[source]¶ Bases:
testplan.testing.listing.BaseLister
Displays the number of suites and total testcases per test instance.
-
DESCRIPTION
= 'Lists top level instances and total number of suites & testcases per instance.'¶
-
NAME
= 'COUNT'¶
-
-
class
testplan.testing.listing.
ExpandedNameLister
[source]¶ Bases:
testplan.testing.listing.BaseLister
Lists names of the items within the test context:
Sample output:
- MultitestAlpha
  - SuiteOne
    - testcase_foo
    - testcase_bar
  - SuiteTwo
    - testcase_baz
- MultitestBeta
- …
-
DESCRIPTION
= 'List tests in readable format.'¶
-
NAME
= 'NAME_FULL'¶
-
class
testplan.testing.listing.
ExpandedPatternLister
[source]¶ Bases:
testplan.testing.listing.ExpandedNameLister
Lists the items in test context in a copy-pasta friendly format compatible with --patterns and --tags arguments.
Example:
- MultitestAlpha
  - MultitestAlpha:SuiteOne --tags color=red
    - MultitestAlpha:SuiteOne:testcase_foo
    - MultitestAlpha:SuiteOne:testcase_bar --tags color=blue
  - MultitestAlpha:SuiteTwo
    - MultitestAlpha:SuiteTwo:testcase_baz
- MultitestBeta
- …
-
DESCRIPTION
= 'List tests in `--patterns` / `--tags` compatible format.'¶
-
NAME
= 'PATTERN_FULL'¶
-
class
testplan.testing.listing.
Listertype
[source]¶ Bases:
object
-
DESCRIPTION
= None¶
-
NAME
= None¶
-
metadata_based
= False¶
-
-
class
testplan.testing.listing.
ListingArgMixin
[source]¶ Bases:
testplan.common.utils.parser.ArgMixin
-
classmethod
get_descriptions
()[source]¶ Override this method to return a dictionary with Enums as keys and description strings as values.
This will later on be rendered via the --help command.
-
class
testplan.testing.listing.
ListingRegistry
[source]¶ Bases:
object
A registry to store listers, add listers to the
listing_registry
instance which is used to create the commandline parser.
-
class
testplan.testing.listing.
MetadataBasedLister
[source]¶ Bases:
testplan.testing.listing.Listertype
Base of all metadata based listers: implement get_output(), give the lister a name in NAME and a description in DESCRIPTION (or alternatively override name() and/or description()), and it is ready to be added to listing_registry.-
metadata_based
= True¶
-
-
class
testplan.testing.listing.
NameLister
[source]¶ Bases:
testplan.testing.listing.TrimMixin
,testplan.testing.listing.ExpandedNameLister
Trimmed version of ExpandedNameLister
-
DESCRIPTION
= 'List tests in readable format.\n\tMax 25 testcases per suite will be displayed'¶
-
NAME
= 'NAME'¶
-
-
class
testplan.testing.listing.
PatternLister
[source]¶ Bases:
testplan.testing.listing.TrimMixin
,testplan.testing.listing.ExpandedPatternLister
Like test lister, but trims list of testcases if they exceed <MAX_TESTCASES>.
This is useful if the user has generated hundreds of testcases via parametrization.
-
DESCRIPTION
= 'List tests in `--patterns` / `--tags` compatible format.\n\tMax 25 testcases per suite will be displayed'¶
-
NAME
= 'PATTERN'¶
-
-
class
testplan.testing.listing.
SimpleJsonLister
[source]¶ Bases:
testplan.testing.listing.MetadataBasedLister
-
DESCRIPTION
= 'Dump test information in json. Can take json:/path/to/output.json as well, then the result is dumped to the file'¶
-
NAME
= 'JSON'¶
-
-
class
testplan.testing.listing.
TrimMixin
[source]¶ Bases:
object
-
DESCRIPTION
= '\tMax 25 testcases per suite will be displayed'¶
-
-
testplan.testing.listing.
listing_registry
= <testplan.testing.listing.ListingRegistry object>¶ Registry instance that will be used to create the commandline parser, this can be extended with new listers
testplan.testing.ordering module¶
Classes for sorting test context before a test run.
Warning: sort_instances functionality is not supported yet, but the API is available for future compatibility.
-
class
testplan.testing.ordering.
AlphanumericSorter
(sort_type=<SortType.ALL: 'all'>)[source]¶ Bases:
testplan.testing.ordering.TypedSorter
Sorter that uses basic alphanumeric ordering.
-
class
testplan.testing.ordering.
NoopSorter
[source]¶ Bases:
testplan.testing.ordering.BaseSorter
Sorter that returns the original ordering.
-
class
testplan.testing.ordering.
ShuffleSorter
(shuffle_type=<SortType.ALL: 'all'>, seed=None)[source]¶ Bases:
testplan.testing.ordering.TypedSorter
Sorter that shuffles the ordering. It is deterministic: it will return the same ordering for the same seed and the same list.
-
randomizer
¶
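The seed-determinism property can be demonstrated with the stdlib alone; the randomizer above is built around the same idea of a seeded pseudo-random generator (an illustrative sketch, not Testplan's actual implementation):

```python
import random


def shuffled(items, seed):
    """Return a shuffled copy; the same seed always yields the same order."""
    rng = random.Random(seed)  # independent, reproducible generator
    copy = list(items)
    rng.shuffle(copy)
    return copy


testcases = ["testcase_one", "testcase_two", "testcase_three", "testcase_four"]
first = shuffled(testcases, seed=7)
second = shuffled(testcases, seed=7)
print(first == second)                      # True: same seed, same ordering
print(sorted(first) == sorted(testcases))   # True: same elements, new order
```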
-
-
class
testplan.testing.ordering.
SortType
[source]¶ Bases:
enum.Enum
Helper enum used by sorter classes.
-
ALL
= 'all'¶
-
INSTANCES
= 'instances'¶
-
SUITES
= 'suites'¶
-
TEST_CASES
= 'testcases'¶
-
-
class
testplan.testing.ordering.
TypedSorter
(sort_type=<SortType.ALL: 'all'>)[source]¶ Bases:
testplan.testing.ordering.BaseSorter
Base sorter that allows configuration of sort levels via sort_type argument.
testplan.testing.tagging module¶
Generic Tagging logic.
Return True if all tag sets in tag_arg_dict is a subset of the matching categories in target_tag_dict.
Return true if there is at least one match for a category.
-
testplan.testing.tagging.
merge_tag_dicts
(*tag_dicts)[source]¶ Utility function for merging tag dicts for easy comparisons.
-
testplan.testing.tagging.
parse_tag_arguments
(*tag_arguments)[source]¶ Parse command line tag arguments into a dictionary of sets.
For the call below:
--tags foo bar named-tag=one,two named-tag=three hello=world
We will get:
[
    {'simple': {'foo'}},
    {'simple': {'bar'}},
    {'named_tag': {'one', 'two'}},
    {'named_tag': {'three'}},
    {'hello': {'world'}},
]
The repeated tag values will later on be grouped together via TagsAction.
-
testplan.testing.tagging.
tag_label
(tag_dict)[source]¶ Return tag data in readable format.
>>> tag_dict = {'simple': set(['foo', 'bar']), 'tag_group_1': set(['some-value']), 'other_group': set(['one', 'two', 'three'])}
>>> tag_label(tag_dict)
Tags: foo bar tag_group_1=some-value other_group=one,two,three
-
testplan.testing.tagging.
validate_tag_value
(tag_value)[source]¶ Validate a tag value, make sure it is of correct type. Return a tag dict for internal representation.
Sample input / output:
'foo' -> {'simple': {'foo'}}
('foo', 'bar') -> {'simple': {'foo', 'bar'}}
{'color': 'red'} -> {'color': {'red'}}
{'color': ('red', 'blue')} -> {'color': {'red', 'blue'}}
Parameters: tag_value (string, iterable of string, or a dict with string keys and string or iterable of string values) – User defined tag value. Returns: Internal representation of the tag context. Return type: dict of set
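The documented sample input/output mapping can be reproduced with a few lines of plain Python. This is a sketch of the normalization rules only, not Testplan's actual validation code (which also type-checks the values):

```python
def normalize_tag_value(tag_value):
    """Normalize user tag input into the {category: set_of_values} shape."""
    if isinstance(tag_value, str):
        # Bare string: a 'simple' (unnamed-category) tag.
        return {"simple": {tag_value}}
    if isinstance(tag_value, dict):
        # Named categories: wrap single strings, expand iterables into sets.
        return {
            key: {val} if isinstance(val, str) else set(val)
            for key, val in tag_value.items()
        }
    # Iterable of strings: multiple 'simple' tags.
    return {"simple": set(tag_value)}


print(normalize_tag_value("foo"))  # {'simple': {'foo'}}
print(normalize_tag_value(("foo", "bar")))             # simple tags: foo, bar
print(normalize_tag_value({"color": ("red", "blue")})) # color tags: red, blue
```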
testplan.testing.py_test module¶
PyTest test runner.
-
class
testplan.testing.py_test.
PyTest
(name, target, description=None, select='', extra_args=None, result=<class 'testplan.testing.result.Result'>, **options)[source]¶ Bases:
testplan.testing.base.Test
PyTest plugin for Testplan. Allows tests written for PyTest to be run from Testplan, with the test results logged and included in the Testplan report.
Parameters: - name (
str
) – Test instance name, often used as uid of test entity. - target (
str
orlist
ofstr
) – Target of PyTest configuration. - description (
str
) – Description of test instance. - select (
str
) – Selection of PyTest configuration. - extra_args (
NoneType
orlist
ofstr
) – Extra arguments passed to pytest. - result (
Result
) – Result that contains assertion entries.
Also inherits all
Test
options.-
CONFIG
¶ alias of
PyTestConfig
-
get_test_context
()[source]¶ Inspect the test suites and cases by running PyTest with the –collect-only flag and passing in our collection plugin.
Returns: List containing pairs of suite name and testcase names. Return type: List[Tuple[str, List[str]]]
-
run_testcases_iter
(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]¶ Run all testcases and yield testcase reports.
Parameters: - testsuite_pattern – pattern to match for testsuite names
- testcase_pattern – pattern to match for testcase names
- shallow_report – shallow report entry
Returns: generator yielding testcase reports and UIDs for merge step
- name (
testplan.testing.pyunit module¶
PyUnit test runner.
-
class
testplan.testing.pyunit.
PyUnit
(name, testcases, description=None, **kwargs)[source]¶ Bases:
testplan.testing.base.Test
Test runner for PyUnit unit tests.
Parameters: - name (
str
) – Test instance name, often used as uid of test entity. - testcases (
TestCase
) – PyUnit testcases. - description (
str
) – Description of test instance.
Also inherits all
Test
options.-
CONFIG
¶ alias of
PyUnitConfig
-
get_test_context
()[source]¶ Currently we do not inspect individual PyUnit testcases - only allow the whole suite to be run.
-
run_testcases_iter
(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]¶ Run all testcases and yield testcase reports.
Parameters: - testsuite_pattern – pattern to match for testsuite names
- testcase_pattern – pattern to match for testcase names
- shallow_report – shallow report entry
Returns: generator yielding testcase reports and UIDs for merge step
- name (
-
class
testplan.testing.pyunit.
PyUnitConfig
(**options)[source]¶ Bases:
testplan.testing.base.TestConfig
Configuration object for :py:class`~testplan.testing.pyunit.PyUnit` test runner.
testplan.testing.junit module¶
JUnit test runner.
-
class
testplan.testing.junit.
JUnit
(name, binary, results_dir, junit_args=None, junit_filter=None, **options)[source]¶ Bases:
testplan.testing.base.ProcessRunnerTest
Subprocess test runner for JUnit: https://junit.org/junit5/docs/current/user-guide/
Please note that the test (either a native binary or a script, e.g. one that invokes gradle test) should generate an XML format report so that Testplan is able to parse the result.
Parameters: - name (
str
) – Test instance name, often used as uid of test entity. - binary (
str
) – Path to the gradle binary or script. - description (
str
) – Description of test instance. - junit_args (
NoneType
orlist
) – Customized command line arguments for Junit test - results_dir (
str
) – Where the test XML report is saved. - junit_filter (
NoneType
orlist
) – Customized command line arguments for filtering testcases.
Also inherits all
ProcessRunnerTest
options.-
CONFIG
¶ alias of
JUnitConfig
-
list_command_filter
(testsuite_pattern, testcase_pattern)[source]¶ Return the base list command with additional filtering to list a specific set of testcases.
- name (
-
class
testplan.testing.junit.
JUnitConfig
(**options)[source]¶ Bases:
testplan.testing.base.ProcessRunnerTestConfig
Configuration object for the testplan.testing.junit.JUnit test runner.