testplan.testing package¶
Subpackages¶
- testplan.testing.multitest package
  - Subpackages
    - testplan.testing.multitest.driver package
    - testplan.testing.multitest.entries package
  - Submodules
  - Module contents
- testplan.testing.cpp package
- testplan.testing.bdd package
Submodules¶
testplan.testing.base module¶
Base classes for all Tests
class testplan.testing.base.ProcessRunnerTest(**options)[source]¶
Bases: testplan.testing.base.Test
A test runner that runs the tests in a separate subprocess. This is useful for running 3rd party testing frameworks (e.g. JUnit, GTest).
The test report will be populated by parsing the generated report output file (report.xml by default).
Parameters:
- name – Test instance name, often used as uid of test entity.
- binary – Path to the application binary or script.
- description – Description of test instance.
- proc_env – Environment overrides for subprocess.Popen; context values (when referring to other drivers) and jinja2 templates (when referring to self) will be resolved.
- proc_cwd – Directory override for subprocess.Popen.
- timeout – Optional timeout for the subprocess. If the process runs longer than this limit, it will be killed and the test will be marked as ERROR. String representations can be used as well as durations in seconds (e.g. 10, 2.3, '1m 30s', '1h 15m').
- ignore_exit_codes – When the test process exits with a nonzero status code, the test will be marked as ERROR. This can be disabled by providing a list of exit codes to ignore.
- pre_args – List of arguments to be prepended before the arguments of the test runnable.
- post_args – List of arguments to be appended after the arguments of the test runnable.
Also inherits all Test options.
CONFIG¶
alias of ProcessRunnerTestConfig
apply_xfail_tests() → None[source]¶
Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=...).
get_proc_env() → Dict[KT, VT][source]¶
Fabricate the env vars for the subprocess. Precedence: user-specified > hardcoded > system env.
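The stated precedence can be illustrated with a tiny merge helper (build_proc_env is a hypothetical name for illustration, not part of the testplan API):

```python
import os

def build_proc_env(user_env, hardcoded_env):
    """Merge environment dicts with the precedence described above:
    user-specified > hardcoded > system env."""
    env = dict(os.environ)      # lowest precedence: inherited system env
    env.update(hardcoded_env)   # overridden by values hardcoded in the runner
    env.update(user_env)        # user-specified values win over everything
    return env
```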
get_process_check_report(retcode: int, stdout: str, stderr: str) → testplan.report.testing.base.TestGroupReport[source]¶
When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions. Logs of stdout & stderr can be saved as attachments.
get_test_context(list_cmd=None)[source]¶
Run the shell command generated by list_command in a subprocess, then parse and return the stdout generated via parse_test_context.
Parameters: list_cmd (str) – Command to list all test suites and testcases.
Returns: Result returned by parse_test_context.
Return type: list of list
list_command() → Optional[List[str]][source]¶
List custom arguments before and after the executable if they are defined.
Returns: List of commands to run before and after the test process, as well as the test executable itself.
list_command_filter(testsuite_pattern: str, testcase_pattern: str)[source]¶
Return the base list command with additional filtering to list a specific set of testcases. To be implemented by concrete subclasses.
parse_test_context(test_list_output: bytes) → List[List[T]][source]¶
Override this to generate a nested list of test suite and test case context. Only required if list_command is overridden to return a command.
The result will later on be used by test listers to generate the test context output for this test instance.
Sample output:
[
    ['SuiteAlpha', ['testcase_one', 'testcase_two']],
    ['SuiteBeta', ['testcase_one', 'testcase_two']],
]
Parameters: test_list_output – stdout from the list command
Returns: Parsed test context from command line output of the 3rd party testing library.
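A minimal override might look like the sketch below, assuming a hypothetical 3rd party tool that lists tests one per line as Suite.testcase (the real parsing logic depends entirely on the tool's output format):

```python
from typing import List

def parse_test_context(test_list_output: bytes) -> List[list]:
    """Hypothetical parser for a tool that lists tests as 'Suite.testcase',
    one per line. Groups testcases under their suite, preserving order."""
    context = {}
    for line in test_list_output.decode().splitlines():
        line = line.strip()
        if not line or "." not in line:
            continue  # skip blank lines and anything that isn't Suite.testcase
        suite, testcase = line.split(".", 1)
        context.setdefault(suite, []).append(testcase)
    # convert to the nested-list shape shown in the sample output above
    return [[suite, cases] for suite, cases in context.items()]
```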
process_test_data(test_data)[source]¶
Process raw test data that was collected and return a list of entries (e.g. TestGroupReport, TestCaseReport) that will be appended to the current test instance's report as children.
Parameters: test_data (xml.etree.Element) – Root node of parsed raw test data
Returns: List of sub reports
Return type: list of TestGroupReport / TestCaseReport
read_test_data()[source]¶
Parse output generated by the 3rd party testing tool; the parsed content will then be handled by process_test_data.
You should override this function with custom logic to parse the contents of the generated file.
report_path¶
resolved_bin¶
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]¶
Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.
For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases, though it means that testcase reports will not be generated until all testcases have finished running.
Parameters: - testsuite_pattern – pattern to match for testsuite names
- testcase_pattern – pattern to match for testcase names
- shallow_report – shallow report entry
Returns: generator yielding testcase reports and UIDs for merge step
run_tests() → None[source]¶
Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.
Raises: ValueError – upon invalid test command
stderr¶
stdout¶
test_command() → List[str][source]¶
Add custom arguments before and after the executable if they are defined.
Returns: List of commands to run before and after the test process, as well as the test executable itself.
test_command_filter(testsuite_pattern: str, testcase_pattern: str)[source]¶
Return the base test command with additional filtering to run a specific set of testcases. To be implemented by concrete subclasses.
timeout_callback()[source]¶
Callback function that will be called by the daemon thread if a timeout occurs (e.g. the process runs longer than the specified timeout value).
Raises: RuntimeError
timeout_log¶
class testplan.testing.base.ProcessRunnerTestConfig(**options)[source]¶
Bases: testplan.testing.base.TestConfig
Configuration object for ProcessRunnerTest.
class testplan.testing.base.ResourceHooks[source]¶
Bases: enum.Enum
An enumeration.
after_start = 'After Start'¶
after_stop = 'After Stop'¶
before_start = 'Before Start'¶
before_stop = 'Before Stop'¶
class testplan.testing.base.Test(name: str, description: str = None, environment: Union[list, Callable] = None, dependencies: Union[dict, Callable] = None, initial_context: Union[dict, Callable] = None, before_start: callable = None, after_start: callable = None, before_stop: callable = None, after_stop: callable = None, error_handler: callable = None, test_filter: testplan.testing.filtering.BaseFilter = None, test_sorter: testplan.testing.ordering.BaseSorter = None, stdout_style: testplan.report.testing.styles.Style = None, tags: Union[str, Iterable[str]] = None, result: Type[testplan.testing.result.Result] = <class 'testplan.testing.result.Result'>, **options)[source]¶
Bases: testplan.common.entity.base.Runnable
Base test instance class. Any runnable that runs a test can inherit from this class and override certain methods to customize functionality.
Parameters:
- name – Test instance name, often used as uid of test entity.
- description – Description of test instance.
- environment – List of drivers to be started and made available on tests execution. Can also take a callable that returns the list of drivers.
- dependencies – Driver start-up dependencies as a directed graph, e.g. {server1: (client1, client2)} indicates server1 shall start before client1 and client2. Can also take a callable that returns a dict.
- initial_context – key: value pairs that will be made available as context for drivers in environment. Can also take a callable that returns a dict.
- test_filter – Class with test filtering logic.
- test_sorter – Class with tests sorting logic.
- before_start – Callable to execute before starting the environment.
- after_start – Callable to execute after starting the environment.
- before_stop – Callable to execute before stopping the environment.
- after_stop – Callable to execute after stopping the environment.
- error_handler – Callable to execute when a step hits an exception.
- stdout_style – Console output style.
- tags – User defined tag value.
- result – Result class definition for result object made available from within the testcases.
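The dependencies graph describes driver start-up ordering only; how such a graph resolves into an order can be sketched with a small topological sort (start_order is a hypothetical helper for illustration, not the testplan scheduler):

```python
def start_order(dependencies):
    """Resolve driver start-up order from a directed graph of the form
    {before: (after, ...)} -- e.g. {'server1': ('client1', 'client2')}
    means server1 must start before both clients. Kahn-style sort."""
    nodes = set(dependencies)
    for afters in dependencies.values():
        nodes.update(afters)
    # count how many drivers each node must wait for
    indegree = {n: 0 for n in nodes}
    for afters in dependencies.values():
        for after in afters:
            indegree[after] += 1
    order = []
    ready = sorted(n for n in nodes if indegree[n] == 0)
    while ready:
        node = ready.pop(0)
        order.append(node)
        for after in dependencies.get(node, ()):
            indegree[after] -= 1
            if indegree[after] == 0:
                ready.append(after)
        ready.sort()  # deterministic tie-breaking for the sketch
    return order
```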
Also inherits all Runnable options.
CONFIG¶
alias of TestConfig
ENVIRONMENT¶
alias of testplan.testing.environment.base.TestEnvironment
RESULT¶
alias of TestResult
description¶
dry_run() → None[source]¶
Return an empty report skeleton for this test including all testsuites, testcases etc. hierarchy. Does not run any tests.
filter_levels = [<FilterLevel.TEST: 'test'>]¶
Return the tag index that will be used for filtering. By default this is equal to the native tags for this object.
However, subclasses may build larger tag indices, for example by collecting tags from their children.
log_test_results(top_down: bool = True)[source]¶
Log test results of a test instance, e.g. a ProcessRunnerTest or PyTest.
Parameters: top_down – Flag for logging test results using a top-down or a bottom-up approach.
name¶ Instance name.
propagate_tag_indices() → None[source]¶
Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.
report¶ Shortcut for the test report.
run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*') → None[source]¶
For a Test to be run interactively, it must implement this method.
It is expected to run tests iteratively and yield a tuple containing a testcase report and the list of parent UIDs required to merge the testcase report into the main report tree.
If it is not possible or very inefficient to run individual testcases in an iterative manner, this method may instead run all the testcases in a batch and then return an iterator for the testcase reports and parent UIDs.
Parameters:
- testsuite_pattern – Filter pattern for testsuite level.
- testcase_pattern – Filter pattern for testcase level.
Yield: tuples containing testcase reports and a list of the UIDs required to merge these into the main report tree, starting with the UID of this test.
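The yield contract can be sketched as follows (a simplified stand-in using plain dicts as reports; all names are illustrative, not the testplan API):

```python
def run_testcases_iter(test_uid, suites):
    """Sketch of the generator contract: for each testcase, yield
    (testcase_report, parent_uids), where parent_uids starts with the
    UID of the test itself and then names each enclosing level."""
    for suite_name, testcases in suites.items():
        for case_name in testcases:
            report = {"name": case_name, "passed": True}  # stand-in report
            yield report, [test_uid, suite_name]
```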
set_discover_path(path: str) → None[source]¶
If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.
Parameters: path – the absolute path where the task has been discovered
should_log_test_result(depth: int, test_obj, style) → Tuple[bool, int][source]¶
Whether to log the test result and, if yes, with what indent.
Returns: whether to log test results (Suite report, Testcase report, or result of assertions) and the indent that should be kept at the start of lines
Raises:
- ValueError – if met with an unexpected test group category
- TypeError – if met with an unsupported test object
start_test_resources() → None[source]¶
Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.
stdout_style¶ Stdout style input.
stop_test_resources() → None[source]¶
Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by subclasses.
test_context¶
class testplan.testing.base.TestConfig(**options)[source]¶
Bases: testplan.common.entity.base.RunnableConfig
Configuration object for Test.
class testplan.testing.base.TestResult[source]¶
Bases: testplan.common.entity.base.RunnableResult
Result object for the Test runnable test execution framework base class and all subclasses. Contains a test report object.
testplan.testing.filtering module¶
Filtering logic for Multitest, Suites and testcase methods (of Suites)
class testplan.testing.filtering.And(*filters)[source]¶
Bases: testplan.testing.filtering.MetaFilter
Meta filter that returns True if ALL of the child filters return True.
operator_str = '&'¶
class testplan.testing.filtering.BaseFilter[source]¶
Bases: object
Base class for filters; supports bitwise operators for composing multiple filters,
e.g. (FilterA(...) & FilterB(...)) | ~FilterC(...)
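The composition mechanics can be illustrated with a simplified stand-in (these are not the actual testplan classes, just a sketch of the same operator-overloading pattern):

```python
class BaseFilter:
    """Simplified stand-in showing how bitwise operators compose filters."""
    def filter(self, obj):
        return True
    def __and__(self, other):
        return _Combined(lambda o: self.filter(o) and other.filter(o))
    def __or__(self, other):
        return _Combined(lambda o: self.filter(o) or other.filter(o))
    def __invert__(self):
        return _Combined(lambda o: not self.filter(o))

class _Combined(BaseFilter):
    """Wraps a predicate produced by a bitwise composition."""
    def __init__(self, func):
        self._func = func
    def filter(self, obj):
        return self._func(obj)

class NameFilter(BaseFilter):
    """Toy filter: matches objects equal to a given name."""
    def __init__(self, name):
        self.name = name
    def filter(self, obj):
        return obj == self.name
```

For example `(NameFilter('a') | NameFilter('b')) & ~NameFilter('b')` accepts 'a' but rejects 'b'.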
class testplan.testing.filtering.BaseTagFilter(tags)[source]¶
Bases: testplan.testing.filtering.Filter
Base filter class for tag based filtering.
category = 3¶
class testplan.testing.filtering.Filter[source]¶
Bases: testplan.testing.filtering.BaseFilter
Noop filter class; users can inherit from this to implement their own filters.
Returns True by default for all filtering operations, and implicitly checks the test instance's filter_levels declaration to apply the filtering logic.
category = 1¶
class testplan.testing.filtering.FilterCategory[source]¶
Bases: enum.IntEnum
An enumeration.
COMMON = 1¶
PATTERN = 2¶
TAG = 3¶
class testplan.testing.filtering.FilterLevel[source]¶
Bases: enum.Enum
This enum is used by test classes (e.g. ~testplan.testing.base.Test) to declare the depth of filtering logic to apply while the filter method is run.
By default only test (i.e. top) level filtering is used.
TEST = 'test'¶
TESTCASE = 'testcase'¶
TESTSUITE = 'testsuite'¶
class testplan.testing.filtering.MetaFilter(*filters)[source]¶
Bases: testplan.testing.filtering.BaseFilter
Higher level filter that allows composition of other filters.
operator_str = None¶
class testplan.testing.filtering.Not(filter_obj)[source]¶
Bases: testplan.testing.filtering.BaseFilter
Meta filter that returns the inverse of the original filter result.
class testplan.testing.filtering.Or(*filters)[source]¶
Bases: testplan.testing.filtering.MetaFilter
Meta filter that returns True if ANY of the child filters return True.
operator_str = '|'¶
class testplan.testing.filtering.Pattern(pattern, match_uid=False)[source]¶
Bases: testplan.testing.filtering.Filter
Base class for name based, glob style filtering.
https://docs.python.org/3.4/library/fnmatch.html
Examples:
<Multitest name>:<suite name>:<testcase name>
<Multitest name>::<testcase name>
*:<suite name>:
ALL_MATCH = '*'¶
MAX_LEVEL = 3¶
classmethod any(*patterns)[source]¶
Shortcut for filtering against multiple patterns,
e.g. Pattern.any(<pattern 1>, <pattern 2>, ...)
category = 2¶
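A rough sketch of how such colon-delimited glob patterns can be evaluated with fnmatch (match_pattern is a hypothetical helper for illustration, not the actual Pattern logic):

```python
import fnmatch

def match_pattern(pattern, test, suite=None, case=None):
    """Evaluate a colon-delimited glob pattern level by level.
    An empty level (as in 'MTest::case_1') matches any value, and a
    pattern with fewer levels than the target matches everything below."""
    levels = pattern.split(":")
    values = [v for v in (test, suite, case) if v is not None]
    for pat, val in zip(levels, values):
        if pat and not fnmatch.fnmatch(val, pat):
            return False
    return True
```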
class testplan.testing.filtering.PatternAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶
Bases: argparse.Action
Parser action for generating Pattern filters. Returns a list of Pattern filter objects.
In:
--patterns foo bar --patterns baz
Out:
[Pattern('foo'), Pattern('bar'), Pattern('baz')]
class testplan.testing.filtering.Tags(tags)[source]¶
Bases: testplan.testing.filtering.BaseTagFilter
Tag filter that returns True if ANY of the given tags match.
class testplan.testing.filtering.TagsAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶
Bases: argparse.Action
Parser action for generating tags (any) filters.
In:
--tags foo bar hello=world --tags baz hello=mars
Out:
[
    Tags({'simple': {'foo', 'bar'}, 'hello': {'world'}}),
    Tags({'simple': {'baz'}, 'hello': {'mars'}}),
]
class testplan.testing.filtering.TagsAll(tags)[source]¶
Bases: testplan.testing.filtering.BaseTagFilter
Tag filter that returns True if ALL of the given tags match.
class testplan.testing.filtering.TagsAllAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶
Bases: testplan.testing.filtering.TagsAction
Parser action for generating tags (all) filters.
In:
--tags-all foo bar hello=world --tags-all baz hello=mars
Out:
[
    TagsAll({'simple': {'foo', 'bar'}, 'hello': {'world'}}),
    TagsAll({'simple': {'baz'}, 'hello': {'mars'}}),
]
testplan.testing.filtering.flatten_filters(metafilter_kls: Type[MetaFilter], filters: List[Filter]) → List[testplan.testing.filtering.Filter][source]¶
This is used for flattening nested filters of the same type.
So when we have something like:
Or(filter-1, filter-2) | Or(filter-3, filter-4)
We end up with:
Or(filter-1, filter-2, filter-3, filter-4)
Instead of:
Or(Or(filter-1, filter-2), Or(filter-3, filter-4))
testplan.testing.filtering.parse_filter_args(parsed_args, arg_names)[source]¶
Utility function that's used for grouping filters of the same category together. Will be used while parsing command line arguments for test filters.
Filters that belong to the same category will be grouped under Or, whereas filters of different categories will be grouped under And.
In:
--patterns my_pattern --tags foo --tags-all bar baz
Out:
And(
    Pattern('my_pattern'),
    Or(
        Tags({'simple': {'foo'}}),
        TagsAll({'simple': {'bar', 'baz'}}),
    )
)
testplan.testing.listing module¶
This module contains logic for listing the test context of a plan.
class testplan.testing.listing.BaseLister[source]¶
Bases: testplan.testing.listing.Listertype
Base of all listers. Implement get_output(), give the lister a name in NAME and a description in DESCRIPTION (or alternatively override name() and/or description()), and it is ready to be added to listing_registry.
class testplan.testing.listing.CountLister[source]¶
Bases: testplan.testing.listing.BaseLister
Displays the number of suites and total testcases per test instance.
DESCRIPTION = 'Lists top level instances and total number of suites & testcases per instance.'¶
NAME = 'COUNT'¶
class testplan.testing.listing.ExpandedNameLister[source]¶
Bases: testplan.testing.listing.BaseLister
Lists names of the items within the test context.
Sample output:
- MultitestAlpha
  - SuiteOne
    - testcase_foo
    - testcase_bar
  - SuiteTwo
    - testcase_baz
- MultitestBeta
  - ...
DESCRIPTION = 'List tests in readable format.'¶
NAME = 'NAME_FULL'¶
class
testplan.testing.listing.
ExpandedPatternLister
[source]¶ Bases:
testplan.testing.listing.ExpandedNameLister
Lists the items in test context in a copy-pasta friendly format compatible with –patterns and –tags arguments.
Example:
- MultitestAlpha
- MultitestAlpha:SuiteOne –tags color=red
- MultitestAlpha:SuiteOne:testcase_foo MultitestAlpha:SuiteOne:testcase_bar –tags color=blue
- MultitestAlpha:SuiteTwo
- MultitestAlpha:SuiteTwo:testcase_baz
- MultitestBeta
- …
-
DESCRIPTION
= 'List tests in `--patterns` / `--tags` compatible format.'¶
-
NAME
= 'PATTERN_FULL'¶
class testplan.testing.listing.Listertype[source]¶
Bases: object
DESCRIPTION = None¶
NAME = None¶
metadata_based = False¶
class testplan.testing.listing.ListingArgMixin[source]¶
Bases: testplan.common.utils.parser.ArgMixin
classmethod get_descriptions()[source]¶
Override this method to return a dictionary with Enums as keys and description strings as values.
This will later on be rendered via the --help command.
class
testplan.testing.listing.
ListingRegistry
[source]¶ Bases:
object
A registry to store listers, add listers to the
listing_registry
instance which is used to create the commandline parser.
class testplan.testing.listing.MetadataBasedLister[source]¶
Bases: testplan.testing.listing.Listertype
Base of all metadata based listers. Implement get_output(), give the lister a name in NAME and a description in DESCRIPTION (or alternatively override name() and/or description()), and it is ready to be added to listing_registry.
metadata_based = True¶
class testplan.testing.listing.NameLister[source]¶
Bases: testplan.testing.listing.TrimMixin, testplan.testing.listing.ExpandedNameLister
Trimmed version of ExpandedNameLister.
DESCRIPTION = 'List tests in readable format.\n\tMax 25 testcases per suite will be displayed'¶
NAME = 'NAME'¶
class testplan.testing.listing.PatternLister[source]¶
Bases: testplan.testing.listing.TrimMixin, testplan.testing.listing.ExpandedPatternLister
Like the test lister, but trims the list of testcases if it exceeds <MAX_TESTCASES>.
This is useful if the user has generated hundreds of testcases via parametrization.
DESCRIPTION = 'List tests in `--patterns` / `--tags` compatible format.\n\tMax 25 testcases per suite will be displayed'¶
NAME = 'PATTERN'¶
class testplan.testing.listing.SimpleJsonLister[source]¶
Bases: testplan.testing.listing.MetadataBasedLister
DESCRIPTION = 'Dump test information in json. Can take json:/path/to/output.json as well, then the result is dumped to the file'¶
NAME = 'JSON'¶
class testplan.testing.listing.TrimMixin[source]¶
Bases: object
DESCRIPTION = '\tMax 25 testcases per suite will be displayed'¶
testplan.testing.listing.listing_registry = <testplan.testing.listing.ListingRegistry object>¶
Registry instance that will be used to create the command line parser; this can be extended with new listers.
testplan.testing.ordering module¶
Classes for sorting test context before a test run.
Warning: sort_instances functionality is not supported yet, but the API is available for future compatibility.
class testplan.testing.ordering.AlphanumericSorter(sort_type=<SortType.ALL: 'all'>)[source]¶
Bases: testplan.testing.ordering.TypedSorter
Sorter that uses basic alphanumeric ordering.
class testplan.testing.ordering.NoopSorter[source]¶
Bases: testplan.testing.ordering.BaseSorter
Sorter that returns the original ordering.
class testplan.testing.ordering.ShuffleSorter(shuffle_type=<SortType.ALL: 'all'>, seed=None)[source]¶
Bases: testplan.testing.ordering.TypedSorter
Sorter that shuffles the ordering. It is deterministic: given the same seed and the same list, it will return the same ordering.
randomizer¶
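The seed-based determinism can be sketched in a few lines (deterministic_shuffle is a hypothetical helper, not the testplan implementation):

```python
import random

def deterministic_shuffle(items, seed):
    """Shuffle a copy of items using a dedicated, seeded RNG so the same
    seed and input always produce the same ordering."""
    shuffled = list(items)              # never mutate the caller's sequence
    random.Random(seed).shuffle(shuffled)
    return shuffled
```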
class testplan.testing.ordering.SortType[source]¶
Bases: enum.Enum
Helper enum used by sorter classes.
ALL = 'all'¶
INSTANCES = 'instances'¶
SUITES = 'suites'¶
TEST_CASES = 'testcases'¶
class testplan.testing.ordering.TypedSorter(sort_type=<SortType.ALL: 'all'>)[source]¶
Bases: testplan.testing.ordering.BaseSorter
Base sorter that allows configuration of sort levels via the sort_type argument.
testplan.testing.result module¶
Defines the Result object and its sub-namespaces.
The Result object is the interface used by testcases to make assertions and log data. Entries contained in the result are copied into the Report object after testcases have finished running.
class testplan.testing.result.AssertionNamespace(result)[source]¶
Bases: object
Base class for assertion namespaces. Users can inherit from this class to implement custom namespaces.
class testplan.testing.result.DictNamespace(result)[source]¶
Bases: testplan.testing.result.AssertionNamespace
Contains logic for dictionary related assertions.
check(dictionary, description=None, category=None, has_keys=None, absent_keys=None)[source]¶
Checks for existence / absence of dictionary keys; uses top level keys in case of nested dictionaries.
result.dict.check(
    dictionary={'foo': 1, 'bar': 2, 'baz': 3},
    has_keys=['foo', 'alpha'],
    absent_keys=['bar', 'beta'],
)
Parameters:
- dictionary (dict) – Dict object to check.
- has_keys (list or object (items must be hashable)) – List of keys to check for existence.
- absent_keys (list or object (items must be hashable)) – List of keys to check for absence.
- description (str) – Text description for the assertion.
- category (str) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
log(dictionary, description=None)[source]¶
Logs a dictionary to the report.
result.dict.log(
    dictionary={
        'foo': [1, 2, 3],
        'bar': {'color': 'blue'},
        'baz': 'hello world',
    }
)
Parameters:
- dictionary (dict) – Dict object to log.
- description (str) – Text description for the assertion.
Returns: Always returns True; this is not an assertion so it cannot fail.
Return type: bool
match(actual: Dict[KT, VT], expected: Dict[KT, VT], include_only_expected: bool = False, description: str = None, category: str = None, include_keys: List[Hashable] = None, exclude_keys: List[Hashable] = None, report_mode=<ReportOptions.ALL: 1>, actual_description: str = None, expected_description: str = None, value_cmp_func: Callable[[Any, Any], bool] = <built-in function eq>) → testplan.testing.multitest.entries.assertions.DictMatch[source]¶
Matches two dictionaries, supports nested data. Custom comparators can be used as values on the expected dict.
import re
from testplan.common.utils import comparison

result.dict.match(
    actual={'foo': 1, 'bar': 2},
    expected={'foo': 1, 'bar': 5, 'extra-key': 10},
)

result.dict.match(
    actual={
        'foo': [1, 2, 3],
        'bar': {'color': 'blue'},
        'baz': 'hello world',
    },
    expected={
        'foo': [1, 2, lambda v: isinstance(v, int)],
        'bar': {'color': comparison.In(['blue', 'red', 'yellow'])},
        'baz': re.compile(r'\w+ world'),
    },
)
Parameters:
- actual – Original dictionary.
- expected – Comparison dictionary, can contain custom comparators (e.g. regex, lambda functions)
- include_only_expected – Use only the keys present in the expected dictionary.
- include_keys – Keys to exclusively consider in the comparison.
- exclude_keys – Keys to ignore in the comparison.
- report_mode – Specify which comparisons should be kept and reported. The default option is to report all comparisons, but this can be restricted if desired. See the ReportOptions enum for more detail.
- actual_description – Column header description for original dict.
- expected_description – Column header description for expected dict.
- description – Text description for the assertion.
- category – Custom category that will be used for summarization.
- value_cmp_func – Function to use to compare values in expected and actual dicts. Defaults to using operator.eq().
Returns: Assertion pass status
match_all(values, comparisons, description=None, category=None, key_weightings=None)[source]¶
Match multiple unordered dictionaries.
Initially all value/expected comparison combinations are evaluated and converted to an error weight.
If certain keys are more important than others, it is possible to give them additional weighting during the comparison by specifying a "key_weightings" dict. The default weight of a mismatch is 100.
The values/comparisons permutation that results in the least error is appended to the report.
result.dict.match_all(
    values=[
        {'foo': 12, ...},
        {'foo': 13, ...},
        ...
    ],
    comparisons=[
        Expected({'foo': 12, ...}),
        Expected({'foo': 15, ...}),
        ...
    ],
    # twice the default weight of 100
    key_weightings={'foo': 200},
)
Parameters:
- values (list of dict) – Original values.
- comparisons (list of testplan.common.utils.comparison.Expected) – Comparison objects.
- key_weightings (dict) – Per-key overrides that specify a different weight for different keys.
- description (str) – Text description for the assertion.
- category (str) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
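The least-error pairing idea can be sketched as follows (best_pairing_error is a hypothetical illustration, not the actual implementation; brute-forcing permutations is only reasonable for small inputs):

```python
from itertools import permutations

def best_pairing_error(values, comparisons, key_weightings=None, default_weight=100):
    """Try every pairing of values against comparison dicts and return
    the smallest total mismatch weight, mirroring the weighting scheme
    described above (default mismatch weight 100, overridable per key)."""
    weights = key_weightings or {}

    def error(value, expected):
        # sum the weight of every expected key whose value does not match
        return sum(
            weights.get(key, default_weight)
            for key, exp in expected.items()
            if value.get(key) != exp
        )

    return min(
        sum(error(v, c) for v, c in zip(values, perm))
        for perm in permutations(comparisons)
    )
```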
class testplan.testing.result.ExceptionCapture(result, assertion_kls, exceptions, pattern=None, func=None, description=None, category=None)[source]¶
Bases: object
Exception capture scope, will be used by exception related assertions. An instance of this class will be used as a context manager by exception related assertion methods.
class testplan.testing.result.FixNamespace(result)[source]¶
Bases: testplan.testing.result.AssertionNamespace
Contains assertion logic that operates on fix messages.
check(msg, description=None, category=None, has_tags=None, absent_tags=None)[source]¶
Checks existence / absence of tags in a Fix message. Checks top level tags only.
result.fix.check(
    msg={
        36: 6,
        22: 5,
        55: 2,
        38: 5,
        555: [ .. more nested data here ... ],
    },
    has_tags=[26, 22, 11],
    absent_tags=[444, 555],
)
Parameters:
- msg (dict) – Fix message.
- has_tags (list of object (items must be hashable)) – List of tags to check for existence.
- absent_tags (list of object (items must be hashable)) – List of tags to check for absence.
- description (str) – Text description for the assertion.
- category (str) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
log(msg, description=None)[source]¶
Logs a fix message to the report.
result.fix.log(
    msg={
        36: 6,
        22: 5,
        55: 2,
        38: 5,
        555: [ .. more nested data here ... ],
    }
)
Parameters:
- msg (dict or pyfixmsg.fixmessage.FixMessage) – Fix message.
- description (str) – Text description for the assertion.
Returns: Always returns True; this is not an assertion so it cannot fail.
Return type: bool
match(actual: Dict[KT, VT], expected: Dict[KT, VT], include_only_expected: bool = False, description: str = None, category: str = None, include_tags: List[Hashable] = None, exclude_tags: List[Hashable] = None, report_mode=<ReportOptions.ALL: 1>, actual_description: str = None, expected_description: str = None) → testplan.testing.multitest.entries.assertions.FixMatch[source]¶
Matches two FIX messages, supports repeating groups (nested data). Custom comparators can be used as values on the expected msg.
result.fix.match(
    actual={
        36: 6,
        22: 5,
        55: 2,
        38: 5,
        555: [ .. more nested data here ... ],
    },
    expected={
        36: 6,
        22: 5,
        55: lambda val: val in [2, 3, 4],
        38: 5,
        555: [ .. more nested data here ... ],
    },
)
Parameters:
- actual – Original FIX message.
- expected – Expected FIX message, can include compiled regex patterns or callables for advanced comparison.
- include_only_expected – Use only the tags present in the expected message.
- include_tags – Tags to exclusively consider in the comparison.
- exclude_tags – Tags to ignore in the comparison.
- report_mode – Specify which comparisons should be kept and reported. The default option is to report all comparisons, but this can be restricted if desired. See the ReportOptions enum for more detail.
- actual_description – Column header description for original msg.
- expected_description – Column header description for expected msg.
- description – Text description for the assertion.
- category – Custom category that will be used for summarization.
Returns: Assertion pass status
match_all(values, comparisons, description=None, category=None, tag_weightings=None)[source]¶
Match multiple unordered FIX messages.
Initially all value/expected comparison combinations are evaluated and converted to an error weight.
If certain FIX tags are more important than others (e.g. ID FIX tags), it is possible to give them additional weighting during the comparison by specifying a "tag_weightings" dict.
The default weight of a mismatch is 100.
The values/comparisons permutation that results in the least error is appended to the report.
result.fix.match_all(
    values=[
        {36: 6, 22: 5, 55: 2, ...},
        {36: 7, ...},
        ...
    ],
    comparisons=[
        Expected({36: 6, 22: 5, 55: 2, ...}),
        Expected({36: 7, ...}),
        ...
    ],
    # twice the default weight of 100
    tag_weightings={36: 200},
)
Parameters:
- values (list of dict) – Original values.
- comparisons (list of testplan.common.utils.comparison.Expected) – Comparison objects.
- tag_weightings (dict) – Per-tag overrides that specify a different weight for different tags.
- description (str) – Text description for the assertion.
- category (str) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
class testplan.testing.result.LogfileExpect(result, log_matcher, regex, timeout, description, category)[source]¶
Bases: testplan.common.utils.match.ScopedLogfileMatch
ScopedLogfileMatch with assertion operation.
-
class
testplan.testing.result.
LogfileNamespace
(result)[source]¶ Bases:
testplan.testing.result.AssertionNamespace
Contains assertion methods that operate on log files equipped with
LogMatcher
.-
expect
(log_matcher: testplan.common.utils.match.LogMatcher, regex: Union[str, bytes, Pattern[AnyStr]], timeout: float = 5.0, description: Optional[str] = None, category: Optional[str] = None)[source]¶ Call as a context manager for pattern matching in a logfile, where the expected lines are (indirectly) produced by the context manager body and the matching results are logged to the report. On enter it sets the position to EOF, as
result.logfile.seek_eof
; on exit it performs the matching operation, as result.logfile.match
.with result.logfile.expect( log_matcher, r".*passed.*", timeout=2.0, description="my logfile match assertion", ): ...
Parameters: - log_matcher – LogMatcher on target logfile.
- regex – Regular expression as expected pattern in target logfile.
- timeout – Match timeout value in seconds.
- description – Text description for the assertion.
- category – Custom category that will be used for summarization.
-
match
(log_matcher: testplan.common.utils.match.LogMatcher, regex: Union[str, bytes, Pattern[AnyStr]], timeout: float = 5.0, description: Optional[str] = None, category: Optional[str] = None)[source]¶ Match patterns in logfile using LogMatcher, with matching results logged to the report.
result.logfile.match( log_matcher, r".*passed.*", timeout=2.0, description="my logfile match assertion", )
Parameters: - log_matcher – LogMatcher on target logfile.
- regex – Regular expression as expected pattern in target logfile.
- timeout – Match timeout value in seconds.
- description – Text description for the assertion.
- category – Custom category that will be used for summarization.
-
seek_eof
(log_matcher: testplan.common.utils.match.LogMatcher, description: Optional[str] = None)[source]¶ Set the position of LogMatcher to end of logfile, with operation logged to the report.
result.logfile.seek_eof(log_matcher)
Parameters: - log_matcher – LogMatcher on target logfile.
- description – Custom text description for the entry.
-
-
class
testplan.testing.result.
RegexNamespace
(result)[source]¶ Bases:
testplan.testing.result.AssertionNamespace
Contains logic for regular expression assertions.
-
findall
(regexp, value, description=None, category=None, flags=0, condition=None)[source]¶ Checks if one or more matches of the
regexp
exist in thevalue
viare.finditer
. Can apply further assertions viacondition
func.result.regex.findall( regexp='foo', value='foo foo foo bar bar foo bar', condition=lambda num_matches: 2 < num_matches < 5, )
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - flags (
int
) – Regex flags that will be passed to there.finditer
function. - condition (
callable
) – A callable that accepts a single argument, which is the number of matches (int). - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- regexp (
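The findall semantics above can be reproduced with the standard re module. This is a plain-Python sketch of the described behaviour, not testplan's implementation; `regex_findall` is a hypothetical helper:

```python
import re

def regex_findall(regexp, value, condition=None, flags=0):
    # Collect matches via re.finditer; the optional condition callable
    # receives the number of matches and decides the pass status.
    pattern = re.compile(regexp, flags) if isinstance(regexp, str) else regexp
    num_matches = len(list(pattern.finditer(value)))
    return condition(num_matches) if condition is not None else num_matches > 0

# Four 'foo' matches; the condition 2 < n < 5 holds.
passed = regex_findall(
    'foo',
    'foo foo foo bar bar foo bar',
    condition=lambda num_matches: 2 < num_matches < 5,
)
```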
-
match
(regexp, value, description=None, category=None, flags=0)[source]¶ Checks if the given
regexp
matches thevalue
viare.match
operation.result.regex.match(regexp='foo', value='foobar')
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - flags (
int
) – Regex flags that will be passed to there.match
function. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- regexp (
-
matchline
(regexp, value, description=None, category=None, flags=0)[source]¶ Checks if the given
regexp
returns a match (re.match
) for any of the lines in thevalue
.result.regex.matchline( regexp=re.compile(r'\w+ line$'), value=os.linesep.join([ 'first line', 'second aaa', 'third line' ]), )
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - flags (
int
) – Regex flags that will be passed to there.match
function. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- regexp (
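The line-by-line matching described above amounts to applying re.match to each line of the value. A minimal sketch of the semantics (the helper name is hypothetical):

```python
import re

def regex_matchline(regexp, value, flags=0):
    # Apply re.match to each line; pass if any single line matches.
    pattern = re.compile(regexp, flags) if isinstance(regexp, str) else regexp
    return any(pattern.match(line) for line in value.splitlines())

text = '\n'.join(['first line', 'second aaa', 'third line'])
passed = regex_matchline(r'\w+ line$', text)  # 'first line' matches
```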
-
multiline_match
(regexp, value, description=None, category=None)[source]¶ Checks if the given
regexp
matches thevalue
viare.match
operation, usesre.MULTILINE
andre.DOTALL
flags implicitly.result.regex.multiline_match( regexp='first line.*second', value=os.linesep.join([ 'first line', 'second line', 'third line' ]), )
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - description (
str
) – text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status.
Return type: bool
- regexp (
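The implicit re.MULTILINE and re.DOTALL flags are what let the pattern span line breaks, since DOTALL makes '.' match newlines too. A plain-re sketch of the same semantics:

```python
import re

def regex_multiline_match(regexp, value):
    # re.MULTILINE and re.DOTALL are applied implicitly, so '.'
    # also matches newline characters.
    return re.match(regexp, value, re.MULTILINE | re.DOTALL) is not None

text = '\n'.join(['first line', 'second line', 'third line'])
# '.*' crosses the line break between 'first line' and 'second line'.
passed = regex_multiline_match('first line.*second', text)
```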
-
multiline_not_match
(regexp, value, description=None, category=None)[source]¶ Checks if the given
regexp
does not match thevalue
viare.match
operation, usesre.MULTILINE
andre.DOTALL
flags implicitly.result.regex.multiline_not_match( regexp='foobar', value=os.linesep.join([ 'first line', 'second line', 'third line' ]), )
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- regexp (
-
not_match
(regexp, value, description=None, category=None, flags=0)[source]¶ Checks if the given
regexp
does not match thevalue
viare.match
operation.result.regex.not_match('baz', 'foobar')
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - flags (
int
) – Regex flags that will be passed to there.match
function. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status.
Return type: bool
- regexp (
-
search
(regexp, value, description=None, category=None, flags=0)[source]¶ Checks if the given
regexp
exists in thevalue
viare.search
operation.result.regex.search('bar', 'foobarbaz')
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - flags (
int
) – Regex flags that will be passed to there.search
function. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- regexp (
-
search_empty
(regexp, value, description=None, category=None, flags=0)[source]¶ Checks if the given
regexp
does not exist in thevalue
viare.search
operation.result.regex.search_empty('aaa', 'foobarbaz')
Parameters: - regexp (
str
or compiled regex) – String pattern or compiled regexp object. - value (
str
) – String to match against. - flags (
int
) – Regex flags that will be passed to there.search
function. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- regexp (
-
-
class
testplan.testing.result.
Result
(stdout_style=None, continue_on_failure=True, _group_description=None, _parent=None, _summarize=False, _num_passing=5, _num_failing=5, _scratch=None)[source]¶ Bases:
object
Contains assertion methods and namespaces for generating test data. A new instance of
Result
object is passed to each testcase when a suite is run.-
attach
(path, description=None, ignore=None, only=None, recursive=False)[source]¶ Attaches a file to the report.
Parameters: - path (
str
) – Path to the file or directory to be attached. - description (
str
) – Text description for the assertion. - ignore (
list
orNoneType
) – List of patterns of file name to ignore when attaching a directory. - only (
list
orNoneType
) – List of patterns of file name to include when attaching a directory. - recursive (
bool
) – Recursively traverse sub-directories and attach all files, default is to only attach files in top directory.
Returns: Always returns True, this is not an assertion so it cannot fail.
Return type: bool
- path (
-
conditional_log
(condition, log_message, log_description, fail_description, flag=None)[source]¶ A compound assertion that does result.log() or result.fail() depending on the truthiness of condition.
result.conditional_log( some_condition, log_message, log_description, fail_description, )
is a shortcut for writing:
if some_condition: result.log(log_message, description=log_description) else: result.fail(fail_description)
Parameters: - condition – Value to be evaluated for truthiness
- condition –
object
- log_message (
str
) – Message to pass to result.log if condition evaluates to True. - log_description (
str
) – Description to pass to result.log if condition evaluates to True. - fail_description (
str
) – Description to pass to result.fail if condition evaluates to False. - flag – Custom flag of the assertion which is reserved and can be used for some special purpose.
Returns: True
Return type: bool
-
contain
(member, container, description=None, category=None)[source]¶ Checks if
member in container
.result.contain(1, [1, 2, 3, 4], 'Custom description')
Parameters: - member (
object
) – Item to be checked for existence in the container. - container (
object
) – Container object, should support item lookup operations. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- member (
-
diff
(first, second, ignore_space_change=False, ignore_whitespaces=False, ignore_blank_lines=False, unified=False, context=False, description=None, category=None)[source]¶ Line diff assertion. Fails if at least one difference is found.
text1 = 'a b c\nd\n' text2 = 'a b c\nd\t\n' result.diff(text1, text2, ignore_space_change=True)
Parameters: - first (
str
orlist
) – The first piece of textual content to be compared. - second (
str
orlist
) – The second piece of textual content to be compared. - ignore_space_change (
bool
) – Ignore changes in the amount of whitespace. - ignore_whitespaces (
bool
) – Ignore all white space. - ignore_blank_lines (
bool
) – Ignore changes whose lines are all blank. - unified (
bool
orint
) – If truth value, output differences in unified context. Use an integer to specify the number of lines of leading context before matching lines and trailing context after matching lines. Defaults to 3. - context (
bool
orint
) – If truth value, output differences in copied context. Use an integer to specify the number of lines of leading context before matching lines and trailing context after matching lines. Defaults to 3.
Returns: Assertion pass status
Return type: bool
- first (
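The line diff behaviour can be approximated with the standard difflib module. This is an illustration of the semantics only, not testplan's implementation; `line_diff` is a hypothetical helper:

```python
import difflib

def line_diff(first, second, unified=False, context_lines=3):
    # Return the differing lines between two texts; an empty result
    # means the diff assertion would pass.
    a = first.splitlines() if isinstance(first, str) else first
    b = second.splitlines() if isinstance(second, str) else second
    if unified:
        return list(difflib.unified_diff(a, b, n=context_lines, lineterm=''))
    # ndiff prefixes unchanged lines with two spaces; keep only changes.
    return [line for line in difflib.ndiff(a, b) if not line.startswith(' ')]

same = line_diff('a b c\nd\n', 'a b c\nd\n')      # no differences
changed = line_diff('a b c\nd\n', 'a b c\nd\t\n')  # trailing tab differs
```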
-
eq
(actual, expected, description=None, category=None)¶ Equality assertion, checks if
actual == expected
. Can be used via shortcut:result.eq
.result.equal('foo', 'foo', 'Custom description')
Parameters: - actual (
object
) – First (actual) value of the comparison. - expected (
object
) – Second (expected) value of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
-
equal
(actual, expected, description=None, category=None)[source]¶ Equality assertion, checks if
actual == expected
. Can be used via shortcut:result.eq
.result.equal('foo', 'foo', 'Custom description')
Parameters: - actual (
object
) – First (actual) value of the comparison. - expected (
object
) – Second (expected) value of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
-
equal_exclude_slices
(actual, expected, slices, description=None, category=None)[source]¶ Checks if items that exist outside the given slices of
actual
andexpected
are equal.result.equal_exclude_slices( [1, 2, 3, 4, 5, 6, 7, 8], ['a', 'b', 3, 4, 'c', 'd', 'e', 'f'], slices=[slice(0, 2), slice(4, 8)], description='Comparison of slices (exclusion)' )
Parameters: - actual (
object
that supports slice operations.) – First (actual) value of the comparison. - expected (
object
that supports slice operations.) – Second (expected) value of the comparison. - slices (
list
ofslice
) – Slices that will be used for exclusion of items fromactual
andexpected
. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
-
equal_slices
(actual, expected, slices, description=None, category=None)[source]¶ Checks if given slices of
actual
andexpected
are equal.result.equal_slices( [1, 2, 3, 4, 5, 6, 7, 8], ['a', 'b', 3, 4, 'c', 'd', 7, 8], slices=[slice(2, 4), slice(6, 8)], description='Comparison of slices' )
Parameters: - actual (
object
that supports slice operations.) – First (actual) value of the comparison. - expected (
object
that supports slice operations.) – Second (expected) value of the comparison. - slices (
list
ofslice
) – Slices that will be applied toactual
andexpected
. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
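The slice-based comparison can be sketched in plain Python (a simplified illustration of the semantics; `equal_slices` here is a hypothetical standalone helper, not the testplan method):

```python
def equal_slices(actual, expected, slices):
    # Compare only the items selected by the given slices.
    return all(actual[s] == expected[s] for s in slices)

# Only indices 2-3 and 6-7 are compared; the rest are ignored.
passed = equal_slices(
    [1, 2, 3, 4, 5, 6, 7, 8],
    ['a', 'b', 3, 4, 'c', 'd', 7, 8],
    slices=[slice(2, 4), slice(6, 8)],
)
```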
-
fail
(message: str, description: Optional[str] = None, flag: Optional[str] = None, category: Optional[str] = None) → testplan.testing.multitest.entries.assertions.Fail[source]¶ Failure assertion, can be used for explicitly failing a testcase. The message will be included by the email exporter. Most common usage is within a conditional block.
if some_condition: result.fail('Unexpected failure: {}'.format(...))
Parameters: - description – Text description of the failure.
- category – Custom category that will be used for summarization.
- flag – Custom flag, a reserved parameter.
Returns: False
-
false
(value, description=None, category=None)[source]¶ Boolean assertion, checks if
value
is falsy.result.false(some_obj, 'Custom description')
Parameters: - value (
object
) – Value to be evaluated for falsiness. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- value (
-
ge
(first, second, description=None, category=None)¶ Checks if
first >= second
. Can be used via shortcut:result.ge
result.greater_equal(5, 3, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
get_namespaces
()[source]¶ Can be overridden by child classes to enable custom assertion namespaces.
-
graph
(graph_type, graph_data, description, series_options, graph_options)[source]¶ Displays a Graph in the report.
result.graph('Line', { 'graph 1':[{'x': 0, 'y': 8},{'x': 1, 'y': 5}] }, description='Line Graph', series_options={'graph 1':{"colour": "red"}}, graph_options=None)
Parameters: - graph_type (
str
) – Type of graph user wants to create. Currently implemented: ‘Line’, ‘Scatter’, ‘Bar’, ‘Hexbin’, ‘Pie’, ‘Whisker’, ‘Contour’ - graph_data (
dict[str, list]
) – Data to plot on the graph, for each series. - description (
str
) – Text description for the graph. - series_options (
dict[str, dict[str, object]]
.) – Customisation parameters for each individual series. Currently implemented: 1){‘Colour’:str
} - colour of that series (str can be either basic colour name or RGB) - graph_options (
dict[str, object]
.) – Customisation parameters for the overall graph. Currently implemented:
1){‘xAxisTitle’:str
} - x axis graph title 2){‘yAxisTitle’:str
} - y axis graph title 3){‘legend’:bool
} - whether to display the legend (default: False)
- graph_type (
-
greater
(first, second, description=None, category=None)[source]¶ Checks if
first > second
. Can be used via shortcut:result.gt
result.greater(5, 3, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
greater_equal
(first, second, description=None, category=None)[source]¶ Checks if
first >= second
. Can be used via shortcut:result.ge
result.greater_equal(5, 3, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
group
(description=None, summarize=False, num_passing=5, num_failing=5)[source]¶ Creates an assertion group or summary, which is helpful for formatting assertion data on certain output targets (e.g. PDF, JSON) and reducing the amount of content that gets displayed.
Should be used as a context manager.
# Group and sub groups with result.group(description='Custom group description') as group: group.not_equal(2, 3, description='Assertion within a group') group.greater(5, 3) with group.group() as sub_group: sub_group.less(6, 3, description='Assertion in sub group') # Summary example with result.group( summarize=True, num_passing=4, num_failing=10, ) as group: for i in range(500): # First 4 passing assertions will be displayed group.equal(i, i) # First 10 failing assertions will be displayed group.equal(i, i + 1)
Parameters: - description (
str
) – Text description for the assertion group. - summarize (
bool
) – Flag for enabling summarization. - num_passing (
int
) – Max limit for number of passing assertions per category & assertion type. - num_failing (
int
) – Max limit for number of failing assertions per category & assertion type.
Returns: A new result object that refers the current result as a parent.
Return type: Result object
- description (
-
gt
(first, second, description=None, category=None)¶ Checks if
first > second
. Can be used via shortcut:result.gt
result.greater(5, 3, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
isclose
(first, second, rel_tol=1e-09, abs_tol=0.0, description=None, category=None)[source]¶ Checks if
first
andsecond
are approximately equal.result.isclose(99.99, 100, 0.001, 0.0, 'Custom description')
Parameters: - first (
numbers.Number
) – The first item to be compared for approximate equality. - second (
numbers.Number
) – The second item to be compared for approximate equality. - rel_tol (
numbers.Real
) – The relative tolerance. - abs_tol (
numbers.Real
) – The minimum absolute tolerance level.
Returns: Assertion pass status
Return type: bool
- first (
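The tolerance semantics appear to mirror those of the standard library's math.isclose: the comparison passes when abs(first - second) is within max(rel_tol * max(abs(first), abs(second)), abs_tol). A runnable illustration using math.isclose directly:

```python
import math

# Passes: |99.99 - 100| = 0.01 is within rel_tol * 100 = 0.1.
close = math.isclose(99.99, 100, rel_tol=0.001, abs_tol=0.0)

# Fails: with rel_tol of 1e-09 the allowed error is only ~1e-07.
far = math.isclose(99.99, 100, rel_tol=1e-09, abs_tol=0.0)
```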
-
le
(first, second, description=None, category=None)¶ Checks if
first <= second
. Can be used via shortcut:result.le
result.less_equal(5, 3, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
less
(first, second, description=None, category=None)[source]¶ Checks if
first < second
. Can be used via shortcut:result.lt
result.less(3, 5, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
less_equal
(first, second, description=None, category=None)[source]¶ Checks if
first <= second
. Can be used via shortcut:result.le
result.less_equal(5, 3, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
log
(message, description=None, flag=None)[source]¶ Create a string message entry, can be used for providing additional context related to test steps.
result.log('Custom log message ...')
Parameters: - message (
str
or instance) – Log message - description (
str
) – Text description for the assertion. - flag (
str
orNoneType
) – Custom flag of the assertion which is reserved and can be used for some special purpose.
Returns: Always returns True, this is not an assertion so it cannot fail.
Return type: bool
- message (
-
log_code
(code, language='python', description=None)[source]¶ Create a codelog message entry which contains code snippet, can be used for providing additional context related to test steps.
Parameters: - code (
str
) – The source code string. - language (
str
) – The language of source code. e.g. js, xml, python, java, c, cpp, bash. Defaults to python. - description (
str
) – Text description for the assertion.
Returns: True
Return type: bool
- code (
-
log_html
(code, description='Embedded HTML')[source]¶ Create an HTML message entry (a markdown entry without escaping), can be used for providing additional context related to test steps.
Parameters: - code (
str
) – HTML code string. Tag <script> will not be executed. - description (
str
) – Text description for the assertion.
Returns: True
Return type: bool
- code (
-
lt
(first, second, description=None, category=None)¶ Checks if
first < second
. Can be used via shortcut:result.lt
result.less(3, 5, 'Custom description')
Parameters: - first (
object
) – Left side of the comparison. - second (
object
) – Right side of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- first (
-
markdown
(message, description=None, escape=True)[source]¶ Create a markdown message entry, can be used for providing additional context related to test steps.
result.markdown( 'Markdown string ....', description='Test', escape=False )
Parameters: - message (
str
) – Markdown string - description (
str
) – Text description for the assertion. - escape (
bool
) – Escape HTML.
Returns: True
Return type: bool
- message (
-
matplot
(pyplot, width=None, height=None, description=None)[source]¶ Displays a Matplotlib plot in the report.
Parameters: - pyplot (
matplotlib.pyplot
) – Matplotlib pyplot object to be displayed. - width (
int
) – Figure width in inches, use pyplot default if not specified - height (
int
) – Figure height in inches, use pyplot default if not specified - description (
str
) – Text description for the assertion.
Returns: Always returns True, this is not an assertion so it cannot fail.
Return type: bool
- pyplot (
-
namespaces
= {'dict': <class 'testplan.testing.result.DictNamespace'>, 'fix': <class 'testplan.testing.result.FixNamespace'>, 'logfile': <class 'testplan.testing.result.LogfileNamespace'>, 'regex': <class 'testplan.testing.result.RegexNamespace'>, 'table': <class 'testplan.testing.result.TableNamespace'>, 'xml': <class 'testplan.testing.result.XMLNamespace'>}¶
-
ne
(actual, expected, description=None, category=None)¶ Inequality assertion, checks if
actual != expected
. Can be used via shortcut:result.ne
.result.not_equal('foo', 'bar', 'Custom description')
Parameters: - actual (
object
) – First (actual) value of the comparison. - expected (
object
) – Second (expected) value of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
-
not_contain
(member, container, description=None, category=None)[source]¶ Checks if
member not in container
.result.not_contain(5, [1, 2, 3, 4], 'Custom description')
Parameters: - member (
object
) – Item to be checked for absence from the container. - container (
object
) – Container object, should support item lookup operations. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- member (
-
not_equal
(actual, expected, description=None, category=None)[source]¶ Inequality assertion, checks if
actual != expected
. Can be used via shortcut:result.ne
.result.not_equal('foo', 'bar', 'Custom description')
Parameters: - actual (
object
) – First (actual) value of the comparison. - expected (
object
) – Second (expected) value of the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
-
not_raises
(exceptions, description=None, category=None, pattern=None, func=None)[source]¶ Checks if given code block does not raise certain type(s) of exception(s).
Supports further checks via
pattern
andfunc
arguments.with result.not_raises(AttributeError): {'foo': 3}['bar'] with result.not_raises(ValueError, pattern='foo'): raise ValueError('abc xyz') def check_exception(exc): ... with result.not_raises(TypeError, func=check_exception): raise TypeError(...)
Parameters: - exceptions (
list
ofException
classes or a singleException
class) – Exception types to check. - pattern (
str
or compiled regex object) – String pattern that will be searched (re.search
) within exception message. - func (
callable
) – Callable that accepts a single argument (the exception object) - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- exceptions (
-
passed
¶ Passed status of the stored entries.
-
raises
(exceptions, description=None, category=None, pattern=None, func=None)[source]¶ Checks if given code block raises certain type(s) of exception(s). Supports further checks via
pattern
andfunc
arguments.with result.raises(KeyError): {'foo': 3}['bar'] with result.raises(ValueError, pattern='foo'): raise ValueError('abc foobar xyz') def check_exception(exc): ... with result.raises(TypeError, func=check_exception): raise TypeError(...)
Parameters: - exceptions (
list
ofException
classes or a singleException
class) – Exception types to check. - pattern (
str
or compiled regex object) – String pattern that will be searched (re.search
) within exception message. - func (
callable
) – Callable that accepts a single argument (the exception object) - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- exceptions (
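The raises checks can be sketched with a minimal context manager. This is an analog of the described behaviour, not testplan's implementation; `expect_raises` is a hypothetical helper:

```python
import re
from contextlib import contextmanager

@contextmanager
def expect_raises(exceptions, pattern=None, func=None):
    # Record whether the body raised an expected exception and
    # whether the optional pattern/func checks passed.
    outcome = {'passed': False}
    try:
        yield outcome
    except exceptions as exc:
        outcome['passed'] = (
            (pattern is None or re.search(pattern, str(exc)) is not None)
            and (func is None or bool(func(exc)))
        )
    # If no exception was raised, 'passed' stays False.

with expect_raises(ValueError, pattern='foo') as outcome:
    raise ValueError('abc foobar xyz')
```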
-
serialized_entries
¶ Return entry data in dictionary form. This will then be stored in related
TestCaseReport
’sentries
attribute.
-
skip
(reason: str, description: Optional[str] = None)[source]¶ Skip a testcase with the given reason.
Parameters: - reason (
str
) – The message to show the user as reason for the skip. - description (
str
) – Text description for the assertion.
- reason (
-
true
(value, description=None, category=None)[source]¶ Boolean assertion, checks if
value
is truthy.result.true(some_obj, 'Custom description')
Parameters: - value (
object
) – Value to be evaluated for truthiness. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- value (
-
-
class
testplan.testing.result.
TableNamespace
(result)[source]¶ Bases:
testplan.testing.result.AssertionNamespace
Contains logic for table assertions.
-
column_contain
(table, values, column, description=None, category=None, limit=None, report_fails_only=False)[source]¶ Checks if each value in a table’s column is present in a given list of values.
result.table.column_contain( table=[ ['symbol', 'amount'], ['AAPL', 12], ['GOOG', 21], ['FB', 32], ['AMZN', 5], ['MSFT', 42] ], values=['AAPL', 'AMZN'], column='symbol', )
Parameters: - table (
list
oflist
orlist
ofdict
.) – Tabular data - values (
iterable
ofobject
) – Values that will be checked against each cell. - column (
str
) – Column name to check. - limit (
int
) – Maximum number of rows to process, can be used for limiting output. - report_fails_only (
bool
) – Filtering option, output will contain failures only if this argument is True. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- table (
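The per-cell membership check can be illustrated in plain Python (a simplified sketch of the semantics; the overall assertion passes only when every cell in the column is one of the given values):

```python
def column_contain(table, values, column):
    # Pass only if every cell in the named column is one of the values.
    header, *rows = table
    idx = header.index(column)
    return all(row[idx] in values for row in rows)

mixed = column_contain(
    [['symbol', 'amount'], ['AAPL', 12], ['GOOG', 21], ['AMZN', 5]],
    values=['AAPL', 'AMZN'],
    column='symbol',
)  # 'GOOG' is not in the values, so this fails
all_listed = column_contain(
    [['symbol', 'amount'], ['AAPL', 12], ['AMZN', 5]],
    values=['AAPL', 'AMZN'],
    column='symbol',
)
```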
-
diff
(actual, expected, description=None, category=None, include_columns=None, exclude_columns=None, report_all=True, fail_limit=0)[source]¶ Finds differences between two tables. Uses equality for plain values in each table cell and supports regex / custom comparators as well. The result will contain only failing comparisons.
If the columns of the two tables are not the same, either
include_columns
orexclude_columns
arguments must be used to have column uniformity.result.table.diff( actual=[ ['name', 'age'], ['Bob', 32], ['Susan', 24], ], expected=[ ['name', 'age'], ['Bob', 33], ['David', 24], ] ) result.table.diff( actual=[ ['name', 'age'], ['Bob', 32], ['Susan', 24], ], expected=[ ['name', 'age'], [re.compile(r'^B\w+'), 33], ['David', lambda age: 20 < age < 50], ] )
Parameters: - actual (
list
oflist
orlist
ofdict
.) – Tabular data - expected (
list
oflist
orlist
ofdict
.) – Tabular data, which can contain custom comparators. - include_columns (
list
ofstr
) – List of columns to include in the comparison. Cannot be used withexclude_columns
. - exclude_columns (
list
ofstr
) – List of columns to exclude from the comparison. Cannot be used withinclude_columns
. - report_all (
bool
) – Boolean flag for configuring output. If True then all columns of the original table will be displayed. - fail_limit (
int
) – Max number of failures before aborting the comparison run. Useful for large tables, when we want to stop after we have N rows that fail the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
- actual (
-
log
(table, display_index=False, description=None)[source]¶ Logs a table to the report.
result.table.log( table=[ ['name', 'age', 'gender'], ['Bob', 32, 'M'], ['Susan', 24, 'F'], ] )
Parameters: - table (
list
oflist
orlist
ofdict
) – Tabular data. - display_index (
bool
) – Flag whether to display row indices. - description (
str
) – Text description for the assertion.
Returns: Always returns True, this is not an assertion so it cannot fail.
Return type: bool
-
match
(actual, expected, description=None, category=None, include_columns=None, exclude_columns=None, report_all=True, fail_limit=0)[source]¶ Compares two tables. Plain values in each cell are compared for equality; regex patterns and custom comparators are supported as well.
If the columns of the two tables are not the same, either
include_columns
orexclude_columns
arguments must be used to have column uniformity.
result.table.match( actual=[ ['name', 'age'], ['Bob', 32], ['Susan', 24], ], expected=[ ['name', 'age'], ['Bob', 33], ['David', 24], ] ) result.table.match( actual=[ ['name', 'age'], ['Bob', 32], ['Susan', 24], ], expected=[ ['name', 'age'], [re.compile(r'^B\w+'), 33], ['David', lambda age: 20 < age < 50], ] )
Parameters: - actual (
list
oflist
orlist
ofdict
.) – Tabular data - expected (
list
oflist
orlist
ofdict
.) – Tabular data, which can contain custom comparators. - include_columns (
list
ofstr
) – List of columns to include in the comparison. Cannot be used withexclude_columns
. - exclude_columns (
list
ofstr
) – List of columns to exclude from the comparison. Cannot be used withinclude_columns
. - report_all (
bool
) – Boolean flag for configuring output. If True then all columns of the original table will be displayed. - fail_limit (
int
) – Max number of failures before aborting the comparison run. Useful for large tables, when we want to stop after we have N rows that fail the comparison. - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
-
-
class
testplan.testing.result.
XMLNamespace
(result)[source]¶ Bases:
testplan.testing.result.AssertionNamespace
Contains logic for XML related assertions.
-
check
(element, xpath, description=None, category=None, tags=None, namespaces=None)[source]¶ Checks whether the given xpath and tags exist in the XML body. Namespace-based matching is supported as well.
result.xml.check( element=''' <Root> <Test>Value1</Test> <Test>Value2</Test> </Root> ''', xpath='/Root/Test', tags=['Value1', 'Value2'], ) result.xml.check( element=''' <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header/> <SOAP-ENV:Body> <ns0:message xmlns:ns0="http://testplan">Hello world!</ns0:message> </SOAP-ENV:Body> </SOAP-ENV:Envelope> ''', xpath='//*/a:message', tags=[re.compile(r'Hello*')], namespaces={"a": "http://testplan"}, )
Parameters: - element (
str
orlxml.etree.Element
) – XML element - xpath (
str
) – XPath expression to be used for navigation & check. - tags (
list
ofstr
or compiled regex patterns) – Tag values to match against in the given xpath. - namespaces (
dict
) – Prefix mapping for xpath expressions. (namespace prefixes as keys and URIs for values.) - description (
str
) – Text description for the assertion. - category (
str
) – Custom category that will be used for summarization.
Returns: Assertion pass status
Return type: bool
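The tag matching performed here can be approximated with the standard library. The sketch below uses `xml.etree.ElementTree` rather than the lxml-style XPath the examples above use, so absolute paths like `/Root/Test` become relative ones such as `./Test`; `check_xml` is a hypothetical helper, not the actual implementation:

```python
import re
import xml.etree.ElementTree as ET

def check_xml(element, path, tags, namespaces=None):
    """Return True if the text of each element found at `path`
    matches the corresponding entry in `tags`: plain string
    equality, or a regex search for compiled patterns.
    (Hypothetical helper, not the testplan implementation.)"""
    root = ET.fromstring(element)
    found = root.findall(path, namespaces or {})
    if len(found) != len(tags):
        return False
    for node, tag in zip(found, tags):
        if isinstance(tag, re.Pattern):
            if not tag.search(node.text or ''):
                return False
        elif node.text != tag:
            return False
    return True

xml_body = '<Root><Test>Value1</Test><Test>Value2</Test></Root>'
print(check_xml(xml_body, './Test', ['Value1', 'Value2']))  # True

soap = ('<e:Envelope xmlns:e="http://schemas.xmlsoap.org/soap/envelope/">'
        '<e:Body><m:message xmlns:m="http://testplan">Hello world!</m:message>'
        '</e:Body></e:Envelope>')
print(check_xml(soap, './/a:message', [re.compile(r'Hello')],
                namespaces={'a': 'http://testplan'}))  # True
```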
-
-
testplan.testing.result.
report_target
(func: Callable, ref_func: Callable = None) → Callable[source]¶ Sets the decorated function’s filepath and line-range in assertion state. If the target function is a parametrized (generated) function, ref_func should refer to its parametrized template so that information about the original function can be found.
Parameters: - func – The target function about which the information of source path and line range will be retrieved.
- ref_func – The parametrized template if func is a generated
function, otherwise
None
.
testplan.testing.tagging module¶
Generic Tagging logic.
Return True if every tag set in tag_arg_dict is a subset of the matching category in target_tag_dict.
Return True if there is at least one match for a category.
-
testplan.testing.tagging.
merge_tag_dicts
(*tag_dicts)[source]¶ Utility function for merging tag dicts for easy comparisons.
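A minimal sketch of what such a merge utility does, assuming tag dicts map category names to sets of tag values (an illustration only, not the actual implementation):

```python
def merge_tag_dicts(*tag_dicts):
    """Merge dicts of {category: set of tags}, taking the
    union of the sets for matching categories. (Sketch of
    the described utility, not the testplan implementation.)"""
    merged = {}
    for tag_dict in tag_dicts:
        for category, tags in tag_dict.items():
            merged.setdefault(category, set()).update(tags)
    return merged

result = merge_tag_dicts(
    {'simple': {'foo'}},
    {'simple': {'bar'}, 'color': {'red'}},
)
print(result == {'simple': {'foo', 'bar'}, 'color': {'red'}})  # True
```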
-
testplan.testing.tagging.
parse_tag_arguments
(*tag_arguments)[source]¶ Parse command line tag arguments into a dictionary of sets.
For the call below:
--tags foo bar named-tag=one,two named-tag=three hello=world
We will get:
[ {'simple': {'foo'}}, {'simple': {'bar'}}, {'named_tag': {'one', 'two'}}, {'named_tag': {'three'}}, {'hello': {'world'}} ]
The repeated tag values will later on be grouped together via TagsAction.
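The parsing rule illustrated above (simple tags versus `name=value1,value2` named tags, with dashes normalized to underscores) can be sketched as follows; `parse_tag_argument` is a hypothetical single-argument helper used for illustration, not the actual `parse_tag_arguments` implementation:

```python
def parse_tag_argument(argument):
    """Parse one command-line tag argument into a dict of sets:
    'foo' -> {'simple': {'foo'}}, and
    'named-tag=one,two' -> {'named_tag': {'one', 'two'}}.
    (Hypothetical helper, not the testplan implementation.)"""
    if '=' in argument:
        name, _, values = argument.partition('=')
        return {name.replace('-', '_'): set(values.split(','))}
    return {'simple': {argument}}

args = ['foo', 'bar', 'named-tag=one,two', 'named-tag=three', 'hello=world']
print([parse_tag_argument(arg) for arg in args])
```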
-
testplan.testing.tagging.
tag_label
(tag_dict)[source]¶ Return tag data in readable format.
>>> tag_dict = { 'simple': set(['foo', 'bar']), 'tag_group_1': set(['some-value']), 'other_group': set(['one', 'two', 'three']) }
>>> tag_label(tag_dict) Tags: foo bar tag_group_1=some-value other_group=one,two,three
-
testplan.testing.tagging.
validate_tag_value
(tag_value)[source]¶ Validate a tag value, make sure it is of correct type. Return a tag dict for internal representation.
Sample input / output:
'foo' -> {'simple': {'foo'}} ('foo', 'bar') -> {'simple': {'foo', 'bar'}} {'color': 'red'} -> {'color': {'red'}} {'color': ('red', 'blue')} -> {'color': {'red', 'blue'}}
Parameters: tag_value ( string
,iterable
ofstring
or adict
withstring
keys andstring
oriterable
ofstring
as values.) – User defined tag value.Returns: Internal representation of the tag context. Return type: dict
ofset
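Based on the sample input / output above, the normalization can be sketched as follows (a simplified illustration that skips the type validation the real function performs):

```python
def validate_tag_value(tag_value):
    """Normalize a user-supplied tag value into the internal
    {category: set of tags} representation shown above.
    (Simplified sketch, not the testplan implementation.)"""
    if isinstance(tag_value, str):
        return {'simple': {tag_value}}
    if isinstance(tag_value, dict):
        return {
            key: {val} if isinstance(val, str) else set(val)
            for key, val in tag_value.items()
        }
    return {'simple': set(tag_value)}

print(validate_tag_value('foo') == {'simple': {'foo'}})                  # True
print(validate_tag_value(('foo', 'bar')) == {'simple': {'foo', 'bar'}})  # True
print(validate_tag_value({'color': ('red', 'blue')})
      == {'color': {'red', 'blue'}})                                     # True
```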
testplan.testing.py_test module¶
PyTest test runner.
-
class
testplan.testing.py_test.
PyTest
(name, target, description=None, select='', extra_args=None, result=<class 'testplan.testing.result.Result'>, **options)[source]¶ Bases:
testplan.testing.base.Test
PyTest plugin for Testplan. Allows tests written for PyTest to be run from Testplan, with the test results logged and included in the Testplan report.
Parameters: - name (
str
) – Test instance name, often used as uid of test entity. - target (
str
orlist
ofstr
) – Target of PyTest configuration. - description (
str
) – Description of test instance. - select (
str
) – Selection of PyTest configuration. - extra_args (
NoneType
orlist
ofstr
) – Extra arguments passed to pytest. - result (
Result
) – Result that contains assertion entries.
Also inherits all
Test
options.-
CONFIG
¶ alias of
PyTestConfig
-
get_test_context
()[source]¶ Inspect the test suites and cases by running PyTest with the --collect-only flag and passing in our collection plugin.
Returns: List containing pairs of suite name and testcase names. Return type: List[Tuple[str, List[str]]]
-
run_testcases_iter
(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]¶ Run all testcases and yield testcase reports.
Parameters: - testsuite_pattern – pattern to match for testsuite names
- testcase_pattern – pattern to match for testcase names
- shallow_report – shallow report entry
Returns: generator yielding testcase reports and UIDs for merge step
testplan.testing.pyunit module¶
PyUnit test runner.
-
class
testplan.testing.pyunit.
PyUnit
(name, testcases, description=None, **kwargs)[source]¶ Bases:
testplan.testing.base.Test
Test runner for PyUnit unit tests.
Parameters: - name (
str
) – Test instance name, often used as uid of test entity. - testcases (
TestCase
) – PyUnit testcases. - description (
str
) – Description of test instance.
Also inherits all
Test
options.-
CONFIG
¶ alias of
PyUnitConfig
-
get_test_context
()[source]¶ Currently we do not inspect individual PyUnit testcases; only the whole suite can be run.
-
run_testcases_iter
(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]¶ Run all testcases and yield testcase reports.
Parameters: - testsuite_pattern – pattern to match for testsuite names
- testcase_pattern – pattern to match for testcase names
- shallow_report – shallow report entry
Returns: generator yielding testcase reports and UIDs for merge step
-
class
testplan.testing.pyunit.
PyUnitConfig
(**options)[source]¶ Bases:
testplan.testing.base.TestConfig
Configuration object for :py:class:`~testplan.testing.pyunit.PyUnit` test runner.
testplan.testing.junit module¶
JUnit test runner.
-
class
testplan.testing.junit.
JUnit
(name, binary, results_dir, junit_args=None, junit_filter=None, **options)[source]¶ Bases:
testplan.testing.base.ProcessRunnerTest
Subprocess test runner for JUnit: https://junit.org/junit5/docs/current/user-guide/
Please note that the test (either a native binary or a script) should generate an XML-format report so that Testplan is able to parse the result.
gradle test
Parameters: - name (
str
) – Test instance name, often used as uid of test entity. - binary (
str
) – Path to the gradle binary or script. - description (
str
) – Description of test instance. - junit_args (
NoneType
orlist
) – Customized command-line arguments for the JUnit test. - results_dir (
str
) – Directory where the test XML report is saved. - junit_filter (
NoneType
orlist
) – Customized command line arguments for filtering testcases.
Also inherits all
ProcessRunnerTest
options.-
CONFIG
¶ alias of
JUnitConfig
-
list_command_filter
(testsuite_pattern, testcase_pattern)[source]¶ Return the base list command with additional filtering to list a specific set of testcases.
-
class
testplan.testing.junit.
JUnitConfig
(**options)[source]¶ Bases:
testplan.testing.base.ProcessRunnerTestConfig
Configuration object for :py:class:`~testplan.testing.junit.JUnit` test runner.