testplan.testing.multitest package

Subpackages

Submodules

testplan.testing.multitest.base module

MultiTest test execution framework.

class testplan.testing.multitest.base.MultiTest(name: str, suites, description=None, initial_context={}, environment=[], dependencies=None, thread_pool_size=0, max_thread_pool_size=10, part=None, multi_part_uid=None, before_start=None, after_start=None, before_stop=None, after_stop=None, stdout_style=None, tags=None, result=<class 'testplan.testing.result.Result'>, fix_spec_path=None, testcase_report_target=True, **options)[source]

Bases: testplan.testing.base.Test

Starts a local Environment of Driver instances and executes testsuites against it.

Parameters:
  • name – Test instance name, often used as uid of test entity.
  • suites (list) – List of @testsuite decorated class instances containing @testcase decorated methods representing the tests.
  • description (str) – Description of test instance.
  • thread_pool_size (int) – Size of the thread pool which executes testcases with execution_group specified in parallel (default 0 means no pool).
  • max_thread_pool_size (int) – Maximum number of threads allowed in the pool.
  • stop_on_error (bool) – When an exception is raised, stop executing the remaining testcases in the current test suite. Default: True
  • part (tuple of (int, int)) – Execute only a part of the total testcases. MultiTest needs to know which part of the total it is. Only works with MultiTest.
  • multi_part_uid (callable) – Custom function to override the uid of the test entity if the part attribute is defined; otherwise the default implementation is used.
  • fix_spec_path (NoneType or str.) – Path of fix specification file.
  • testcase_report_target (bool) – Whether to mark testcases as assertions for filepath and line number information

Also inherits all Test options.

CONFIG

alias of MultiTestConfig

DEFAULT_THREAD_POOL_SIZE = 5
ENVIRONMENT

alias of testplan.testing.environment.base.TestEnvironment

RESULT

alias of testplan.testing.base.TestResult

STATUS

alias of testplan.common.entity.base.RunnableStatus

abort()

Default abort policy. First abort all dependencies and then itself.

abort_dependencies()

Yield all dependencies to be aborted before self abort.

aborted

Returns whether the entity was aborted.

aborting()[source]

Suppress the "not implemented" debug log emitted by the parent class.

active

Entity not aborting/aborted.

add_main_batch_steps()[source]

Runnable steps to be executed while environment is running.

add_post_main_steps() → None

Runnable steps to run before the environment is stopped.

add_post_resource_steps()[source]

Runnable steps to run after the environment is stopped.

add_pre_main_steps() → None

Runnable steps to run after the environment has started.

add_pre_resource_steps()[source]

Runnable steps to be executed before environment starts.

add_resource(resource: testplan.common.entity.base.Resource, uid: Optional[str] = None)

Adds a resource to the runnable environment.

Parameters:
  • resource (Subclass of Resource) – Resource to be added.
  • uid (str or NoneType) – Optional input resource uid.
Returns:

Resource uid assigned.

Return type:

str

add_start_resource_steps() → None

Runnable steps to start the environment.

add_stop_resource_steps() → None

Runnable steps to stop the environment.

apply_xfail_tests()[source]

Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…). For MultiTest, we only apply MT:: & MT:TS:* here. Testcase-level xfail has already been applied during test execution.

cfg

Configuration object.

context_input(exclude: list = None) → Dict[str, Any]

All attributes of self in a dict, for context resolution.

define_runpath()

Define runpath directory based on parent object and configuration.

description
dry_run() → None

Return an empty report skeleton for this test, including the whole testsuite / testcase hierarchy. Does not run any tests.

filter_levels = [<FilterLevel.TEST: 'test'>, <FilterLevel.TESTSUITE: 'testsuite'>, <FilterLevel.TESTCASE: 'testcase'>]
classmethod filter_locals(local_vars)

Filter out init params of None value; they will take the default value defined in their ConfigOption object. Also filter out special vars that are not init params from local_vars.

Parameters:local_vars
get_filter_levels() → List[testplan.testing.filtering.FilterLevel]
get_metadata() → testplan.testing.multitest.test_metadata.TestMetadata[source]
get_stdout_style(passed: bool)

Stdout style for status.

get_tags_index()[source]

Tags index for a multitest is its native tags merged with tag indices from all of its suites. (Suite tag indices also contain the tag indices of their testcases.)

get_test_context()[source]

Return filtered & sorted list of suites & testcases via cfg.test_filter & cfg.test_sorter.

Returns:Test suites and the testcases that belong to them.
Return type:list of tuple
i
interactive
log_test_results(top_down: bool = True)

Log test results, e.g. for ProcessRunnerTest or PyTest.

Parameters:top_down – Whether to log test results using a top-down approach rather than a bottom-up one.
logger

logger object

make_runpath_dirs()

Creates runpath related directories.

name

Instance name.

parent

Returns parent Entity.

pause()

Pauses entity execution.

pausing()

Pauses the resource.

post_step_call(step)[source]

Callable to be executed after each step.

pre_step_call(step)

Callable to be invoked before each step.

propagate_tag_indices() → None

Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.

report

Shortcut for the test report.

reset_context() → None
resources

Returns the Environment of Resources.

result

Returns a RunnableResult

resume()

Resumes entity execution.

resuming()

Resumes the resource.

run()

Executes the defined steps and populates the result object.

run_result()

Returns whether the run was successful.

run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co][source]

Run all testcases and yield testcase reports.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge steps

run_tests()[source]

Run all tests as a batch and return the results.

runpath

Path to be used for temp/output files by entity.

scratch

Path to be used for temp files by entity.

set_discover_path(path: str) → None

If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.

Parameters:path – The absolute path where the task has been discovered.

set_part(part: Tuple[int, int]) → None[source]
Parameters:part – Enable the part feature and execute only a part of the total testcases for an existing Multitest.
setup()[source]

MultiTest pre-running routines.

At this point the related resources have not been set up yet, although all necessary wires have been connected.

should_log_test_result(depth: int, test_obj, style) → Tuple[bool, int]

Whether to log test result and if yes, then with what indent.

Returns:

whether to log test results (Suite report, Testcase report, or result of assertions) and the indent that should be kept at start of lines

Raises:
  • ValueError – if met with an unexpected test group category
  • TypeError – if met with an unsupported test object
should_run()[source]

MultiTest filters are applied in get_test_context, so we just check that test_context is not empty.

skip_step(step)[source]

Check if a step should be skipped.

start_test_resources() → None

Start all test resources but do not run any tests. Used in interactive mode, where environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.

status

Status object.

stdout_style

Stdout style input.

stop_test_resources() → None

Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by subclasses.

suites

Input list of suites.

teardown()

Teardown step to be executed last.

test_context
uid()[source]

Instance name uid. A Multitest part instance should not have the same uid as its name.

unset_part() → None[source]

Disable the part feature.

wait(target_status, timeout=None)

Wait until the object's status becomes the target status.

Parameters:
  • target_status (str) – expected status
  • timeout (int or NoneType) – timeout in seconds
class testplan.testing.multitest.base.MultiTestConfig(**options)[source]

Bases: testplan.testing.base.TestConfig

Configuration object for MultiTest runnable test execution framework.

classmethod build_schema()

Build a validation schema using the config options defined in this class and its parent classes.

denormalize()

Create new config object that inherits all explicit attributes from its parents as well.

get_local(name, default=None)

Returns a local config setting (not from container)

classmethod get_options()[source]

Runnable specific config options.

ignore_extra_keys = False
parent

Returns the parent configuration.

set_local(name, value)

Set a local config setting without any validation.

class testplan.testing.multitest.base.MultiTestRuntimeInfo[source]

Bases: object

This class provides information about the state of the actual test run, accessible from a testcase through the environment as: env.runtime_info

Currently only the actual testcase name is accessible, as env.runtime_info.testcase.name; more info to come.

class TestcaseInfo[source]

Bases: object

name = None
report = None
class testplan.testing.multitest.base.RuntimeEnvironment(environment: testplan.common.entity.base.Environment, runtime_info: testplan.testing.multitest.base.MultiTestRuntimeInfo)[source]

Bases: object

A collection of resources accessible through either items or named attributes, representing a test environment instance with runtime information about the currently executing testcase.

This class is a tiny wrapper around the Environment of a Test; it delegates all calls to it, but adds a runtime_info which serves the runtime information of the current thread of execution.

testplan.testing.multitest.base.deprecate_stop_on_error(user_input)[source]
testplan.testing.multitest.base.iterable_suites(obj)[source]

Create an iterable suites object.

testplan.testing.multitest.logging module

class testplan.testing.multitest.logging.AutoLogCaptureMixin[source]

Bases: testplan.testing.multitest.logging.LogCaptureMixin

capture_log(result, capture_level=None, attach_log=None, format=None)

Context manager to capture logs; the captured log is injected into the provided result.

Parameters:
  • result – The result in which to inject the log
  • capture_level (CaptureLevel) – The level at which logs are captured: TESTSUITE (default), TESTPLAN or ROOT
  • attach_log (bool) – If True, the logs are captured to a file and then attached to the result
  • format (str) – A format string passed to the log handler
Returns:

The suite-level logger

Return type:

logging.Logger

log_capture_config
logger

logger object

post_testcase(name, env, result)[source]
pre_testcase(name, env, result)[source]
select_loggers(capture_level)
class testplan.testing.multitest.logging.CaptureLevel[source]

Bases: object

Capture-level enum-like object.

ROOT:
Capture all logs reaching the root logger; this includes all testplan logs plus logs from other libraries
TESTPLAN:
Capture all testplan logs, e.g. driver logs
TESTSUITE:
Capture whatever is logged from the testcases
static OTHER(suite)
ROOT = (<staticmethod object>, <staticmethod object>)
static TESTPLAN(suite)
static TESTSUITE(suite)
class testplan.testing.multitest.logging.LogCaptureConfig[source]

Bases: object

Configuration for log capture

capture_level CaptureLevel:
Initial value: CaptureLevel.TESTSUITE. The level at which logs are captured: TESTSUITE (default), TESTPLAN or ROOT
attach_log bool:
If True, the logs are captured to a file and then attached to the result
format str:
A format string passed to the log handler
class testplan.testing.multitest.logging.LogCaptureMixin[source]

Bases: testplan.common.utils.logger.Loggable

Mixin to add easy logging support to any @multitest.testsuite

capture_log(result, capture_level=None, attach_log=None, format=None)[source]

Context manager to capture logs; the captured log is injected into the provided result.

Parameters:
  • result – The result in which to inject the log
  • capture_level (CaptureLevel) – The level at which logs are captured: TESTSUITE (default), TESTPLAN or ROOT
  • attach_log (bool) – If True, the logs are captured to a file and then attached to the result
  • format (str) – A format string passed to the log handler
Returns:

The suite-level logger

Return type:

logging.Logger

log_capture_config
logger

logger object

select_loggers(capture_level)[source]

testplan.testing.multitest.parametrization module

Parametrization support for test cases.

exception testplan.testing.multitest.parametrization.ParametrizationError[source]

Bases: ValueError

testplan.testing.multitest.parametrization.default_name_func(func_name, kwargs)[source]

Readable testcase name generator for parametrized testcases.

>>> import collections
>>> default_name_func('Test Method',
                      collections.OrderedDict([('foo', 5), ('bar', 10)]))
'Test Method {foo:5, bar:10}'
Parameters:
  • func_name (str) – Name of the parametrization target function.
  • kwargs (collections.OrderedDict) – The order of keys will be the same as the order of arguments in the original function.
Returns:

New readable name testcase method.

Return type:

str

testplan.testing.multitest.parametrization.generate_functions(function, parameters, name, name_func, tag_dict, tag_func, docstring_func, summarize, num_passing, num_failing, key_combs_limit, execution_group, timeout)[source]

Generate testcases using the given parameter context; use name_func to generate their names.

If parameters is of type tuple / list then a new testcase method will be created for each item.

If parameters is of type dict (of tuple/list), then a new method will be created for each item in the Cartesian product of all combinations of values.

Parameters:
  • function (callable) – A testcase method, with extra arguments for parametrization.
  • parameters (list or tuple of dict or tuple / list OR a dict of tuple / list.) – Parametrization context for the test case method.
  • name (str) – Customized readable name for testcase.
  • name_func (callable) – Function for generating names of parametrized testcases, should accept func_name and kwargs as parameters.
  • docstring_func (callable) – Function that will generate docstring, should accept docstring and kwargs as parameters.
  • tag_func (callable) – Function that will be used for generating tags via parametrization kwargs. Should accept kwargs as parameter.
  • tag_dict (dict of set) – Tag annotations to be used for each generated testcase.
  • summarize (bool) – Flag for enabling testcase level summarization of all assertions.
  • num_passing (int) – Max number of passing assertions for testcase level assertion summary.
  • num_failing (int) – Max number of failing assertions for testcase level assertion summary.
  • key_combs_limit (int) – Max number of failed key combinations on fix/dict summaries that contain assertion details.
  • execution_group (str or NoneType) – Name of execution group in which the testcases can be executed in parallel.
  • timeout (int) – Timeout in seconds to wait for testcase to be finished.
Returns:

List of functions that are testcase-compliant (accept self, env, result as arguments) and have unique names.

Return type:

list
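The dict-of-iterables form described above expands via ordinary Cartesian-product semantics, which can be illustrated in plain Python, independently of Testplan (the parameter names here are made up):

```python
import itertools

# Parametrization context in the dict form: each key maps to the
# candidate values for one testcase argument.
parameters = {"mode": ["tcp", "udp"], "size": [1, 1024]}

# One generated testcase per element of the Cartesian product.
keys = list(parameters)
combinations = [
    dict(zip(keys, values))
    for values in itertools.product(*(parameters[k] for k in keys))
]
# 2 modes x 2 sizes -> 4 generated parameter sets
assert len(combinations) == 4
```
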

testplan.testing.multitest.parametrization.parametrization_name_func(func_name, kwargs)[source]

Method name generator for parametrized testcases.

>>> import collections
>>> parametrization_name_func('test_method',
                              collections.OrderedDict([('foo', 5), ('bar', 10)]))
'test_method__foo_5__bar_10'
Parameters:
  • func_name (str) – Name of the parametrization target function.
  • kwargs (collections.OrderedDict) – The order of keys will be the same as the order of arguments in the original function.
Returns:

New testcase method name.

Return type:

str
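The documented mangling scheme can be sketched in plain Python (a hypothetical re-implementation for illustration, not the library's own code):

```python
from collections import OrderedDict


def sketch_parametrization_name(func_name, kwargs):
    # Appends one "__<key>_<value>" segment per parametrized argument,
    # mirroring the documented output 'test_method__foo_5__bar_10'.
    suffix = "".join("__{}_{}".format(k, v) for k, v in kwargs.items())
    return func_name + suffix


name = sketch_parametrization_name(
    "test_method", OrderedDict([("foo", 5), ("bar", 10)])
)
# name == 'test_method__foo_5__bar_10'
```
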

testplan.testing.multitest.result module

testplan.testing.multitest.suite module

Multitest testsuite/testcase module.

testplan.testing.multitest.suite.add_testcase_metadata(func: Callable, metadata: testplan.testing.multitest.test_metadata.TestCaseStaticMetadata)[source]
testplan.testing.multitest.suite.get_suite_metadata(suite: object, include_testcases: bool = True) → testplan.testing.multitest.test_metadata.TestSuiteMetadata[source]
testplan.testing.multitest.suite.get_testcase_desc(suite, testcase_name)[source]

Return the description of the testcase with the given name of the given testsuite.

Remove trailing line returns if applicable; they look nasty in the reports (text and otherwise).

testplan.testing.multitest.suite.get_testcase_metadata(testcase: object)[source]
testplan.testing.multitest.suite.get_testcase_methods(suite)[source]

Return the unbound method objects marked as a testcase from a testsuite class.

testplan.testing.multitest.suite.get_testsuite_desc(suite)[source]

Return the description of the testsuite.

Remove trailing line returns if applicable; they look nasty in the reports (text and otherwise).

testplan.testing.multitest.suite.get_testsuite_name(suite)[source]

Returns the name to be used for the given testsuite. This is made of either the class name or the result of name (a plain string, or a function returning a string) if it exists. The first time this function is called, the suite name is saved for future use.

Parameters:suite (testsuite) – Suite object whose name is needed
Returns:Name of given suite
Return type:str
testplan.testing.multitest.suite.is_testcase(func)[source]

Returns True if the given function is a testcase.

Parameters:func – Function object.
Returns:True if the function is decorated with testcase.
testplan.testing.multitest.suite.propagate_tag_indices(suite, tag_dict)[source]

Update tag indices of the suite instance / class and its children (e.g. testcases, parametrization templates).

For multitest we support multiple levels of tagging:

  1. Multitest (top) level
  2. Suite level
  3. Testcase / parametrization level

When a test suite class is defined, the native tags of the suite are used for updating the tag indices of its unbound testcase methods and vice versa. These tag indices are shared among all instances of the suite.

Later on, when an instance of the suite class is initialized and added to a multitest, the tag indices of the suite object and its bound testcase methods are updated with the multitest object's native tags.

This means different instances of the same suite class may end up having different tag indices if they are added to different multitests that have different native tags.

E.g. when we have a suite & testcases with native tags like:

MySuite -> {'color': 'red'}
    testcase_one -> no tags
    testcase_two -> {'color': 'blue', 'speed': 'fast'}
    parametrized_testcase -> {'color': 'yellow'}
        generated_testcase_1 -> no tags
        generated_testcase_2 -> no tags

We will have the following tag indices:

MySuite -> {'color': {'red', 'blue', 'yellow'}, 'speed': {'fast'}}
    testcase_one -> {'color': 'red'}
    testcase_two -> {'color': {'blue', 'red'}, 'speed': 'fast'}
    parametrized_testcase -> NO TAG INDEX
        generated_testcase_1 -> {'color': {'yellow', 'red'}}
        generated_testcase_2 -> {'color': {'yellow', 'red'}}

Parametrization method templates do not have tag index attribute, as they are not run as tests and their tags are propagated to generated testcases (and eventually up to the suite index).

testplan.testing.multitest.suite.set_testsuite_testcases(suite)[source]

Build the list of testcases to run for the given testsuite. The name of each testcase should be unique.

Parameters:suite (testsuite) – Suite object whose testcases need to be set
Returns:None
Return type:NoneType
testplan.testing.multitest.suite.skip_if(*predicates)[source]

Annotate a testcase with skip predicate(s). The skip predicates will be evaluated before the testsuite is due to be executed and passed the instance of the suite as the sole argument.

The predicate’s signature must name the argument “testsuite” or a MethodSignatureMismatch will be raised.

If any of the predicates is true, then the testcase will be skipped by MultiTest instead of being normally executed.

testplan.testing.multitest.suite.skip_if_testcase(*predicates)[source]

Annotate a suite with skip predicate(s). The skip predicates will be evaluated against each test case method before the testsuite is due to be executed and passed the instance of the suite as the sole argument.

The predicate’s signature must name the argument “testsuite” or a MethodSignatureMismatch will be raised.

If any of the predicates returns true for a test case method then the method will be skipped.

testplan.testing.multitest.suite.testcase(*args, **kwargs)[source]

Annotate a member function as being a testcase.

This checks that the function takes three arguments called self, env, result and will throw if that is not the case.

Although this is somewhat restrictive, it also lessens the chances that wrong signatures (with swapped parameters for example) will cause bugs that can be time-consuming to figure out.

Parameters:
  • name (str or NoneType) – Custom name to be used instead of the function name for the testcase in the test report. In the case of parametrized testcases, this custom name will be used as the parametrized group name in the report.
  • tags (str or tuple(str) or dict(str: str) or dict(str: tuple(str)) or NoneType) – Allows filtering of tests with simple tags or multi-simple tags or named tags or multi-named tags.
  • parameters (list(object) or tuple(special_case) or dict(list(object) or tuple(object)) or NoneType) –

    Enables the creation of more compact testcases using simple or combinatorial parametrization, by allowing you to pass extra arguments to the testcase declaration.

    Note that the special_case must either be: a tuple or list with positional values that correspond to the parametrized argument names in the method definition OR a dict that has matching keys & values to the parametrized argument names OR a single value (that is not a tuple, or list) if and only if there is a single parametrization argument.

  • name_func (callable or NoneType) –

    Custom name-generation algorithm for parametrized testcases. The callable should have a signature like the following:

    name_func(func_name: str, kwargs: collections.OrderedDict) -> str

    Where the parametrized group name (the function name, or the value of the name parameter) is passed as func_name and the input parameters are passed as kwargs.

  • tag_func (callable or NoneType) –

    Dynamic testcase tag assignment function that returns simple tags or named tag context. The signature is:

    tag_func(kwargs: collections.OrderedDict) -> dict or list

    Where kwargs is an ordered dictionary of parametrized arguments.

    NOTE: If you use tag_func along with tags argument, Testplan will merge the dynamically generated tag context with the explicitly passed tag values.

  • docstring_func (callable or NoneType) –

    Custom testcase docstring generation function. The signature is:

    docstring_func(docstring: str or None, kwargs: collections.OrderedDict) -> str or None

    Where docstring is document string of the parametrization target function, kwargs is an ordered dictionary of parametrized arguments.

  • custom_wrappers (callable or NoneType) – Wrapper to decorate parametrized testcases (used instead of @decorator syntax) that uses testplan.common.utils.callable.wraps()
  • summarize (bool) – Whether the testcase should be summarized in its output
  • num_passing (int) – The number of passing assertions reported per category per assertion type
  • num_failing (int) – The number of failing assertions reported per category per assertion type
  • key_combs_limit (int) – Max number of failed key combinations on fix/dict summaries.
  • execution_group (str or NoneType) – Group of testcases to run in parallel with each other (the groups themselves are executed serially)
  • timeout (int or NoneType) – Seconds to wait for the testcase to finish before a TimeoutException is raised
testplan.testing.multitest.suite.testsuite(*args, **kwargs)[source]

Annotate a class as being a test suite.

An @testsuite-annotated class must have one or more @testcase-annotated methods. These methods will be executed in their order of definition. If setup(self, env) and teardown(self, env) methods are present on the @testsuite-annotated class, they will be executed before and after the @testcase-annotated methods, respectively.

Parameters:
  • name (str or callable or NoneType) –

    Custom name to be used instead of the class name for the test suite in the test report. A callable should have a signature like the following:

    name(cls_name: str, suite: testsuite) -> str

    Where the test suite class name is passed as cls_name and the instance of the test suite class is passed as suite.

  • tags (str or tuple(str) or dict( str: str) or dict( str: tuple(str)) or NoneType) – Allows filtering of tests with simple tags, or multi-simple tags, or named tags, or multi-named tags.
  • strict_order (bool) – Force testcases to run sequentially in the order they were defined in the test suite.
testplan.testing.multitest.suite.timeout(seconds)[source]

Decorator for non-testcase methods in a test suite; can be used for setup, teardown, pre_testcase and post_testcase.

testplan.testing.multitest.suite.update_tag_index(obj, tag_dict)[source]

Utility for updating __tags_index__ attribute of an object.

testplan.testing.multitest.suite.xfail(reason, strict=False)[source]

Mark a testcase/testsuite as XFail (known to fail) when it is not possible to fix it immediately. This decorator mandates a reason that explains why the test is expected to fail. XFail testcases will be highlighted as orange on the testplan report. If strict is True and the test passes while we expect it to fail, the report will mark it as failed; for unstable tests, keep strict as False (the default). Note that a non-strict xfail decreases the value of the test.

Parameters:
  • reason (str) – Explains why the test is expected to fail.
  • strict (bool) – If True and the test passes while we expect it to fail, the report will mark it as failed. Default: False.

testplan.testing.multitest.test_metadata module

class testplan.testing.multitest.test_metadata.BasicInfo(name: str, description: Union[str, NoneType])[source]

Bases: object

id = None
class testplan.testing.multitest.test_metadata.LocationMetadata(object_name: str, file: str, line_no: int)[source]

Bases: object

classmethod from_object(obj) → testplan.testing.multitest.test_metadata.LocationMetadata[source]
class testplan.testing.multitest.test_metadata.TestCaseMetadata(name: str, description: Union[str, NoneType], location: Union[testplan.testing.multitest.test_metadata.LocationMetadata, NoneType])[source]

Bases: testplan.testing.multitest.test_metadata.TestCaseStaticMetadata, testplan.testing.multitest.test_metadata.BasicInfo

class testplan.testing.multitest.test_metadata.TestCaseStaticMetadata(location: Union[testplan.testing.multitest.test_metadata.LocationMetadata, NoneType])[source]

Bases: object

class testplan.testing.multitest.test_metadata.TestMetadata(name: str, description: Union[str, NoneType], test_suites: List[testplan.testing.multitest.test_metadata.TestSuiteMetadata])[source]

Bases: testplan.testing.multitest.test_metadata.BasicInfo

compute_ids()[source]
class testplan.testing.multitest.test_metadata.TestPlanMetadata(name: str, description: Union[str, NoneType], tests: List[testplan.testing.multitest.test_metadata.TestMetadata])[source]

Bases: testplan.testing.multitest.test_metadata.BasicInfo

class testplan.testing.multitest.test_metadata.TestSuiteMetadata(name: str, description: Union[str, NoneType], location: Union[testplan.testing.multitest.test_metadata.LocationMetadata, NoneType], test_cases: List[testplan.testing.multitest.test_metadata.TestCaseMetadata])[source]

Bases: testplan.testing.multitest.test_metadata.TestSuiteStaticMetadata, testplan.testing.multitest.test_metadata.BasicInfo

class testplan.testing.multitest.test_metadata.TestSuiteStaticMetadata(location: Union[testplan.testing.multitest.test_metadata.LocationMetadata, NoneType])[source]

Bases: object

Module contents

Multitest main test execution framework.