testplan.testing.cpp package

Submodules

testplan.testing.cpp.gtest module

class testplan.testing.cpp.gtest.GTest(name, binary, description=None, gtest_filter='', gtest_also_run_disabled_tests=False, gtest_repeat=1, gtest_shuffle=False, gtest_random_seed=0, gtest_stream_result_to='', gtest_death_test_style='fast', **options)[source]

Bases: testplan.testing.base.ProcessRunnerTest

Subprocess test runner for Google Test: https://github.com/google/googletest

For original docs please see:

https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md
https://github.com/google/googletest/blob/master/googletest/docs/FAQ.md

Most of the configuration options of GTest are just simple wrappers for native arguments.

Parameters:
  • name (str) – Test instance name, often used as uid of test entity.
  • binary (str) – Path to the application binary or script.
  • description (str) – Description of test instance.
  • gtest_filter (str) – Native test filter pattern that will be used by GTest internally.
  • gtest_also_run_disabled_tests (bool) – Will run disabled tests as well when set to True.
  • gtest_repeat (int) – Repeats the GTest multiple times. Only nonzero values are allowed; otherwise Testplan would stop the test execution due to timeout.
  • gtest_shuffle (bool) – Will run the tests in random order when set to True.
  • gtest_random_seed (int) – Integer that can be used for the shuffling operation.
  • gtest_stream_result_to (str) – Flag for specifying host name and port number on which to stream test results.
  • gtest_death_test_style (str) – Death test style flag; can be either threadsafe or fast (default: fast).

Also inherits all ProcessRunnerTest options.
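
A minimal usage sketch, assuming a Google Test binary has already been built; the binary path, plan name and gtest_filter value below are hypothetical placeholders:

from testplan import test_plan
from testplan.testing.cpp import GTest


@test_plan(name="GTestExample")
def main(plan):
    # Hypothetical binary path and filter pattern; adjust to your build.
    plan.add(
        GTest(
            name="My GTest",
            binary="./test/gtest_binary",
            gtest_filter="SquareRootTest.*",
            gtest_shuffle=True,
        )
    )


if __name__ == "__main__":
    import sys

    sys.exit(not main())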

CONFIG

alias of GTestConfig

ENVIRONMENT

alias of testplan.testing.environment.base.TestEnvironment

RESULT

alias of testplan.testing.base.TestResult

STATUS

alias of testplan.common.entity.base.RunnableStatus

abort()

Default abort policy. First abort all dependencies and then itself.

abort_dependencies()

Yield all dependencies to be aborted before self abort.

aborted

Returns if entity was aborted.

aborting()

Aborting logic for self.

active

Entity not aborting/aborted.

add_main_batch_steps()

Runnable steps to be executed while environment is running.

add_post_main_steps()

Runnable steps to run before environment stopped.

add_post_resource_steps()

Runnable steps to run after environment stopped.

add_pre_main_steps()

Runnable steps to run after environment started.

add_pre_resource_steps()

Runnable steps to be executed before environment starts.

add_resource(resource: testplan.common.entity.base.Resource, uid: Optional[str] = None)

Adds a resource in the runnable environment.

Parameters:
  • resource (Subclass of Resource) – Resource to be added.
  • uid (str or NoneType) – Optional input resource uid.
Returns:

Resource uid assigned.

Return type:

str

add_start_resource_steps()

Runnable steps to start environment

add_stop_resource_steps()

Runnable steps to stop environment

apply_xfail_tests()

Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…).

base_command()[source]
cfg

Configuration object.

context_input(exclude: list = None) → Dict[str, Any]

All attributes of self in a dict, for context resolution.

define_runpath()

Define runpath directory based on parent object and configuration.

description
dry_run()

Return an empty report skeleton for this test including all testsuites, testcases etc. hierarchy. Does not run any tests.

filter_levels = [<FilterLevel.TEST: 'test'>]
classmethod filter_locals(local_vars)

Filter out init params whose value is None (they will take the default value defined in their ConfigOption object); also filter out special vars in local_vars that are not init params.

Parameters:local_vars
get_filter_levels() → List[testplan.testing.filtering.FilterLevel]
get_metadata() → testplan.testing.multitest.test_metadata.TestMetadata
get_proc_env()

Fabricate the environment variables for the subprocess. Precedence: user-specified > hardcoded > system env

get_process_check_report(retcode, stdout, stderr)

When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions, and the stdout & stderr logs can be saved as attachments.

get_stdout_style(passed)

Stdout style for status.

get_tags_index() → Union[str, Iterable[str], Dict[KT, VT]]

Return the tag index that will be used for filtering. By default this is equal to the native tags for this object.

However subclasses may build larger tag indices by collecting tags from their children for example.

get_test_context(list_cmd=None)

Run the shell command generated by list_command in a subprocess, parse and return the stdout generated via parse_test_context.

Parameters:list_cmd (str) – Command to list all test suites and testcases
Returns:Result returned by parse_test_context.
Return type:list of list
i
interactive
list_command() → Optional[List[str]]

List custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
Return type:list of str or NoneType

list_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base list command with additional filtering to list a specific set of testcases.

log_test_results(top_down=True)

Log test results, e.g. for ProcessRunnerTest or PyTest.

Parameters:top_down (bool) – Flag for logging test results using a top-down or a bottom-up approach.
logger

logger object

make_runpath_dirs()

Creates runpath related directories.

name

Instance name.

parent

Returns parent Entity.

parse_test_context(test_list_output)[source]

Parse GTest test listing from stdout

pause()

Pauses entity execution.

pausing()

Pauses the resource.

post_step_call(step)

Callable to be invoked after each step.

pre_step_call(step)

Callable to be invoked before each step.

prepare_binary()

Resolve the real binary path to run

process_test_data(test_data)[source]

Process raw test data that was collected and return a list of entries (e.g. TestGroupReport, TestCaseReport) that will be appended to the current test instance’s report as children.

Parameters:test_data (xml.etree.Element) – Root node of parsed raw test data
Returns:List of sub reports
Return type:list of TestGroupReport / TestCaseReport
propagate_tag_indices()

Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.

read_test_data()[source]

Parse the output generated by the 3rd party testing tool; the parsed content will then be handled by process_test_data.

You should override this function with custom logic to parse the contents of the generated file.

report

Shortcut for the test report.

report_path
reset_context() → None
resolved_bin
resources

Returns the Environment of Resources.

result

Returns a RunnableResult

resume()

Resumes entity execution.

resuming()

Resumes the resource.

run()

Executes the defined steps and populates the result object.

run_result()

Returns if a run was successful.

run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co]

Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.

For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases; however, it means that testcase reports will not be generated until all testcases have finished running.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge step

run_tests()

Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.

runpath

Path to be used for temp/output files by entity.

scratch

Path to be used for temp files by entity.

set_discover_path(path: str) → None

If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.

Parameters:path (str) – the absolute path where the task has been discovered

setup()

Setup step to be executed first.

should_log_test_result(depth, test_obj, style)

Return a tuple whose first element indicates whether test results (suite report, testcase report, or assertion results) need to be logged, and whose second element is the indentation to keep at the start of lines.

should_run()

Determines if current object should run.

skip_step(step)

Callable to determine if step should be skipped.

start_test_resources()

Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.

status

Status object.

stderr
stdout
stdout_style

Stdout style input.

stop_test_resources()

Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by sub-classes.

teardown()

Teardown step to be executed last.

test_command() → List[str]

Add custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
Return type:list of str

test_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base test command with additional filtering to run a specific set of testcases.

test_context
timeout_callback()

Callback function that will be called by the daemon thread if a timeout occurs (e.g. process runs longer than specified timeout value).

timeout_log
uid()

Instance name uid.

update_test_report()[source]

Attach XML report contents to the report, which can be used by XML exporters, but will be discarded by serializers.

wait(target_status, timeout=None)

Wait until the object's status becomes the target status.

Parameters:
  • target_status (str) – expected status
  • timeout (int or NoneType) – timeout in seconds
class testplan.testing.cpp.gtest.GTestConfig(**options)[source]

Bases: testplan.testing.base.ProcessRunnerTestConfig

Configuration object for GTest.

classmethod build_schema()

Build a validation schema using the config options defined in this class and its parent classes.

denormalize()

Create new config object that inherits all explicit attributes from its parents as well.

get_local(name, default=None)

Returns a local config setting (not from container)

classmethod get_options()[source]

Runnable specific config options.

ignore_extra_keys = False
parent

Returns the parent configuration.

set_local(name, value)

Set a local config setting without any check.

testplan.testing.cpp.cppunit module

class testplan.testing.cpp.cppunit.Cppunit(name, binary, description=None, file_output_flag='-y', output_path='', filtering_flag=None, cppunit_filter='', listing_flag=None, parse_test_context=None, **options)[source]

Bases: testplan.testing.base.ProcessRunnerTest

Subprocess test runner for Cppunit: https://sourceforge.net/projects/cppunit

For original docs please see:

http://cppunit.sourceforge.net/doc/1.8.0/
http://cppunit.sourceforge.net/doc/cvs/cppunit_cookbook.html

Please note that the binary (either native binary or script) should output in XML format so that Testplan is able to parse the result. By default Testplan reads from stdout, but if file_output_flag is set (e.g. “-y”), the binary should accept a file path and write the result to that file, which will be loaded and parsed by Testplan. For example:

./cppunit_bin -y /path/to/test/result
Parameters:
  • name (str) – Test instance name, often used as uid of test entity.
  • binary (str) – Path to the application binary or script.
  • description (str) – Description of test instance.
  • file_output_flag (NoneType or str) – Customized command line flag for specifying the path of the output file, defaults to -y.
  • output_path (str) – Where to save the test report; should work with file_output_flag. If not provided, a default path can be generated.
  • filtering_flag (NoneType or str) – Customized command line flag for filtering testcases, “-t” is suggested, for example: ./cppunit_bin -t .some_text.
  • cppunit_filter (str) – Native test filter pattern that will be used by Cppunit internally.
  • listing_flag (NoneType or str) – Customized command line flag for listing all testcases, “-l” is suggested, for example: ./cppunit_bin -l
  • parse_test_context (NoneType or callable) – Function to parse the output which contains the listed test suites and testcases. Refer to the default implementation parse_test_context().

Also inherits all ProcessRunnerTest options.
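
A minimal usage sketch, assuming a Cppunit binary that writes an XML report to the path passed after -y; the binary path and plan name are hypothetical placeholders:

from testplan import test_plan
from testplan.testing.cpp import Cppunit


@test_plan(name="CppunitExample")
def main(plan):
    # Hypothetical binary path; the binary must emit Cppunit XML output.
    plan.add(
        Cppunit(
            name="My Cppunit",
            binary="./test/cppunit_binary",
            file_output_flag="-y",
        )
    )


if __name__ == "__main__":
    import sys

    sys.exit(not main())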

CONFIG

alias of CppunitConfig

ENVIRONMENT

alias of testplan.testing.environment.base.TestEnvironment

RESULT

alias of testplan.testing.base.TestResult

STATUS

alias of testplan.common.entity.base.RunnableStatus

abort()

Default abort policy. First abort all dependencies and then itself.

abort_dependencies()

Yield all dependencies to be aborted before self abort.

aborted

Returns if entity was aborted.

aborting()

Aborting logic for self.

active

Entity not aborting/aborted.

add_main_batch_steps()

Runnable steps to be executed while environment is running.

add_post_main_steps()

Runnable steps to run before environment stopped.

add_post_resource_steps()

Runnable steps to run after environment stopped.

add_pre_main_steps()

Runnable steps to run after environment started.

add_pre_resource_steps()

Runnable steps to be executed before environment starts.

add_resource(resource: testplan.common.entity.base.Resource, uid: Optional[str] = None)

Adds a resource in the runnable environment.

Parameters:
  • resource (Subclass of Resource) – Resource to be added.
  • uid (str or NoneType) – Optional input resource uid.
Returns:

Resource uid assigned.

Return type:

str

add_start_resource_steps()

Runnable steps to start environment

add_stop_resource_steps()

Runnable steps to stop environment

apply_xfail_tests()

Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…).

cfg

Configuration object.

context_input(exclude: list = None) → Dict[str, Any]

All attributes of self in a dict, for context resolution.

define_runpath()

Define runpath directory based on parent object and configuration.

description
dry_run()

Return an empty report skeleton for this test including all testsuites, testcases etc. hierarchy. Does not run any tests.

filter_levels = [<FilterLevel.TEST: 'test'>]
classmethod filter_locals(local_vars)

Filter out init params whose value is None (they will take the default value defined in their ConfigOption object); also filter out special vars in local_vars that are not init params.

Parameters:local_vars
get_filter_levels() → List[testplan.testing.filtering.FilterLevel]
get_metadata() → testplan.testing.multitest.test_metadata.TestMetadata
get_proc_env()

Fabricate the environment variables for the subprocess. Precedence: user-specified > hardcoded > system env

get_process_check_report(retcode, stdout, stderr)

When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions, and the stdout & stderr logs can be saved as attachments.

get_stdout_style(passed)

Stdout style for status.

get_tags_index() → Union[str, Iterable[str], Dict[KT, VT]]

Return the tag index that will be used for filtering. By default this is equal to the native tags for this object.

However subclasses may build larger tag indices by collecting tags from their children for example.

get_test_context(list_cmd=None)

Run the shell command generated by list_command in a subprocess, parse and return the stdout generated via parse_test_context.

Parameters:list_cmd (str) – Command to list all test suites and testcases
Returns:Result returned by parse_test_context.
Return type:list of list
i
interactive
list_command() → Optional[List[str]]

List custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
Return type:list of str or NoneType

list_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base list command with additional filtering to list a specific set of testcases.

log_test_results(top_down=True)

Log test results, e.g. for ProcessRunnerTest or PyTest.

Parameters:top_down (bool) – Flag for logging test results using a top-down or a bottom-up approach.
logger

logger object

make_runpath_dirs()

Creates runpath related directories.

name

Instance name.

parent

Returns parent Entity.

parse_test_context(test_list_output)[source]

Default implementation for parsing the Cppunit test listing from stdout. It assumes the output format is similar to that of a GTest listing. If the Cppunit binary lists the test suites and testcases in another format, this function needs to be re-implemented.

pause()

Pauses entity execution.

pausing()

Pauses the resource.

post_step_call(step)

Callable to be invoked after each step.

pre_step_call(step)

Callable to be invoked before each step.

prepare_binary()

Resolve the real binary path to run

process_test_data(test_data: testplan.importers.cppunit.CPPUnitImportedResult)[source]

XML output contains entries for skipped testcases as well, which are not included in the report.

propagate_tag_indices()

Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.

read_test_data()[source]

Parse the output generated by the 3rd party testing tool; the parsed content will then be handled by process_test_data.

You should override this function with custom logic to parse the contents of the generated file.

report

Shortcut for the test report.

report_path
reset_context() → None
resolved_bin
resources

Returns the Environment of Resources.

result

Returns a RunnableResult

resume()

Resumes entity execution.

resuming()

Resumes the resource.

run()

Executes the defined steps and populates the result object.

run_result()

Returns if a run was successful.

run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co]

Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.

For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases; however, it means that testcase reports will not be generated until all testcases have finished running.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge step

run_tests()

Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.

runpath

Path to be used for temp/output files by entity.

scratch

Path to be used for temp files by entity.

set_discover_path(path: str) → None

If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.

Parameters:path (str) – the absolute path where the task has been discovered

setup()

Setup step to be executed first.

should_log_test_result(depth, test_obj, style)

Return a tuple whose first element indicates whether test results (suite report, testcase report, or assertion results) need to be logged, and whose second element is the indentation to keep at the start of lines.

should_run()

Determines if current object should run.

skip_step(step)

Callable to determine if step should be skipped.

start_test_resources()

Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.

status

Status object.

stderr
stdout
stdout_style

Stdout style input.

stop_test_resources()

Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by sub-classes.

teardown()

Teardown step to be executed last.

test_command() → List[str]

Add custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
Return type:list of str

test_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base test command with additional filtering to run a specific set of testcases.

test_context
timeout_callback()

Callback function that will be called by the daemon thread if a timeout occurs (e.g. process runs longer than specified timeout value).

timeout_log
uid()

Instance name uid.

update_test_report()[source]

Attach XML report contents to the report, which can be used by XML exporters, but will be discarded by serializers.

wait(target_status, timeout=None)

Wait until the object's status becomes the target status.

Parameters:
  • target_status (str) – expected status
  • timeout (int or NoneType) – timeout in seconds
class testplan.testing.cpp.cppunit.CppunitConfig(**options)[source]

Bases: testplan.testing.base.ProcessRunnerTestConfig

Configuration object for Cppunit.

classmethod build_schema()

Build a validation schema using the config options defined in this class and its parent classes.

denormalize()

Create new config object that inherits all explicit attributes from its parents as well.

get_local(name, default=None)

Returns a local config setting (not from container)

classmethod get_options()[source]

Runnable specific config options.

ignore_extra_keys = False
parent

Returns the parent configuration.

set_local(name, value)

Set a local config setting without any check.

testplan.testing.cpp.hobbestest module

class testplan.testing.cpp.hobbestest.HobbesTest(name, binary, description=None, tests=None, json='report.json', other_args=None, **options)[source]

Bases: testplan.testing.base.ProcessRunnerTest

Subprocess test runner for Hobbes Test: https://github.com/morganstanley/hobbes

Parameters:
  • name (str) – Test instance name, often used as uid of test entity.
  • binary (str) – Path to the application binary or script.
  • description (str) – Description of test instance.
  • tests (list) – Run one or more specified test(s).
  • json (str) – Generate test report in JSON with the specified name. The report will be placed under rundir unless the user specifies an absolute path. The content of the report will be parsed to generate the Testplan report.
  • other_args (list) – Any other arguments to be passed to the test binary.

Also inherits all ProcessRunnerTest options.
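
A minimal usage sketch, assuming a hobbes-test binary is available; the binary path and test names below are hypothetical placeholders:

from testplan import test_plan
from testplan.testing.cpp import HobbesTest


@test_plan(name="HobbesTestExample")
def main(plan):
    # Hypothetical binary path and test names; adjust to your build.
    plan.add(
        HobbesTest(
            name="My HobbesTest",
            binary="./test/hobbes-test",
            tests=["Hog", "Net"],
            json="report.json",
        )
    )


if __name__ == "__main__":
    import sys

    sys.exit(not main())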

CONFIG

alias of HobbesTestConfig

ENVIRONMENT

alias of testplan.testing.environment.base.TestEnvironment

RESULT

alias of testplan.testing.base.TestResult

STATUS

alias of testplan.common.entity.base.RunnableStatus

abort()

Default abort policy. First abort all dependencies and then itself.

abort_dependencies()

Yield all dependencies to be aborted before self abort.

aborted

Returns if entity was aborted.

aborting()

Aborting logic for self.

active

Entity not aborting/aborted.

add_main_batch_steps()

Runnable steps to be executed while environment is running.

add_post_main_steps()

Runnable steps to run before environment stopped.

add_post_resource_steps()

Runnable steps to run after environment stopped.

add_pre_main_steps()

Runnable steps to run after environment started.

add_pre_resource_steps()

Runnable steps to be executed before environment starts.

add_resource(resource: testplan.common.entity.base.Resource, uid: Optional[str] = None)

Adds a resource in the runnable environment.

Parameters:
  • resource (Subclass of Resource) – Resource to be added.
  • uid (str or NoneType) – Optional input resource uid.
Returns:

Resource uid assigned.

Return type:

str

add_start_resource_steps()

Runnable steps to start environment

add_stop_resource_steps()

Runnable steps to stop environment

apply_xfail_tests()

Apply xfail tests specified via --xfail-tests or @test_plan(xfail_tests=…).

cfg

Configuration object.

context_input(exclude: list = None) → Dict[str, Any]

All attributes of self in a dict, for context resolution.

define_runpath()

Define runpath directory based on parent object and configuration.

description
dry_run()

Return an empty report skeleton for this test including all testsuites, testcases etc. hierarchy. Does not run any tests.

filter_levels = [<FilterLevel.TEST: 'test'>]
classmethod filter_locals(local_vars)

Filter out init params whose value is None (they will take the default value defined in their ConfigOption object); also filter out special vars in local_vars that are not init params.

Parameters:local_vars
get_filter_levels() → List[testplan.testing.filtering.FilterLevel]
get_metadata() → testplan.testing.multitest.test_metadata.TestMetadata
get_proc_env()

Fabricate the environment variables for the subprocess. Precedence: user-specified > hardcoded > system env

get_process_check_report(retcode, stdout, stderr)

When running a process fails (e.g. binary crash, timeout etc.) we can still generate dummy testsuite / testcase reports with a hierarchy compatible with exporters and XUnit conventions, and the stdout & stderr logs can be saved as attachments.

get_stdout_style(passed)

Stdout style for status.

get_tags_index() → Union[str, Iterable[str], Dict[KT, VT]]

Return the tag index that will be used for filtering. By default this is equal to the native tags for this object.

However subclasses may build larger tag indices by collecting tags from their children for example.

get_test_context(list_cmd=None)

Run the shell command generated by list_command in a subprocess, parse and return the stdout generated via parse_test_context.

Parameters:list_cmd (str) – Command to list all test suites and testcases
Returns:Result returned by parse_test_context.
Return type:list of list
i
interactive
list_command() → Optional[List[str]]

List custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
Return type:list of str or NoneType

list_command_filter(testsuite_pattern, testcase_pattern)[source]

Return the base list command with additional filtering to list a specific set of testcases.

log_test_results(top_down=True)

Log test results, e.g. for ProcessRunnerTest or PyTest.

Parameters:top_down (bool) – Flag for logging test results using a top-down or a bottom-up approach.
logger

logger object

make_runpath_dirs()

Creates runpath related directories.

name

Instance name.

parent

Returns parent Entity.

parse_test_context(test_list_output)[source]

Parse test binary --list_tests output. This is used when we run test_plan.py with the --list/--info option.

pause()

Pauses entity execution.

pausing()

Pauses the resource.

post_step_call(step)

Callable to be invoked after each step.

pre_step_call(step)

Callable to be invoked before each step.

prepare_binary()

Resolve the real binary path to run

process_test_data(test_data)[source]

JSON output contains entries for skipped testcases as well, which are not included in the report.

propagate_tag_indices()

Basic step for propagating tag indices of the test report tree. This step may be necessary if the report tree is created in parts and then added up.

read_test_data()[source]

Parse the JSON report generated by Hobbes test and return the JSON object.

Returns:JSON object of parsed raw test data
Return type:dict or list
report

Shortcut for the test report.

report_path
reset_context() → None
resolved_bin
resources

Returns the Environment of Resources.

result

Returns a RunnableResult

resume()

Resumes entity execution.

resuming()

Resumes the resource.

run()

Executes the defined steps and populates the result object.

run_result()

Returns if a run was successful.

run_testcases_iter(testsuite_pattern: str = '*', testcase_pattern: str = '*', shallow_report: Dict[KT, VT] = None) → Generator[T_co, T_contra, V_co]

Runs testcases as defined by the given filter patterns and yields testcase reports. A single testcase report is made for general checks of the test process, including checking the exit code and logging stdout and stderr of the process. Then, testcase reports are generated from the output of the test process.

For efficiency, we run all testcases in a single subprocess rather than running each testcase in a separate process. This reduces the total time taken to run all testcases; however, it means that testcase reports will not be generated until all testcases have finished running.

Parameters:
  • testsuite_pattern – pattern to match for testsuite names
  • testcase_pattern – pattern to match for testcase names
  • shallow_report – shallow report entry
Returns:

generator yielding testcase reports and UIDs for merge step

run_tests()

Run the tests in a subprocess, record stdout & stderr on runpath. Optionally enforce a timeout and log timeout related messages in the given timeout log path.

runpath

Path to be used for temp/output files by entity.

scratch

Path to be used for temp files by entity.

set_discover_path(path: str) → None

If the Test is materialized from a task that is discovered outside pwd(), this might be needed for binary/library path derivation to work properly.

Parameters:path (str) – the absolute path where the task has been discovered

setup()

Setup step to be executed first.

should_log_test_result(depth, test_obj, style)

Return a tuple whose first element indicates whether test results (suite report, testcase report, or assertion results) need to be logged, and whose second element is the indentation to keep at the start of lines.

should_run()

Determines if current object should run.

skip_step(step)

Callable to determine if step should be skipped.

start_test_resources()

Start all test resources but do not run any tests. Used in the interactive mode when environments may be started/stopped on demand. The base implementation is very simple but may be overridden in subclasses to run additional setup pre- and post-environment start.

status

Status object.

stderr
stdout
stdout_style

Stdout style input.

stop_test_resources()

Stop all test resources. As above, this method is used for the interactive mode and is very simple in this base Test class, but may be overridden by sub-classes.

teardown()

Teardown step to be executed last.

test_command() → List[str]

Add custom arguments before and after the executable if they are defined.

Returns:List of commands to run before and after the test process, as well as the test executable itself.
Return type:list of str

test_command_filter(testsuite_pattern='*', testcase_pattern='*')[source]

Return the base test command with additional filtering to run a specific set of testcases.

test_context
timeout_callback()

Callback function that will be called by the daemon thread if a timeout occurs (e.g. process runs longer than specified timeout value).

timeout_log
uid()

Instance name uid.

update_test_report()

Update current instance’s test report with generated sub reports from raw test data. Skip report updates if the process was killed.

wait(target_status, timeout=None)

Wait until the object's status becomes the target status.

Parameters:
  • target_status (str) – expected status
  • timeout (int or NoneType) – timeout in seconds
class testplan.testing.cpp.hobbestest.HobbesTestConfig(**options)[source]

Bases: testplan.testing.base.ProcessRunnerTestConfig

Configuration object for HobbesTest.

classmethod build_schema()

Build a validation schema using the config options defined in this class and its parent classes.

denormalize()

Create new config object that inherits all explicit attributes from its parents as well.

get_local(name, default=None)

Returns a local config setting (not from container)

classmethod get_options()[source]

Runnable specific config options.

ignore_extra_keys = False
parent

Returns the parent configuration.

set_local(name, value)

Set a local config setting without any check.