Introduction

Testplan is a Python testing framework mainly used for integration tests and application black-box testing; it can also integrate with external unit testing frameworks such as GTest and BoostTest.

Testplan starts a local, live, interconnected environment and executes test scenarios against it. It has built-in mechanisms to dynamically retrieve process/service endpoints and to instantiate configuration files from templates based on dynamic resource assignments, and it provides fixtures like setup, teardown, after_start, after_stop etc. to customize the tests.

A typical use case is to start an application, connect it to other services, perform some operations via the application and service drivers and assert on expected results.
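
As an illustration of the after_start/after_stop style hooks mentioned above, here is a minimal sketch; the suite and hook bodies are placeholders, and hook signatures may vary slightly across Testplan versions:

from testplan.testing.multitest import MultiTest, testsuite, testcase


@testsuite
class PingSuite(object):

    @testcase
    def check_alive(self, env, result):
        result.true(True, description='Placeholder check')


def after_start(env):
    # Called once the environment has started, before any testcase runs;
    # a typical use is waiting for an application's startup handshake.
    pass


test = MultiTest(name='HookedTest',
                 suites=[PingSuite()],
                 after_start=after_start)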

Components

The three main components of a Testplan are:

  1. Test (MultiTest, GTest) is defined as a runnable that will be executed by Testplan and will produce a TestReport. Multiple tests can be added to a Testplan, and these are independent entities.

    A Test is a collection of @testsuite decorated classes that contain @testcase decorated methods in which a user performs the assertions. The hierarchy used is the following:

    MultiTest1 (object)
       Testsuite1 (class)
           Testcase1 (method)
               Assertion1 (callable -> pass/fail)
               Assertion2
           Testcase2
               Assertion3
       Testsuite2
           Testcase3
               Assertion4
               Assertion5
    MultiTest2
       Testsuite3
           ...
    
  2. Execution runtime to define how and where the tests can be executed. By default all tests are added to the default LocalRunner executor, which executes them sequentially in the order they were added. For parallel test execution, Testplan uses pools of workers (e.g. ThreadPool, ProcessPool); a sketch of the 'tasks' module referenced below follows this list.

    # Imports assumed for this snippet; the pool/task import paths may vary
    # slightly across Testplan versions.
    from testplan import test_plan
    from testplan.runners.pools import ThreadPool
    from testplan.runners.pools.tasks import Task
    from testplan.testing.multitest import MultiTest

    @test_plan(name='ThreadPool')
    def main(plan):
    
        # Add 10 tests for sequential execution.
        for idx in range(10):
            test = MultiTest(name='MultiplyTest',
                             suites=[BasicSuite()])
            plan.add(test)
    
        # Schedule tests to a thread pool to execute 10 in parallel.
        pool = ThreadPool(name='MyPool', size=10)
        plan.add_resource(pool)
    
        for idx in range(10):
            task = Task(target='make_multitest',
                        module='tasks')
            plan.schedule(task, resource='MyPool')
    
  3. Output / Report to control the different representations of the test results. Each assertion has a unique representation in the console output as well as in the PDF report. XML and JSON output are also supported.

    Access to the TestReport is provided by the TestplanResult object, which is returned by the run() method invoked by the test_plan() decorator of main().

    import sys

    from testplan import test_plan
    from testplan.testing.multitest import MultiTest

    @test_plan(name='Multiply')
    def main(plan):
        test = MultiTest(name='MultiplyTest',
                         suites=[BasicSuite()])
        plan.add(test)

    if __name__ == '__main__':
        res = main()
        print(res)  # TestplanResult
        print(res.report)  # TestReport
        sys.exit(not res)
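
For reference, here is a minimal sketch of the 'tasks' module targeted by the Task in the ThreadPool snippet above; the make_multitest name matches the target in that snippet, while the suite content is a stand-in:

# tasks.py
from testplan.testing.multitest import MultiTest, testsuite, testcase


@testsuite
class BasicSuite(object):

    @testcase
    def sample_case(self, env, result):
        result.equal(2 * 3, 6, description='Sample assertion')


def make_multitest():
    # A Task target is a callable that builds and returns the runnable test;
    # pool workers import this module and call it to materialize the MultiTest.
    return MultiTest(name='MultiplyTest', suites=[BasicSuite()])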
    

Program

./test_plan.py

A Testplan application is usually a test_plan.py file that instantiates a Testplan object and adds tests to it. A very basic Testplan application looks like this:

Code

import sys

from testplan import test_plan
from testplan.testing.multitest import MultiTest, testsuite, testcase


def multiply(numA, numB):
    return numA * numB


@testsuite
class BasicSuite(object):

    @testcase
    def basic_multiply(self, env, result):
        result.equal(multiply(2, 3), 6, description='Passing assertion')
        result.equal(multiply(2, 2), 5, description='Failing assertion')


@test_plan(name='Multiply')
def main(plan):
    test = MultiTest(name='MultiplyTest',
                     suites=[BasicSuite()])
    plan.add(test)


if __name__ == '__main__':
    sys.exit(not main())

The parts of this application are:

  1. Mandatory imports to create the plan object and the test hierarchy.

    import sys
    
    from testplan import test_plan
    from testplan.testing.multitest import MultiTest, testsuite, testcase
    
  2. Piece of code to be tested.

    def multiply(numA, numB):
        return numA * numB
    
  3. The actual assertions, organised in testsuites/testcases. The result argument provides all assertions, which accept various configuration options (e.g. the result.fix.match API) and have a unique rendering representation (a sample of further assertion methods follows this list):

    @testsuite
    class BasicSuite(object):
    
        @testcase
        def basic_multiply(self, env, result):
            result.equal(multiply(2, 3), 6,               # 2 * 3 == 6
                         description='Passing assertion')
            result.equal(multiply(2, 2), 5,
                         description='Failing assertion') # 2 * 2 != 5
    
  4. A decorated main function that provides a plan object to add the tests.

    @test_plan(name='Multiply')
    def main(plan):
        test = MultiTest(name='MultiplyTest',
                         suites=[BasicSuite()])
        plan.add(test)
    
  5. Logic to exit with a non-zero exit code on plan test failure.

    if __name__ == '__main__':
        sys.exit(not main())
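
Beyond result.equal, the result argument referenced in part 3 above exposes many more assertion methods. A small sample sketch follows; exact method availability may vary across Testplan versions:

from testplan.testing.multitest import testsuite, testcase


@testsuite
class AssortedSuite(object):

    @testcase
    def assorted_assertions(self, env, result):
        result.true(isinstance(5, int), description='Boolean check')
        result.contain('ell', 'Hello', description='Membership check')
        result.less(2, 3, description='Comparison check')
        result.regex.match(r'He.*o', 'Hello', description='Regex match')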
    

Console output

$ python ./test_plan.py --verbose

PDF report

$ python ./test_plan.py --verbose --pdf report.pdf --pdf-style detailed

Local environment

A Test can start a local environment and then run the tests against it. The following environment:

------------------          -----------------          ------------------
|                | -------> |               | -------> |                |
|     Client     |          |  Application  |          |    Service     |
|                | <------- |               | <------- |                |
------------------          -----------------          ------------------

could be defined and used in the plan like this:

@test_plan(name='MyPlan')
def main(plan):
    test = MultiTest(
               name='MyTest',
               suites=[Suite1(), Suite2()],
               environment=[
                   Service(name='service'),
                   Application(name='app',
                               port=context('service', '{{port}}')),
                   Client(name='client',
                          port=context('app', '{{port}}'))
               ])
    plan.add(test)

Before test execution, the environment will start, and its parts will be connected using the context mechanism. It will then be accessible from within the testcases, making it possible to execute real operations and perform assertions against expected results.
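
The Service/Application/Client drivers above are illustrative placeholders. A runnable variant of the same idea, sketched with the built-in TCP drivers, could look like this:

from testplan.common.utils.context import context
from testplan.testing.multitest import MultiTest, testsuite, testcase
from testplan.testing.multitest.driver.tcp import TCPServer, TCPClient


@testsuite
class TCPSuite(object):

    def setup(self, env):
        # Accept the connection that the client driver opened at startup.
        env.server.accept_connection()

    @testcase
    def send_and_receive(self, env, result):
        bytes_sent = env.client.send_text('Hello')
        received = env.server.receive_text(size=bytes_sent)
        result.equal(received, 'Hello',
                     description='Server received the client message')


test = MultiTest(
    name='TCPTest',
    suites=[TCPSuite()],
    environment=[
        # The server binds to a dynamically assigned host/port...
        TCPServer(name='server'),
        # ...which the client resolves at start time via the context mechanism.
        TCPClient(name='client',
                  host=context('server', '{{host}}'),
                  port=context('server', '{{port}}'))
    ])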

The environment can be accessed using the env argument of the testcases:

@testcase
def send_message(self, env, result):
    message = 'Hello'
    env.client.send(message)  # Client sends a message to the application
                              # and the application should forward it to
                              # the connected service.
    received = env.service.receive()  # Try to receive the message from the
                                      # service. This can timeout.
    result.equal(received, message,
                 'Message service received.')  # Actual assertion to check
                                               # that the correct message
                                               # was received from service.

A list of self-explanatory downloadable examples can be found here.

Configuration

Most of the objects in Testplan take **options as parameters, and these are validated against a schema at the initialization stage. For example, Testplan validates all input options using a schema defined in TestplanConfig, which inherits the schemas of RunnableManagerConfig and TestRunnerConfig. In this case, Testplan accepts all arguments of the RunnableManager entity and the TestRunner entity.

This avoids duplication of configuration options in similar components and enables reusability and extensibility of existing classes.

Example Testplan initialization where all input parameters (name, pdf_path, stdout_style, pdf_style) are part of the TestRunnerConfig schema of the TestRunner entity:

@test_plan(name='FXConverter',
           pdf_path='report.pdf',
           stdout_style=OUTPUT_STYLE,
           pdf_style=OUTPUT_STYLE)
def main(plan):
    ...
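
The OUTPUT_STYLE value above is a display-style definition. A sketch of constructing one, assuming the Style helper from testplan.report.testing.styles:

from testplan.report.testing.styles import Style, StyleEnum

# Show testcase-level status for passing tests and full assertion
# detail for failing ones.
OUTPUT_STYLE = Style(passing=StyleEnum.TESTCASE,
                     failing=StyleEnum.ASSERTION_DETAIL)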

Command line

The following command line arguments can be provided to a test_plan.py application:

Information:
-h, --help show this help message and exit
--list Shortcut for --info name.
--info

(default: None)

“pattern-full” - List tests in --patterns / --tags compatible format.

“name-full” - List tests in readable format.

“count” - Lists top level instances and total number of suites & testcases per instance.

“pattern” - List tests in --patterns / --tags compatible format. Max 25 testcases per suite will be displayed.

“name” - List tests in readable format. Max 25 testcases per suite will be displayed.

-i, --interactive
 Enable interactive mode. A port may be specified, otherwise the port defaults to 0.
General:
--runpath Path under which all temp files and logs will be created.
--timeout Expiry timeout on test execution.
Filtering:
--patterns

Test filter, supports glob notation & multiple arguments.

--patterns <Multitest Name>

--patterns <Multitest Name 1> <Multitest Name 2>

--patterns <Multitest Name 1> --patterns <Multitest Name 2>

--patterns <Multitest Name>:<Suite Name>

--patterns <Multitest Name>:<Suite Name>:<Testcase name>

--patterns <Multitest Name>:*:<Testcase name>

--patterns *:<Suite Name>:<Testcase name>

--tags

Test filter, runs tests that match ANY of the given tags.

--tags <tag_name_1> --tags <tag_name_2>

--tags <tag_name_1> <tag_category_1>=<tag_name_2>

--tags-all

Test filter, runs tests that match ALL of the given tags.

--tags-all <tag_name_1> --tags-all <tag_name_2>

--tags-all <tag_name_1> <tag_category_1>=<tag_name_2>

Ordering:
--shuffle

{all,instances,suites,testcases}

Shuffle the execution order.

--shuffle-seed Seed shuffle with a specific value, useful to reproduce a particular order.
Reporting:
--stdout-style

(default: summary)

“result-only” - Display only root level pass/fail status.

“summary” - Display top level (e.g. multitest) pass/fail status.

“extended-summary” - Display assertion details for failing tests, testcase level statuses for the rest.

“detailed” - Display details of all tests & assertions.

--pdf Path for PDF report.
--json Path for JSON report.
--xml Directory path for XML reports.
--report-dir Target directory for tag filtered report output.
--pdf-style

(default: extended-summary)

“result-only” - Display only root level pass/fail status.

“summary” - Display top level (e.g. multitest) pass/fail status.

“extended-summary” - Display assertion details for failing tests, testcase level statuses for the rest.

“detailed” - Display details of all tests & assertions.

-v, --verbose Enable verbose mode that will also set the stdout-style option to “detailed”.
-d, --debug Enable debug mode.
-b, --browser Automatically open report in browser.
--report-tags

Report filter, generates a separate report (PDF by default) containing only the tests that match ANY of the given tags.

--report-tags <tag_name_1> --report-tags <tag_name_2>

--report-tags <tag_name_1> <tag_category_1>=<tag_name_2>

--report-tags-all

Report filter, generates a separate report (PDF by default) containing only the tests that match ALL of the given tags.

--report-tags-all <tag_name_1> --report-tags-all <tag_name_2>

--report-tags-all <tag_name_1> <tag_category_1>=<tag_name_2>

--file-log-level

{exporter_info,test_info,driver_info,critical,error,warning,info,debug,none}

Specify the log level for file logs. Set to “none” to disable file logging.
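
Combining several of these options, a typical invocation of the earlier example could look like this (flag values are illustrative):

$ python ./test_plan.py --patterns 'MultiplyTest:BasicSuite:*' --shuffle testcases --pdf report.pdf --verbose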

Highlighted features

Some features that should be highlighted are:

  1. Testcase tagging for flexible testcase filtering and creation of multiple reports.
  2. Testcase parametrization to dynamically create testcases from input parameters, providing features like dynamic testcase name generation, docstring manipulation for better PDF reports and dynamic testcase tagging (see the sketch after this list).
  3. Configurable output styles mechanism to fully control what is being displayed while tests run.
  4. CI/CD Jenkins integration by creating XML result files for the tests using XMLExporter.
  5. Parallel test execution using ThreadPool, ProcessPool etc.
  6. Ability for the user to provide custom TestLister, TestSorter and Exporter components that can be configured programmatically.
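
As an illustration of testcase parametrization (feature 2 above), here is a minimal sketch; the suite and testcase names are hypothetical:

from testplan.testing.multitest import testsuite, testcase


@testsuite
class ParamSuite(object):

    # Each parameter tuple generates a separate testcase with a derived name.
    @testcase(parameters=((2, 3, 6), (4, 5, 20)))
    def multiply_check(self, env, result, a, b, expected):
        result.equal(a * b, expected, description='Parametrized multiply')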