Overview of ReFrame

Introduction

Tests in ReFrame are simple Python classes that specify the basic parameters of the test. These define a generic test, independent of system details such as the scheduler, MPI library, launcher, etc. These aspects are defined for the system(s) under test in the ReFrame configuration file reframe_config.py. Once the target system is configured, the tests can be run using the Python scripts inside each application folder. This way, the logic of the test is abstracted away from the system under test. When we want to test a new system, we only need to add it to reframe_config.py and use the same Python scripts to run the benchmark tests. More details on ReFrame can be found in its documentation. The following sections give a brief overview of how to configure ReFrame for different system(s) under test. However, this documentation is not meant to be an exhaustive overview of ReFrame’s functionality; the reader is advised to check the ReFrame documentation for a more detailed overview.

ReFrame configuration

Configuring the different system(s) under test is the vital part of the ReFrame workflow. Once the configuration is done properly, running tests becomes as simple as running Python scripts from the CLI. The main parts of the ReFrame configuration are the system and partition configuration and the environment definitions. These aspects are defined in the so-called reframe_config.py file that resides in the root of the repository. A typical configuration file looks like this. However, as the number of systems in the configuration file grows, it becomes very long and hard to read and edit. Hence, in the current repository, the configuration is split per system and stored in the config folder in the root of the repository. The main configuration file is assembled by importing the individual configuration of each system. Adding new systems or editing existing ones becomes much easier with this approach.
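
As an illustration, the main configuration file might be assembled roughly as follows (the module and variable names below are hypothetical; the actual contents of the config folder may differ):

# reframe_config.py (sketch): assemble the site configuration from
# per-system modules stored in the config folder
from config.alaska import alaska_system          # hypothetical: dict describing the AlaSKA system
from config.juwels import juwels_cluster_system  # hypothetical: dict describing the JUWELS cluster
from config.environs import environments         # hypothetical: list of environment dicts

site_configuration = {
    'systems': [
        alaska_system,
        juwels_cluster_system,
    ],
    'environments': environments,
    # logging and general options would follow here
}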

System configuration

A system definition contains one or more partitions, which are not necessarily actual scheduler partitions but simply logical separations within the system. Let’s dissect one of the system configurations.

    'name': 'alaska',
    'descr': 'Default AlaSKA OpenHPC p3-appliances slurm cluster',
    'hostnames': ['alaska-login-0', 'alaska-compute'],
    'modules_system': 'lmod',
    'partitions': [
        {
            'name': 'login',
            'descr': 'Login node of AlaSKA OpenHPC cluster '
                     '(Intel Core Processor (Broadwell, IBRS))',
            'scheduler': 'local',
            'launcher': 'local',
            'environs': [
                'builtin', 'gnu',
            ],
            'processor': {
                **alaska_login_topo,
            },
            'prepare_cmds': [
                'module purge',
            ],
            'extras': {
                'interconnect': '25',  # in Gb/s
            },
        },
        {
            'name': 'compute-gcc9-ompi4-roce-umod',
            'descr': 'AlaSKA OpenHPC cluster with 25Gb/s RoCE with gcc 9.3.0, openmpi 4.1.1 and '
                     'UCX transport layer (Intel Core Processor (Broadwell, IBRS))',
            'scheduler': 'slurm',
            'launcher': 'mpirun',
            'time_limit': '0d8h0m0s',
            'access': [
                '--partition=full',
                '--exclusive',
            ],
            'max_jobs': 8,
            'environs': [
                'babel-stream-omp',
                'builtin',
                'ior',
                'imaging-iotest',
                'imaging-iotest-mkl',
                'imb',
                'numpy',
                'rascil',
            ],
            'modules':  [
                'gcc/9.3.0', 'git/2.31.1',
                'git-lfs/2.11.0', 'openmpi/4.1.1'
            ],
            'variables': [
                # scratch dir
                ['SCRATCH_DIR', '/scratch/mahendra'],
                # use RoCE 25 Gb/s
                ['UCX_NET_DEVICES', 'mlx5_0:1'],
                # UCX likes to spit out tons of warnings. Confine log to errors
                ['UCX_LOG_LEVEL', 'ERROR'],
                # Set locale
                ['LC_ALL', 'en_US.UTF-8'],
            ],
            'processor': {
                **alaska_compute_topo,
            },
            'prepare_cmds': [
                # 'mpirun () { command mpirun --tag-output --timestamp-output '
                # '\"$@\"; }',  # wrap mpirun output tag and timestamp
                'module purge',
                'module use ${SPACK_ROOT}/var/spack/environments/alaska/lmod/linux*/Core',
            ],
            'extras': {
                'interconnect': '25',  # in Gb/s
                'mem': '115234864000',  # total memory in bytes
            },
        },
        {
            'name': 'compute-icc21-impi21-roce-umod',
            'descr': 'AlaSKA OpenHPC cluster with 25Gb/s RoCE with ICC 2021.4.0, '
                     'Intel-MPI 2021.4.0 (Intel Core Processor (Broadwell, IBRS))',
            'scheduler': 'slurm',
            'launcher': 'mpiexec',
            'time_limit': '0d8h0m0s',
            'access': [
                '--partition=full',
                '--exclusive',
            ],
            'max_jobs': 8,
            'environs': [
                'babel-stream-tbb',
                'builtin',
                'intel-hpcg',
                'intel-hpl',
                'imaging-iotest',
                'imaging-iotest-mkl',
                'intel-stream',
            ],
            'modules':  [
                'intel-oneapi-compilers/2021.4.0', 'git/2.31.1',
                'git-lfs/2.11.0', 'intel-oneapi-mpi/2021.4.0',
            ],
            'variables': [
                # scratch dir
                ['SCRATCH_DIR', '/scratch/mahendra'],
                # # use ib (default) https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-library-2019-over-libfabric.html
                # ['FI_VERBS_IFACE', 'ib'],
                # Set locale
                ['LC_ALL', 'en_US.UTF-8'],
            ],
            'processor': {
                **alaska_compute_topo,
            },
            'prepare_cmds': [
                # 'mpiexec () { command mpiexec -prepend-pattern \"[%r]: \" '
                # '\"$@\"; }',  # wrap mpirun with rank tag (intel mpi specific)
                'module purge',
                'module use ${SPACK_ROOT}/var/spack/environments/alaska/lmod/linux*/Core',
            ],
            'extras': {
                'interconnect': '25',  # in Gb/s

The first part of the system configuration is fairly self-explanatory. The hostnames key must contain the names of the machines in this system; in this case, these are the hostnames of the login and compute nodes of the AlaSKA SLURM cluster.

Note

Note that the hostnames can be provided as regular expressions; within ReFrame, the standard Python package re is used to match them against the actual hostnames.
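
For example, the two hostnames above could equally be written as patterns (illustrative only):

'hostnames': [r'alaska-login-\d+', r'alaska-compute.*'],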

The modules_system key specifies the type of environment modules system used on the machine.

For each system, several partitions can be defined. As stated earlier, these can be physical scheduler partitions or abstract ones. How partitions are defined is up to the user and can be based on the types of tests to be performed on the system. Let’s look at the first partition of our example here:

        {
            'name': 'login',
            'descr': 'Login node of AlaSKA OpenHPC cluster '
                     '(Intel Core Processor (Broadwell, IBRS))',
            'scheduler': 'local',
            'launcher': 'local',
            'environs': [
                'builtin', 'gnu',
            ],
            'processor': {
                **alaska_login_topo,
            },
            'prepare_cmds': [
                'module purge',
            ],
            'extras': {
                'interconnect': '25',  # in Gb/s
            },
        },

This is a physical partition of the cluster, namely the login node of the AlaSKA cluster. The scheduler key defines the underlying workload manager used on the cluster, and launcher defines the parallel launcher (MPI wrapper) used to launch MPI jobs. For the login node, both are local, which means jobs run directly in the shell without any scheduler or parallel launcher. Typically, this partition can be used to clone repositories, download datasets and compile codes. We will come back to environs later. prepare_cmds are emitted at the top of the generated job scripts; they can contain any commands needed on that partition to run the jobs. Finally, the processor key specifies the processor topology of the node.

Important

The processor topology can be detected with ReFrame by running the following command:

reframe/bin/reframe --detect-host-topology=topo.json

on the node whose processor topology we want to record.
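
The resulting topo.json can then be reused in the configuration, for example by loading it into the dict (such as alaska_login_topo above) that is unpacked into the partition's processor key. A minimal sketch, assuming the detected topology was saved under config/topologies/:

import json

# Load the processor topology previously detected by ReFrame on the login node
# (the path is illustrative; adjust to wherever topo.json was saved)
with open('config/topologies/alaska_login.json') as f:
    alaska_login_topo = json.load(f)

# alaska_login_topo now holds keys such as 'num_cpus', 'num_sockets',
# 'num_cpus_per_core', etc. and can be spliced into the partition
# definition with **alaska_login_topo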

Now let’s look at the other partitions that we defined for AlaSKA.

        {
            'name': 'compute-gcc9-ompi4-roce-umod',
            'descr': 'AlaSKA OpenHPC cluster with 25Gb/s RoCE with gcc 9.3.0, openmpi 4.1.1 and '
                     'UCX transport layer (Intel Core Processor (Broadwell, IBRS))',
            'scheduler': 'slurm',
            'launcher': 'mpirun',
            'time_limit': '0d8h0m0s',
            'access': [
                '--partition=full',
                '--exclusive',
            ],
            'max_jobs': 8,
            'environs': [
                'babel-stream-omp',
                'builtin',
                'ior',
                'imaging-iotest',
                'imaging-iotest-mkl',
                'imb',
                'numpy',
                'rascil',
            ],
            'modules':  [
                'gcc/9.3.0', 'git/2.31.1',
                'git-lfs/2.11.0', 'openmpi/4.1.1'
            ],
            'variables': [
                # scratch dir
                ['SCRATCH_DIR', '/scratch/mahendra'],
                # use RoCE 25 Gb/s
                ['UCX_NET_DEVICES', 'mlx5_0:1'],
                # UCX likes to spit out tons of warnings. Confine log to errors
                ['UCX_LOG_LEVEL', 'ERROR'],
                # Set locale
                ['LC_ALL', 'en_US.UTF-8'],
            ],
            'processor': {
                **alaska_compute_topo,
            },
            'prepare_cmds': [
                # 'mpirun () { command mpirun --tag-output --timestamp-output '
                # '\"$@\"; }',  # wrap mpirun output tag and timestamp
                'module purge',
                'module use ${SPACK_ROOT}/var/spack/environments/alaska/lmod/linux*/Core',
            ],
            'extras': {
                'interconnect': '25',  # in Gb/s
                'mem': '115234864000',  # total memory in bytes
            },
        },
        {
            'name': 'compute-icc21-impi21-roce-umod',
            'descr': 'AlaSKA OpenHPC cluster with 25Gb/s RoCE with ICC 2021.4.0, '
                     'Intel-MPI 2021.4.0 (Intel Core Processor (Broadwell, IBRS))',
            'scheduler': 'slurm',
            'launcher': 'mpiexec',
            'time_limit': '0d8h0m0s',
            'access': [
                '--partition=full',
                '--exclusive',
            ],
            'max_jobs': 8,
            'environs': [
                'babel-stream-tbb',
                'builtin',
                'intel-hpcg',
                'intel-hpl',
                'imaging-iotest',
                'imaging-iotest-mkl',
                'intel-stream',
            ],
            'modules':  [
                'intel-oneapi-compilers/2021.4.0', 'git/2.31.1',
                'git-lfs/2.11.0', 'intel-oneapi-mpi/2021.4.0',
            ],
            'variables': [
                # scratch dir
                ['SCRATCH_DIR', '/scratch/mahendra'],
                # # use ib (default) https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-library-2019-over-libfabric.html
                # ['FI_VERBS_IFACE', 'ib'],
                # Set locale
                ['LC_ALL', 'en_US.UTF-8'],
            ],
            'processor': {
                **alaska_compute_topo,
            },
            'prepare_cmds': [
                # 'mpiexec () { command mpiexec -prepend-pattern \"[%r]: \" '
                # '\"$@\"; }',  # wrap mpirun with rank tag (intel mpi specific)
                'module purge',
                'module use ${SPACK_ROOT}/var/spack/environments/alaska/lmod/linux*/Core',
            ],
            'extras': {
                'interconnect': '25',  # in Gb/s

These are “abstract” partitions that are based on the physical compute partition of AlaSKA. For instance, the partition compute-gcc9-ompi4-roce-umod uses the 25 Gb/s RDMA over Converged Ethernet (RoCE) network interface with GCC 9.3.0 and OpenMPI 4.1.1 built with UCX support. The list of modules that need to be loaded every time this partition is used is specified with the modules key. To use this partition with the above specs, the OpenMPI 4.1.1 module has to be loaded, which is why it appears under modules. The access key defines the additional parameters that need to be passed to the scheduler in order to submit jobs; these typically include the partition the user can access and the user’s account name on the system. max_jobs is the maximum number of concurrent jobs that ReFrame can submit to the scheduler. The variables key can be used to define any environment variables that need to be set for this partition before running a job; here we set UCX parameters to use RoCE as the transport layer and specify the mlx5_0:1 port. When we run a test in this partition, ReFrame loads all the necessary modules and sets the environment variables to match this spec. Likewise, the compute-icc21-impi21-roce-umod partition uses the Intel compilers and Intel MPI.

This gives a general idea of what systems and partitions can do in the ReFrame framework. They give the user plenty of flexibility to define several partitions, and tests can be run on these partitions without changing the generic logic of the test itself.

Environment configuration

Partitions then support one or more environments, which describe the modules to be loaded, environment variables, options, etc. Environments are defined separately from partitions, so they may be specific to a system and partition, common to multiple systems or partitions, or a default environment may be overridden for specific systems and/or partitions. The third level is the tests themselves, which may also define modules to load etc., as well as which environments, partitions and systems they are valid for. ReFrame then runs tests on all combinations of valid partitions and environments. This is the hierarchy of configuration: systems, environments and tests.

Consider the environment example shown below:

    {
        'name': 'imaging-iotest',
        'target_systems': [
            'juwels-cluster:batch-gcc9-ompi4-ib-smod',
            'juwels-cluster:batch-gcc9-ompi4-ib-smod-mem192',
        ],
        'modules': [
            'HDF5/1.10.6', 'FFTW/3.3.8',
            'CMake/3.18.0',
        ],
        'cc': 'mpicc',
        'cxx': 'mpicxx',
        'ftn': 'mpif90',
    },
    {
        'name': 'imaging-iotest',
        'target_systems': [
            'juwels-cluster:batch-icc20-pmpi5-ib-smod',
            'juwels-cluster:batch-icc20-pmpi5-ib-smod-mem192',
        ],
        'modules': [
            'HDF5/1.10.6', 'FFTW/3.3.8',
            'CMake/3.18.0'
        ],
        'cc': 'mpiicc',
        'cxx': 'mpiicpc',
        'ftn': 'mpiifort',
    },  # <end JUWELS system software stack>
    {
        'name': 'imaging-iotest-mkl',
        'target_systems': [
            'juwels-cluster:batch-gcc9-ompi4-ib-smod',
            'juwels-cluster:batch-gcc9-ompi4-ib-smod-mem192',

As the name of the environment suggests, it is defined for the Imaging IO Test. The key target_systems lists the system partitions for which this environment is valid. Similarly, in each system definition, the environs key of a partition specifies the environments that we want to use within that partition.

Note

Every environment listed in a partition’s environs key must include that system partition in its target_systems, and vice versa. Otherwise, ReFrame will complain about a missing system partition or environment for a given test.

Finally, the modules key specifies the dependencies of the tests we will run within this environment. In the current example of the Imaging IO test, we need the HDF5 and FFTW libraries, and hence we load them. Additionally, the Imaging IO test can also use FFTW from the Intel MKL libraries when Intel oneAPI is available on the system; hence we define another environment here that uses FFTW from Intel MKL. In this way, environments can be defined for different tests.

It is up to the user how systems, partitions and environments are defined. Very generic systems, partitions and environments can be defined, and test-related modules and variables can instead be set within the Python test scripts, as sketched below.
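
For instance, a test could load its own modules and set its own environment variables directly, instead of relying on a dedicated environment. A minimal sketch (the module names, executable and system partition below are purely illustrative):

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class MyBenchmarkTest(rfm.RunOnlyRegressionTest):

    def __init__(self):
        self.valid_systems = ['alaska:compute-gcc9-ompi4-roce-umod']
        self.valid_prog_environs = ['builtin']
        # test-specific dependencies loaded here instead of in an environment
        self.modules = ['HDF5/1.10.6', 'FFTW/3.3.8']
        # test-specific environment variables
        self.variables = {'OMP_NUM_THREADS': '4'}
        self.executable = './my_benchmark'  # hypothetical executable
        self.sanity_patterns = sn.assert_found(r'SUCCESS', self.stdout)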

ReFrame usage

Basic usage

Once the system and environment configuration is finished, we can run ReFrame tests. Let’s consider a simple hello world ReFrame test:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class HelloMultiLangTest(rfm.RegressionTest):

    lang = parameter(['c', 'cpp'])
    arg = parameter(['Mercury', 'Venus', 'Mars'])

    def __init__(self):

        self.valid_systems = ['*']
        self.valid_prog_environs = ['gnu']
        self.tags |= {self.lang, self.arg}
        self.executable_opts = [self.arg, '> hello.out']
        self.sanity_patterns = sn.assert_found(
            r'Hello, World from {}!'.format(self.arg), 'hello.out')

    @run_before('compile')
    def set_sourcepath(self):
        self.sourcepath = f'hello.{self.lang}'

The test can be launched using the following command:

reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -r

This command has to be executed from the root of the repository. It will run all the tests defined in the reframe_helloworld.py file. The flag -C specifies the ReFrame configuration file. Alternatively, the environment variable RFM_CONFIG_FILE can be set to avoid passing this option every time on the CLI. The flag -c tells ReFrame which test file we want to run and, finally, -r tells ReFrame to actually run the tests. Useful CLI arguments are as follows (a few example invocations are shown after the list):

  • Option -l / --list : List all tests defined in the Python script

  • Option -L / --list-detailed : List all tests in detail, including their dependencies. More details on test dependencies in ReFrame can be found here.

  • Option --performance-report : Print the performance metrics at the end of the test

  • Option -p / --prgenv : Choose the environments on which to run the tests. By default, ReFrame runs tests on all valid environments

  • Option --system : Choose the system or system partition on which to run the tests.

  • Option -t / --tag : Restrict the tests to those matching the given tags. More about tags will be discussed later.
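
For example, these options can be combined with the hello world test as follows (using the partition and environment names from the example configuration above):

# list the matched tests without running them
reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -l

# run only on the gnu environment and print a performance report at the end
reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -p gnu --performance-report -r

# run on a specific system partition
reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py --system alaska:login -r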

Parameterisation of tests

Parameterisation is a very powerful feature of ReFrame. In the present example, we defined two parameters, namely lang and arg. The parameter lang specifies the language the source code is written in. Both the C and C++ sources of the sample code can be found in the helloworld/src folder. The parameter arg passes a CLI argument to the executable. For example, running gcc -o helloworld helloworld.c && ./helloworld Mercury will print Hello, World from Mercury! on the terminal. Now let’s check the number of tests ReFrame recognises from this simple test by running reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -l. The output is as follows:

[ReFrame Setup]
version:           3.8.0-dev.2+8a9ceeda
command:           'reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -l'
launched by:       mpaipuri@fnancy
working directory: '/home/mpaipuri/ska-sdp-benchmark-tests'
settings file:     'reframe_config.py'
check search path: (R) '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py'
stage directory:   '/home/mpaipuri/ska-sdp-benchmark-tests/stage'
output directory:  '/home/mpaipuri/ska-sdp-benchmark-tests/output'

[List of matched checks]
- HelloMultiLangTest_cpp_Venus (found in '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py')
- HelloMultiLangTest_c_Mercury (found in '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py')
- HelloMultiLangTest_cpp_Mars (found in '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py')
- HelloMultiLangTest_c_Venus (found in '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py')
- HelloMultiLangTest_c_Mars (found in '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py')
- HelloMultiLangTest_cpp_Mercury (found in '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py')
Found 6 check(s)

Log file(s) saved in '/home/mpaipuri/ska-sdp-benchmark-tests/reframe.log', '/home/mpaipuri/ska-sdp-benchmark-tests/reframe.out'

ReFrame found 6 tests: the hello world code in C with 3 arguments and in C++ with 3 arguments. Let’s run these tests and see what output we get using the reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -r command:

[ReFrame Setup]
version:           3.8.0-dev.2+8a9ceeda
command:           'reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -r'
launched by:       mpaipuri@fnancy
working directory: '/home/mpaipuri/ska-sdp-benchmark-tests'
settings file:     'reframe_config.py'
check search path: (R) '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py'
stage directory:   '/home/mpaipuri/ska-sdp-benchmark-tests/stage'
output directory:  '/home/mpaipuri/ska-sdp-benchmark-tests/output'

[==========] Running 6 check(s)
[==========] Started on Mon Aug 16 11:20:49 2021

[----------] started processing HelloMultiLangTest_c_Mercury (HelloMultiLangTest_c_Mercury)
[ RUN      ] HelloMultiLangTest_c_Mercury on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_c_Mercury (HelloMultiLangTest_c_Mercury)

[----------] started processing HelloMultiLangTest_c_Venus (HelloMultiLangTest_c_Venus)
[ RUN      ] HelloMultiLangTest_c_Venus on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_c_Venus (HelloMultiLangTest_c_Venus)

[----------] started processing HelloMultiLangTest_c_Mars (HelloMultiLangTest_c_Mars)
[ RUN      ] HelloMultiLangTest_c_Mars on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_c_Mars (HelloMultiLangTest_c_Mars)

[----------] started processing HelloMultiLangTest_cpp_Mercury (HelloMultiLangTest_cpp_Mercury)
[ RUN      ] HelloMultiLangTest_cpp_Mercury on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_cpp_Mercury (HelloMultiLangTest_cpp_Mercury)

[----------] started processing HelloMultiLangTest_cpp_Venus (HelloMultiLangTest_cpp_Venus)
[ RUN      ] HelloMultiLangTest_cpp_Venus on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_cpp_Venus (HelloMultiLangTest_cpp_Venus)

[----------] started processing HelloMultiLangTest_cpp_Mars (HelloMultiLangTest_cpp_Mars)
[ RUN      ] HelloMultiLangTest_cpp_Mars on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_cpp_Mars (HelloMultiLangTest_cpp_Mars)

[----------] waiting for spawned checks to finish
[       OK ] (1/6) HelloMultiLangTest_cpp_Venus on nancy-g5k:frontend using gnu [compile: 0.445s run: 0.652s total: 1.136s]
[       OK ] (2/6) HelloMultiLangTest_c_Mars on nancy-g5k:frontend using gnu [compile: 0.142s run: 1.703s total: 1.886s]
[       OK ] (3/6) HelloMultiLangTest_c_Mercury on nancy-g5k:frontend using gnu [compile: 0.149s run: 2.160s total: 2.349s]
[       OK ] (4/6) HelloMultiLangTest_cpp_Mars on nancy-g5k:frontend using gnu [compile: 0.451s run: 0.455s total: 0.946s]
[       OK ] (5/6) HelloMultiLangTest_c_Venus on nancy-g5k:frontend using gnu [compile: 0.131s run: 2.249s total: 2.427s]
[       OK ] (6/6) HelloMultiLangTest_cpp_Mercury on nancy-g5k:frontend using gnu [compile: 0.437s run: 1.738s total: 2.215s]
[----------] all spawned checks have finished

[  PASSED  ] Ran 6/6 test case(s) from 6 check(s) (0 failure(s), 0 skipped)
[==========] Finished on Mon Aug 16 11:20:52 2021
Run report saved in '/home/mpaipuri/.reframe/reports/run-report.json'
Log file(s) saved in '/home/mpaipuri/ska-sdp-benchmark-tests/reframe.log', '/home/mpaipuri/ska-sdp-benchmark-tests/reframe.out'

ReFrame ran all the possible tests and they all passed. ReFrame judges whether a test has passed or failed through its sanity check. In the reframe_helloworld.py script, we define the so-called sanity_patterns. As each test should print Hello, World from <arg>!, ReFrame checks with a regular expression whether this line is present in the standard output. If it is present, ReFrame marks the test as passed; otherwise, it marks it as failed. Of course, more advanced sanity checks can be written for complicated benchmarks. More details on sanity checking can be found here.
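
As an illustration, a sanity check can be combined with performance patterns to extract metrics from the benchmark output. The following sketch assumes a hypothetical benchmark that prints a 'Triad:' bandwidth line and a 'Solution Validates' message:

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class StreamLikeTest(rfm.RunOnlyRegressionTest):

    def __init__(self):
        self.valid_systems = ['alaska:compute-gcc9-ompi4-roce-umod']
        self.valid_prog_environs = ['builtin']
        self.executable = './stream_like_benchmark'  # hypothetical executable
        # pass only if the run validated and reported a bandwidth line
        self.sanity_patterns = sn.all([
            sn.assert_found(r'Solution Validates', self.stdout),
            sn.assert_found(r'Triad:\s+\S+', self.stdout),
        ])
        # extract the Triad bandwidth as a performance variable
        self.perf_patterns = {
            'triad_bandwidth': sn.extractsingle(
                r'Triad:\s+(?P<bw>\S+)', self.stdout, 'bw', float),
        }
        # no strict reference value; just record the number with its unit
        self.reference = {
            '*': {'triad_bandwidth': (0, None, None, 'MB/s')},
        }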

Tagging of tests

What if we want to run only a subset of tests? This comes in handy when running relatively big benchmarks, where we do not want to run all the tests defined in the ReFrame script. This can be achieved through ReFrame’s tag feature. That is where the line self.tags |= {self.lang, self.arg} comes into play: we tag each parameterised test with its parameter values. We can customise the tags as per our needs, for instance, self.tags |= {"language=%s" % self.lang, "planet=%s" % self.arg}. To restrict the tests to a given tag, we simply provide the -t flag on the CLI as follows:

reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -t c$ -r

Note

ReFrame uses regular expressions to match the tags. In this case, if we used -t c -t Mercury, it would also select the cpp test, since c matches cpp in a regular-expression context. So, we should use the end-of-line operator $ in these sorts of situations.

Let’s check the output of the above command:

[ReFrame Setup]
version:           3.8.0-dev.2+8a9ceeda
command:           'reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -t c$ -r'
launched by:       mpaipuri@fnancy
working directory: '/home/mpaipuri/ska-sdp-benchmark-tests'
settings file:     'reframe_config.py'
check search path: (R) '/home/mpaipuri/ska-sdp-benchmark-tests/helloworld/reframe_helloworld.py'
stage directory:   '/home/mpaipuri/ska-sdp-benchmark-tests/stage'
output directory:  '/home/mpaipuri/ska-sdp-benchmark-tests/output'

[==========] Running 3 check(s)
[==========] Started on Mon Aug 16 11:45:13 2021

[----------] started processing HelloMultiLangTest_c_Mercury (HelloMultiLangTest_c_Mercury)
[ RUN      ] HelloMultiLangTest_c_Mercury on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_c_Mercury (HelloMultiLangTest_c_Mercury)

[----------] started processing HelloMultiLangTest_c_Venus (HelloMultiLangTest_c_Venus)
[ RUN      ] HelloMultiLangTest_c_Venus on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_c_Venus (HelloMultiLangTest_c_Venus)

[----------] started processing HelloMultiLangTest_c_Mars (HelloMultiLangTest_c_Mars)
[ RUN      ] HelloMultiLangTest_c_Mars on nancy-g5k:frontend using gnu
[----------] finished processing HelloMultiLangTest_c_Mars (HelloMultiLangTest_c_Mars)

[----------] waiting for spawned checks to finish
[       OK ] (1/3) HelloMultiLangTest_c_Mercury on nancy-g5k:frontend using gnu [compile: 0.143s run: 0.493s total: 0.675s]
[       OK ] (2/3) HelloMultiLangTest_c_Venus on nancy-g5k:frontend using gnu [compile: 0.136s run: 0.476s total: 0.649s]
[       OK ] (3/3) HelloMultiLangTest_c_Mars on nancy-g5k:frontend using gnu [compile: 0.138s run: 0.451s total: 0.628s]
[----------] all spawned checks have finished

[  PASSED  ] Ran 3/3 test case(s) from 3 check(s) (0 failure(s), 0 skipped)
[==========] Finished on Mon Aug 16 11:45:14 2021
Run report saved in '/home/mpaipuri/.reframe/reports/run-report.json'
Log file(s) saved in '/home/mpaipuri/ska-sdp-benchmark-tests/reframe.log', '/home/mpaipuri/ska-sdp-benchmark-tests/reframe.out'

As we can see from the output, only the tests with helloworld.c have been executed. We can specify multiple tags using as many -t options as we want, as follows:

reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py -t c$ -t Mercury -r

This will execute only one test using helloworld.c and Mercury as a command line argument.

Similarly, if there are multiple environments defined for a test, we can confine the test to a given environment using the -p flag.

Note

The CLI arguments for tags, -t, name, -n, and environment, -p, take regular expressions as input and match them against the corresponding names of the tests. Hence, care should be taken when specifying them.

Test dependencies

One typical scenario when benchmarking is running scalability tests. A naive approach in ReFrame would be to clone the repository, compile the sources and run the benchmark for each node/runtime configuration. This is a sheer waste of time and resources, as all the runs within a given partition and environment share the same sources and executable. This can be addressed using test dependencies and fixtures.

An extensive overview of how test dependencies work in ReFrame is out of scope for the current documentation. Users are advised to check the official ReFrame documentation on test dependencies, which gives a very good idea of how they work and how to implement them. Similarly, fixtures can be used in place of dependencies; they are documented with a nice example in the official ReFrame documentation. A minimal dependency sketch is shown below.
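
Purely as an illustration, a compile-once/run-many pattern with dependencies might look like the sketch below (test names, source file and task counts are hypothetical, and the exact builtins may vary between ReFrame versions):

import os

import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class BuildBenchmarkTest(rfm.CompileOnlyRegressionTest):
    """Compile the benchmark once per partition and environment."""

    def __init__(self):
        self.valid_systems = ['alaska:compute-gcc9-ompi4-roce-umod']
        self.valid_prog_environs = ['builtin']
        self.sourcepath = 'benchmark.c'  # hypothetical source file
        self.sanity_patterns = sn.assert_not_found(r'error', self.stderr)


@rfm.simple_test
class RunBenchmarkTest(rfm.RunOnlyRegressionTest):
    """Run the benchmark at several scales, reusing the single build above."""

    num_nodes = parameter([1, 2, 4])

    def __init__(self):
        self.valid_systems = ['alaska:compute-gcc9-ompi4-roce-umod']
        self.valid_prog_environs = ['builtin']
        self.num_tasks = 32 * self.num_nodes  # hypothetical cores per node
        self.sanity_patterns = sn.assert_found(r'SUCCESS', self.stdout)
        # every scaling run depends on the same build test
        self.depends_on('BuildBenchmarkTest')

    @require_deps
    def set_executable(self, BuildBenchmarkTest):
        # point the run at the executable produced in the build test's stage directory
        self.executable = os.path.join(BuildBenchmarkTest().stagedir, 'benchmark')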

Multiple runs

Since ReFrame v3.12.0 there is support for the CLI flag --repeat=N. This runs all given tests N times independently of each other and collects their performance variables in independent perflogs. Those perflogs are aggregated by the perflog-reading methods in modules/utils.py.
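
For example, to run the hello world tests five times in a row:

reframe/bin/reframe -C reframe_config.py -c helloworld/reframe_helloworld.py --repeat=5 -r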