Cheetah - SKA - PSS - Prototype Time Domain Search Pipeline
cheetah uses cmake to generate the makefiles (or equivalent for your chosen tool chain). First create a directory for your build (you can have more than one, e.g. one per type of build: optimised or debug, with or without accelerator support built in, etc.). Then run cmake with suitable options to generate the build you require.
If all is well, cmake will generate the necessary files for your build system (by default on Linux systems this will be makefiles). Once the files are generated you can use your build tool (make on Linux) to build the libraries and executables, e.g.
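A typical out-of-source build follows this pattern (the directory name `build` and the source location are illustrative):

```shell
# create a dedicated build directory next to the source checkout
mkdir build
cd build

# generate the build files, pointing cmake at the cheetah source tree;
# configuration options (see below) can be appended to this command
cmake ..

# compile the libraries and executables
make
```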
-DBOOST_ROOT=<path_to_boost> : location of your boost version (see /usr/share/cmake/Modules/FindBoost.cmake for more options)
-DCMAKE_INSTALL_PREFIX=<path> : root directory used to install files when calling 'make install'. The default is usually /usr/local.
-DCMAKE_BUILD_TYPE={debug|release|profile} : the type of build
-DENABLE_CUDA=true : link against the CUDA libraries for GPU support
-DENABLE_PSRDADA=true : generate psrdada functions, linking against the psrdada library
-DENABLE_ASTROACCELERATE=true : use functions dependent on the astroaccelerate libraries
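Putting several of these options together, a configure step might look like the following sketch (all paths here are placeholders for your own installation locations):

```shell
# example configure step from within a build directory;
# the paths shown are illustrative, not defaults
cmake .. \
    -DBOOST_ROOT=/opt/boost \
    -DCMAKE_INSTALL_PREFIX=/opt/cheetah \
    -DCMAKE_BUILD_TYPE=release \
    -DENABLE_CUDA=true
```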
If you are having trouble with a specific package, the cmake modules can be checked for documentation. They reside either in cheetah's cmake directory or among the default cmake modules in /usr/share/cmake/Modules.
Release builds can be optimized based on profiling information generated by running the test suite. This is a three-stage process: build the libraries with profiling instrumentation, run the test suite to generate profiling information, and finally re-build the libraries using the profiling files.
The first step is to build the code with training instrumentation. To do this, create a build directory and, from within it, run cmake with the option -DCMAKE_BUILD_TYPE=pgotrain. If the training build directory is called training and is placed at the same level as the final build directory, step 3 is easier.
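Step 1 can be sketched as follows (the directory name `training` matches the convention described above; the source path is a placeholder):

```shell
# step 1: an instrumented training build, in a directory named
# "training" alongside the intended final build directory
mkdir training
cd training
cmake -DCMAKE_BUILD_TYPE=pgotrain <path_to_cheetah_source>
make
```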
Step 2 involves running the test suite; the training instrumentation built into the code will generate the coverage and profiling information used in the next step.
For step 3, create another build directory and from it run cmake with the build-optimization options. cmake needs to know where to look for the generated profiling files, so supply -DPGO_TRAINING_DIR pointing at the directory you created in step 1. You must also specify -DCMAKE_BUILD_TYPE=pgobuild so that the profiling information is used to generate optimized code. The recommendation is to place this build directory at the same level as the training directory; if the training build directory is called training and sits at the same level, you do not need to supply the -DPGO_TRAINING_DIR option.
You should then be able to just run make in the final build area; make will step back into the training area and run make all followed by make test before building all targets in the final build area.
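Assuming the recommended layout (a training build in a sibling directory named training, as above), the whole sequence can be sketched as follows; directory names and the source path are illustrative:

```shell
# step 1: instrumented training build (as described earlier)
mkdir training && cd training
cmake -DCMAKE_BUILD_TYPE=pgotrain <path_to_cheetah_source>
cd ..

# step 3: optimized build; with a sibling "training" directory,
# -DPGO_TRAINING_DIR need not be supplied, and running make here
# drives step 2 (make all, then make test in ../training) automatically
mkdir final && cd final
cmake -DCMAKE_BUILD_TYPE=pgobuild <path_to_cheetah_source>
make
```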
Currently the profiling information used to optimize the final build is derived from running the test suite. In general, profiling information should be generated by running the required executables in the way they would typically be run, as this gives the best optimization.
If this is changed in the future, the 'run test suite' phase should be replaced with whatever process has been devised to generate the profiling data in the correct directory.