SDP Workflow MID Self-calibration

Self-calibration is an iterative process in which the visibility data are used to calibrate themselves, building up a progressively more complete model of the sky from an increasing number of clean components as the calibrated visibilities and the image converge towards the true sky. It relies on the fact that an array of n antennas has only n complex gains to solve for, but provides n * (n - 1) / 2 baseline measurements, so the number of measurements outgrows the number of unknowns and the gain solution becomes well constrained once n is larger than a few.
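To make the counting argument concrete, the short sketch below compares the number of unknown gains with the number of baseline measurements for a few antenna counts (the counts are illustrative, not tied to any particular array):

    # Unknown gains grow linearly with the antenna count n, while
    # baseline measurements grow quadratically, so the system of gain
    # equations becomes increasingly overdetermined as the array grows.
    for n in (4, 16, 64, 197):  # illustrative antenna counts
        print(f"{n:4d} antennas: {n:4d} gains, {n * (n - 1) // 2:6d} baselines")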

Typically, self-calibration starts with an initial shallow clean that finds only the brightest sources in the field. Model visibilities are generated from these components and used to calibrate the measured visibilities by solving for sets of complex gains, which are then applied to produce a corrected data set. The corrected data are imaged and cleaned a little more deeply, and the expanded list of clean components is used to generate a more accurate set of model visibilities. The process is repeated until the image quality stops improving, usually after only a few iterations. The first iteration may solve for phases only, since a shallow clean may not determine the source amplitudes accurately enough at that point; subsequent iterations can solve for both amplitude and phase.
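As an illustration of this loop, here is a minimal sketch that drives WSClean and DP3 directly from Python. This is not the pipeline's own implementation: the Measurement Set name, image size, iteration counts and solution interval are placeholder values that a real run would tune to the observation.

    import subprocess

    MS = "observation.ms"  # placeholder Measurement Set path

    # Clean progressively deeper on each self-calibration pass.
    for i, niter in enumerate([1000, 5000, 20000]):
        # Image and clean; by default WSClean also predicts the clean
        # components into the MODEL_DATA column of the Measurement Set.
        subprocess.run(
            [
                "wsclean",
                "-size", "4096", "4096",
                "-scale", "1asec",
                "-niter", str(niter),
                "-mgain", "0.8",
                "-auto-threshold", "3",
                "-data-column", "CORRECTED_DATA" if i > 0 else "DATA",
                "-name", f"selfcal_{i}",
                MS,
            ],
            check=True,
        )

        # Solve for complex gains against MODEL_DATA and apply them,
        # writing the result to CORRECTED_DATA. Phase-only on the first
        # pass; amplitude and phase on subsequent passes.
        caltype = "phaseonly" if i == 0 else "diagonal"
        subprocess.run(
            [
                "DP3",
                f"msin={MS}",
                "msout=.",
                "msout.datacolumn=CORRECTED_DATA",
                "steps=[gaincal]",
                f"gaincal.caltype={caltype}",
                "gaincal.usemodelcolumn=true",
                "gaincal.applysolution=true",
                "gaincal.solint=0",  # one solution per chunk; illustrative
                f"gaincal.parmdb=solutions_{i}.h5",
            ],
            check=True,
        )

Because DP3 reads its default input column (DATA) when solving, each pass solves against the original data with the latest model, so the gain solutions stay absolute rather than accumulating incrementally.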

This documentation describes how to run the Mid self-calibration pipelines, which implement this self-calibration loop using the LOFAR software components DP3 and WSClean. One pipeline performs direction-independent (DI) calibration; the other performs direction-dependent (DD) calibration and runs considerably slower.

  • See the Installation page for installation instructions.

  • See the Docker and singularity images page if you wish the pipeline to run DP3 and WSClean inside a Singularity container; with this option, you do not have to install these programs on your machine.

  • See the DD Pipeline Usage page for how to run the direction-dependent pipeline.

  • See the Launching a Dask Cluster page for how to distribute eligible steps of the DD pipeline across one or more nodes with Dask.

  • See the Running on SLURM page for an example SLURM batch script for use on HPC clusters.

  • See the Additional Apps page for information about the additional command-line apps that are included with the pipeline.