CSP.LMC Low project

Documentation Status

The documentation, including the architecture description, can be found at the SKA developer portal:

CSP.LMC low documentation

Repository organization

The repository has the following organization:

  • src: the folder with all the project source code

  • resources: contains the pogo directory, with the POGO files of the project's TANGO Device Classes, and the taranta_dashboards directory

  • tests: this folder is organized in sub-folders with unit tests and BDD tests, to be run against real and simulated sub-system devices.

  • charts: stores the Helm charts to deploy the Low CSP.LMC system in a Kubernetes environment.

  • docs: contains all the files to generate the documentation for the project.

Containerised Low CSP.LMC in Kubernetes

The TANGO devices of the Low CSP.LMC prototype run in a containerised environment. Currently only a limited number of Low CSP.LMC and Low CBF devices run in Docker containers:

  • the LowCspController and the Low CbfController

  • three instances of the Low CSP.LMC subarray and one instance of Low CBF subarray

  • the Low CBF Allocator, the Processor and an Alveo FPGA card simulator.

Note: check the umbrella chart for the number of CBF subarrays deployed in the version in use.

The Low CSP.LMC containerised TANGO servers are managed via Kubernetes. The system is set up so that each k8s Pod has only one Docker container, which in turn runs only one TANGO Device Server application.

Low CSP.LMC TANGO servers rely on three different Docker images: ska-csp-lmc-low, ska-low-cbf and ska-low-cbf-proc.
The first runs the Low CSP.LMC TANGO devices (real and simulators), the second those of the Low CBF.LMC prototype, and the third the Low CBF Processor TANGO device used to control and monitor an Alveo FPGA card, which is also essential for the proper operation of CSP.LMC.

Note: Low CSP.LMC is deployed with three subarrays, but only one is fully supported.

Build the Low CSP.LMC Docker image

The Low CSP.LMC project fully relies on the standard SKA CI/CD makefiles.

To deploy and test the project locally, Minikube has to be installed. ska-cicd-deploy-minikube provides all the instructions to set up a Minikube machine running in a virtual environment. The instructions are very detailed and cover many frequent issues. You can check the deployment settings with make vars. Be aware of the heavy hardware requirements: 4 cores or more and more than 8 GB of RAM.

A short installation procedure follows:

git clone git@gitlab.com:ska-telescope/sdi/deploy-minikube.git
cd deploy-minikube

To use the Podman driver:

make all

To use the Docker driver:

make all DRIVER=docker

To check that the minikube environment is up and running, issue the command:

minikube status

The output should be:

type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

To use the locally built image in the Low CSP.LMC system deployment, configure the environment to use Minikube's local Docker daemon by running:

eval $(minikube docker-env)

The local Makefile, in the root folder of the project, defines the settings and variables used to customize the Docker image build and the deployment of the system.

To build the Low CSP.LMC docker image, issue the command from the project root:

make oci-build

Low CSP.LMC Kubernetes Deployment via Helm Charts

The deployment of the system is handled by Helm, via Helm charts: sets of YAML files describing how the Kubernetes resources are related.
The Low CSP.LMC Helm Charts are stored in the charts directory, organized in several sub-folders:

  • ska-csp-lmc-low, with the Helm chart to deploy only the Low CSP.LMC devices: LowCspController and LowCspSubarray (4 instances)

  • low-csp-umbrella, with the Helm chart to deploy the whole Low CSP.LMC system, including the TANGO Database, the Low CBF.LMC devices and the Low CBF Processor. Using the custom values YAML files stored in this folder, the Low CSP.LMC can be deployed with a set of simulator devices for all the sub-systems.

In particular, the low-csp-umbrella chart depends on the Low CSP.LMC, Low CBF.LMC, Tango DB and Taranta charts; these dependencies are linked by specifying them in the dependencies field of Chart.yaml.
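
As an illustrative sketch (the chart names match the dependencies listed above, but the versions are placeholders, not the actual ones), such a dependencies field looks like:

```yaml
# charts/low-csp-umbrella/Chart.yaml (illustrative fragment)
apiVersion: v2
name: low-csp-umbrella
dependencies:
  - name: ska-csp-lmc-low
    version: "0.x.y"   # placeholder version
    repository: https://artefact.skao.int/repository/helm-internal
  - name: ska-low-cbf
    version: "0.x.y"   # placeholder version
    repository: https://artefact.skao.int/repository/helm-internal
  - name: ska-taranta
    version: "0.x.y"   # placeholder version
    repository: https://artefact.skao.int/repository/helm-internal
```

Helm resolves these entries at helm dependency update time, fetching each sub-chart from the listed repository.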

Deployment of Low CSP.LMC with real Low CBF sub-system devices

The following instructions deploy the Low CSP.LMC system with the real Low CBF devices (https://gitlab.com/ska-telescope/ska-low-cbf; https://gitlab.com/ska-telescope/ska-low-cbf-proc).

Once Minikube is running, the ska-csp-lmc-low repository has to be cloned. To do that, issue the commands:

$ git clone git@gitlab.com:ska-telescope/ska-csp-lmc-low.git
$ cd ska-csp-lmc-low
$ git submodule update --init --recursive

To deploy the Low CSP devices in a k8s environment, without GUI support, issue the command:

make k8s-install-chart

This command uses the values file values-default.yaml to install only the Low CSP.LMC and the real Low CBF devices. This configuration does not deploy the Taranta pods. The TANGO devices can be accessed using an itango or Jupyter shell.

If GUI support is desired, the following command can be used:

make VALUES_FILE=charts/low-csp-umbrella/values-taranta.yaml k8s-install-chart

In both cases, the output of the command should be similar to the following:

+++ Updating low-csp-umbrella chart +++
Getting updates for unmanaged Helm repositories...
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
Saving 7 charts
Downloading ska-tango-base from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-low-cbf from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-low-cbf-proc from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-taranta from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-taranta-auth from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-dashboard-repo from repo https://artefact.skao.int/repository/helm-internal
Deleting outdated charts
+++ Updating ska-csp-lmc-low chart +++
Getting updates for unmanaged Helm repositories...
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
Saving 2 charts
Downloading ska-tango-util from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-tango-base from repo https://artefact.skao.int/repository/helm-internal
Deleting outdated charts
Name:         low-csp
Labels:       kubernetes.io/metadata.name=low-csp
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
helm upgrade --install test \
--set global.minikube=true \
--set global.tango_host=tango-host-databaseds-from-makefile-test:10000 \
--set ska-csp-lmc-low.lowcsplmc.image.tag=0.1.3  \
--values gitlab_values.yaml \
         charts/low-csp-umbrella/ --namespace low-csp; \
rm gitlab_values.yaml
Release "test" does not exist. Installing it now.
NAME: test
LAST DEPLOYED: Wed Dec  1 09:39:43 2021
NAMESPACE: low-csp
STATUS: deployed

The CSP system is deployed in the namespace ‘low-csp’: to access any information about pods, logs, etc., please specify this namespace.

To monitor the deployment progress and wait for its completion, issue the command:

make k8s-wait

The deployment takes some time: if the Docker images are not already present on disk, they are downloaded from the CAR repository. The command output is similar to the following:

k8sWait: waiting for DatabaseDS(s) and DeviceServer(s) to be ready in 'low-csp'
mer 3 apr 2024, 18:32:46, CEST
NAME                                                   COMPONENTS   SUCCEEDED   AGE   STATE
databaseds.tango.tango-controls.org/tango-databaseds   2            2           8h    Running

NAME                                                             COMPONENTS   SUCCEEDED   AGE   STATE
deviceserver.tango.tango-controls.org/allocator-default          1            1           8h    Running
deviceserver.tango.tango-controls.org/controller-default         1            1           8h    Running
deviceserver.tango.tango-controls.org/cspcontroller-controller   1            1           8h    Running
deviceserver.tango.tango-controls.org/cspsubarray-subarray1      1            1           8h    Running
deviceserver.tango.tango-controls.org/cspsubarray-subarray2      1            1           8h    Running
deviceserver.tango.tango-controls.org/cspsubarray-subarray3      1            1           8h    Running
deviceserver.tango.tango-controls.org/cspsubarray-subarray4      1            1           8h    Running
deviceserver.tango.tango-controls.org/low-pst-beam-01            1            1           8h    Running
deviceserver.tango.tango-controls.org/processor-0                1            1           8h    Running
deviceserver.tango.tango-controls.org/processor-1                1            1           8h    Running
deviceserver.tango.tango-controls.org/subarray-1                 1            1           8h    Running
deviceserver.tango.tango-controls.org/subarray-2                 1            1           8h    Running
deviceserver.tango.tango-controls.org/subarray-3                 1            1           8h    Running
deviceserver.tango.tango-controls.org/subarray-4                 1            1           8h    Running
deviceserver.tango.tango-controls.org/tangotest-test             1            1           8h    Running
k8sWait: DatabaseDS(s) found: tango-databaseds
k8sWait: DeviceServer(s) found: allocator-default controller-default cspcontroller-controller cspsubarray-subarray1 cspsubarray-subarray2 cspsubarray-subarray3 cspsubarray-subarray4 low-pst-beam-01 processor-0 processor-1 subarray-1 subarray-2 subarray-3 subarray-4 tangotest-test
databaseds.tango.tango-controls.org/tango-databaseds condition met

real    0m0,272s
user    0m0,178s
sys     0m0,084s
k8sWait: DatabaseDS(s) running - tango-databaseds
deviceserver.tango.tango-controls.org/allocator-default condition met
deviceserver.tango.tango-controls.org/controller-default condition met
deviceserver.tango.tango-controls.org/cspcontroller-controller condition met
deviceserver.tango.tango-controls.org/cspsubarray-subarray1 condition met
deviceserver.tango.tango-controls.org/cspsubarray-subarray2 condition met
deviceserver.tango.tango-controls.org/cspsubarray-subarray3 condition met
deviceserver.tango.tango-controls.org/cspsubarray-subarray4 condition met
deviceserver.tango.tango-controls.org/low-pst-beam-01 condition met
deviceserver.tango.tango-controls.org/processor-0 condition met
deviceserver.tango.tango-controls.org/processor-1 condition met
deviceserver.tango.tango-controls.org/subarray-1 condition met
deviceserver.tango.tango-controls.org/subarray-2 condition met
deviceserver.tango.tango-controls.org/subarray-3 condition met
deviceserver.tango.tango-controls.org/subarray-4 condition met
deviceserver.tango.tango-controls.org/tangotest-test condition met

real    0m1,845s
user    0m0,322s
sys     0m0,130s
k8sWait: DeviceServer(s) running - allocator-default controller-default cspcontroller-controller cspsubarray-subarray1 cspsubarray-subarray2 cspsubarray-subarray3 cspsubarray-subarray4 low-pst-beam-01 processor-0 processor-1 subarray-1 subarray-2 subarray-3 subarray-4 tangotest-test
k8sWait: waiting for jobs to be ready in 'low-csp'
k8sWait: Jobs found:
k8sWait: no Jobs found to wait for using: kubectl get job --output=jsonpath={.items..metadata.name} -n low-csp
mer 3 apr 2024, 18:32:49, CEST
k8sWait: Pods found: ds-allocator-default-0 ds-controller-default-0 ds-cspcontroller-controller-0 ds-cspsubarray-subarray1-0 ds-cspsubarray-subarray2-0 ds-cspsubarray-subarray3-0 ds-cspsubarray-subarray4-0 ds-low-pst-beam-01-0 ds-processor-0-0 ds-processor-1-0 ds-subarray-1-0 ds-subarray-2-0 ds-subarray-3-0 ds-subarray-4-0 ds-tangotest-test-0 ska-tango-base-itango-console
k8sWait: going to - kubectl -n low-csp wait --for=condition=ready --timeout=360s pods ds-allocator-default-0 ds-controller-default-0 ds-cspcontroller-controller-0 ds-cspsubarray-subarray1-0 ds-cspsubarray-subarray2-0 ds-cspsubarray-subarray3-0 ds-cspsubarray-subarray4-0 ds-low-pst-beam-01-0 ds-processor-0-0 ds-processor-1-0 ds-subarray-1-0 ds-subarray-2-0 ds-subarray-3-0 ds-subarray-4-0 ds-tangotest-test-0 ska-tango-base-itango-console
pod/ds-allocator-default-0 condition met
pod/ds-controller-default-0 condition met
pod/ds-cspcontroller-controller-0 condition met
pod/ds-cspsubarray-subarray1-0 condition met
pod/ds-cspsubarray-subarray2-0 condition met
pod/ds-cspsubarray-subarray3-0 condition met
pod/ds-cspsubarray-subarray4-0 condition met
pod/ds-low-pst-beam-01-0 condition met
pod/ds-processor-0-0 condition met
pod/ds-processor-1-0 condition met
pod/ds-subarray-1-0 condition met
pod/ds-subarray-2-0 condition met
pod/ds-subarray-3-0 condition met
pod/ds-subarray-4-0 condition met
pod/ds-tangotest-test-0 condition met
pod/ska-tango-base-itango-console condition met

real    0m1,950s
user    0m0,327s
sys     0m0,129s
k8sWait: all Pods ready

The command:

helm list -n low-csp

returns information about the release name (test) and the namespace (low-csp).

NAME    NAMESPACE       REVISION        UPDATED                                         STATUS          CHART                   APP VERSION
test    low-csp         2               2024-04-03 18:30:32.412462847 +0200 CEST        deployed        low-csp-umbrella-0.12.0 0.12.0

To display the information about the system deployed in the low-csp namespace:

make k8s-watch

or

kubectl get all -n low-csp

If all the system pods are correctly deployed, the output of the above command should look like this:

NAME                                        READY   STATUS    RESTARTS     AGE
pod/databaseds-ds-tango-databaseds-0        1/1     Running   0            8h
pod/databaseds-tangodb-tango-databaseds-0   1/1     Running   0            8h
pod/ds-allocator-default-0                  1/1     Running   0            8h
pod/ds-controller-default-0                 1/1     Running   0            8h
pod/ds-cspcontroller-controller-0           1/1     Running   0            8h
pod/ds-cspsubarray-subarray1-0              1/1     Running   0            8h
pod/ds-cspsubarray-subarray2-0              1/1     Running   0            8h
pod/ds-cspsubarray-subarray3-0              1/1     Running   0            8h
pod/ds-cspsubarray-subarray4-0              1/1     Running   0            8h
pod/ds-low-pst-beam-01-0                    1/1     Running   1 (8h ago)   8h
pod/ds-processor-0-0                        1/1     Running   0            8h
pod/ds-processor-1-0                        1/1     Running   0            8h
pod/ds-subarray-1-0                         1/1     Running   0            8h
pod/ds-subarray-2-0                         1/1     Running   0            8h
pod/ds-subarray-3-0                         1/1     Running   0            8h
pod/ds-subarray-4-0                         1/1     Running   0            8h
pod/ds-tangotest-test-0                     1/1     Running   0            8h
pod/ska-tango-base-itango-console           1/1     Running   0            8h

NAME                                          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                           AGE
service/databaseds-tangodb-tango-databaseds   ClusterIP   <none>           3306/TCP                                          8h
service/ds-allocator-default                  LoadBalancer   45450:30510/TCP,45460:30200/TCP,45470:31589/TCP   8h
service/ds-controller-default                 LoadBalancer   45450:30103/TCP,45460:32476/TCP,45470:31453/TCP   8h
service/ds-cspcontroller-controller           LoadBalancer   45450:31829/TCP,45460:30871/TCP,45470:31114/TCP   8h
service/ds-cspsubarray-subarray1              LoadBalancer   45450:31420/TCP,45460:30253/TCP,45470:32269/TCP   8h
service/ds-cspsubarray-subarray2              LoadBalancer    45450:31154/TCP,45460:32257/TCP,45470:31132/TCP   8h
service/ds-cspsubarray-subarray3              LoadBalancer   45450:30864/TCP,45460:31581/TCP,45470:30417/TCP   8h
service/ds-cspsubarray-subarray4              LoadBalancer   45450:31967/TCP,45460:30405/TCP,45470:30194/TCP   8h
service/ds-low-pst-beam-01                    LoadBalancer   45450:30087/TCP,45460:30341/TCP,45470:32746/TCP   8h
service/ds-processor-0                        LoadBalancer   45450:30802/TCP,45460:31573/TCP,45470:32060/TCP   8h
service/ds-processor-1                        LoadBalancer   45450:31593/TCP,45460:30252/TCP,45470:30637/TCP   8h
service/ds-subarray-1                         LoadBalancer    45450:32666/TCP,45460:32421/TCP,45470:32206/TCP   8h
service/ds-subarray-2                         LoadBalancer   45450:30925/TCP,45460:32008/TCP,45470:32165/TCP   8h
service/ds-subarray-3                         LoadBalancer   45450:31393/TCP,45460:30901/TCP,45470:31139/TCP   8h
service/ds-subarray-4                         LoadBalancer   45450:32107/TCP,45460:31480/TCP,45470:30860/TCP   8h
service/ds-tangotest-test                     LoadBalancer   45450:30800/TCP,45460:30935/TCP,45470:31006/TCP   8h
service/tango-databaseds                      LoadBalancer    10000:30749/TCP                                   8h

NAME                                                   READY   AGE
statefulset.apps/databaseds-ds-tango-databaseds        1/1     8h
statefulset.apps/databaseds-tangodb-tango-databaseds   1/1     8h
statefulset.apps/ds-allocator-default                  1/1     8h
statefulset.apps/ds-controller-default                 1/1     8h
statefulset.apps/ds-cspcontroller-controller           1/1     8h
statefulset.apps/ds-cspsubarray-subarray1              1/1     8h
statefulset.apps/ds-cspsubarray-subarray2              1/1     8h
statefulset.apps/ds-cspsubarray-subarray3              1/1     8h
statefulset.apps/ds-cspsubarray-subarray4              1/1     8h
statefulset.apps/ds-low-pst-beam-01                    1/1     8h
statefulset.apps/ds-processor-0                        1/1     8h
statefulset.apps/ds-processor-1                        1/1     8h
statefulset.apps/ds-subarray-1                         1/1     8h
statefulset.apps/ds-subarray-2                         1/1     8h
statefulset.apps/ds-subarray-3                         1/1     8h
statefulset.apps/ds-subarray-4                         1/1     8h
statefulset.apps/ds-tangotest-test                     1/1     8h

Other Makefile targets, such as k8s-describe and k8s-podlogs, provide useful information in case of pod failures.

Note: to reduce the resources used by Taranta, the Low project starts only one replica of tangogql (see the values-taranta.yaml file in the root folder of the project).

When all the pods (Low CSP, CBF and Taranta) are running, you can access the system via the itango shell or, if Taranta has been deployed, via the Taranta GUI.

To uninstall the low-csp-umbrella chart and delete the test release in the low-csp namespace, issue the command:

make k8s-uninstall-chart

Deployment of Low CSP.LMC with CSP sub-system simulator devices

To deploy the Low CSP.LMC with the simulators, a values file different from the default one has to be specified. Set the VALUES_FILE variable to point to the values-sim-devs.yaml file to deploy the system with the Low CSP simulator devices for the sub-systems. The values-sim-with-taranta.yaml file can be used to also enable the deployment of Taranta.

To make use of the CSP sub-system simulators, the ska-csp-simulators chart has to be included in the umbrella chart. To deploy:

make VALUES_FILE=charts/low-csp-umbrella/values-sim-devs.yaml k8s-install-chart

In this case, the list of deployed pods is the following:

NAME                                    READY   STATUS    RESTARTS   AGE
databaseds-ds-tango-databaseds-0        1/1     Running   0          69s
databaseds-tangodb-tango-databaseds-0   1/1     Running   0          75s
ds-cspcontroller-controller-0           1/1     Running   0          42s
ds-cspsubarray-subarray1-0              1/1     Running   0          38s
ds-cspsubarray-subarray2-0              1/1     Running   0          41s
ds-cspsubarray-subarray3-0              1/1     Running   0          41s
ds-lowcbfctrl-ctrl-0                    1/1     Running   0          38s
ds-lowcbfsubarray-sub1-0                1/1     Running   0          38s
ds-lowcbfsubarray-sub2-0                1/1     Running   0          37s
ds-lowcbfsubarray-sub3-0                1/1     Running   0          39s
ds-lowpssctrl-ctrl-0                    1/1     Running   0          40s
ds-lowpsssubarray-sub1-0                1/1     Running   0          42s
ds-lowpsssubarray-sub2-0                1/1     Running   0          36s
ds-lowpsssubarray-sub3-0                1/1     Running   0          36s
ds-lowpstbeam-beam1-0                   1/1     Running   0          41s
ds-lowpstbeam-beam2-0                   1/1     Running   0          35s
ds-lowpstbeam-beam3-0                   1/1     Running   0          35s

This deployment is used to test the Low CSP.LMC system behavior:

  • with all the sub-systems, including those not yet developed, such as PSS and PST

  • when fault or anomalous conditions are injected in the simulated devices

To use a setup without PSS simulators (i.e. the one expected for AA 0.5), refer to the values-sim-devs-aa05.yaml file.

Further information on how to drive CSP simulators can be found in the documentation of ska-csp-simulators.

Run integration tests on a local k8s/minikube cluster

The project includes a set of BDD tests that can be run both with real and simulated TANGO Devices.

The tests with real devices are in the tests/integration folder, while those with simulators are in tests/simulated-system.

To run the tests on the local k8s cluster, deploy either the real or the simulated system (see above). To run integration tests with real devices, issue the following command from the project root directory:

make k8s-test

For integration tests with simulated devices, the default TEST_FOLDER variable has to be changed. To run those tests, issue the following command from the project root directory:

make TEST_FOLDER=simulated-system k8s-test

On test completion, uninstall the low-csp-umbrella chart.

Connect an interactive Tango Client to Low CSP.LMC

To test the Low CSP.LMC functionalities, it is possible to connect an interactive Tango client using the following tools: itango, Jupyter Notebook and Taranta. The following sections guide the user step by step.


Just issue the command:

kubectl exec -it  ska-tango-base-itango-console  -n low-csp -- itango3

Command completion is enabled; just press <tab>.
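
Inside the itango shell you can then, for example, list the devices registered in the TANGO database and create a device proxy. This is an interactive sketch: the device name below is the standard TangoTest device deployed by the ska-tango-base chart, so adapt it to the device you actually want to inspect.

```
lsdev                              # itango magic: list the devices in the TANGO database
d = DeviceProxy("sys/tg_test/1")   # standard TangoTest device name
d.State()                          # query the current device state
```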


To monitor and control the Low CSP.LMC via a GUI, the Low project provides a set of Taranta dashboards: they can be found in the resources/taranta_dashboards folder.

To deploy the Low CSP.LMC with Taranta support, use one of the following values files:

make VALUES_FILE=charts/low-csp-umbrella/values-taranta.yaml k8s-install-chart

to work with the real CBF sub-system, or

make VALUES_FILE=charts/low-csp-umbrella/values-sim-with-taranta.yaml k8s-install-chart

to work with the CSP sub-system simulators.

Start the Taranta dashboard

To start and use it, execute the following steps:

Open a browser (preferably Chrome) and specify the URL:



On success, the browser shows this page:

Login is required to issue any command on the system. Press the Login button (top right) and specify your team credentials (https://developer.skao.int/projects/ska-tango-taranta-suite/en/latest/taranta_users.html) using capital letters. In general these are:
uid: TeamName
pwd: TeamName_SKA

To load a Taranta Dashboard:

  • click on the Dashboards button (top left)

  • click on ‘Import Dashboard’ button

  • select the dashboard

Run the dashboard by pressing the ‘Start’ button at the top left of the page and enjoy the tour!

If Minikube is running on a remote machine, you can still access Taranta via SSH port forwarding. In another terminal, issue:

ssh -L 8081: <user>@<remote_machine>

And point your browser to


Taranta Dashboards to control CSP.LMC Low can be found at /resources/taranta_dashboards


It is possible to use JupyterHub as a client in a local (Minikube) environment. The first thing to do is to include the corresponding Helm chart in the deployment. To do this:

make k8s-install-chart VALUES_FILE=charts/low-csp-umbrella/values-jupyter.yaml

A few pods are added to the deployment. They should look as below:

NAME                                    READY   STATUS              RESTARTS   AGE
continuous-image-puller-mqwcw           1/1     Running             0          38s
hub-5d84f6dffd-qtzlh                    1/1     Running             0          38s
user-scheduler-8f6d6d4c6-cwqg5          1/1     Running             0          38s
user-scheduler-8f6d6d4c6-kwd7s          1/1     Running             0          38s

This also adds the following services (to list them, run kubectl get svc -n low-csp):

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
hub          ClusterIP    <none>           8081/TCP       2m12s
proxy-api    ClusterIP   <none>           8001/TCP       2m12s
proxy-public LoadBalancer    80:30492/TCP   2m12s

In particular, proxy-public is a LoadBalancer that exposes an external IP to access JupyterHub (note: this can change for each deployment).

To open JupyterHub, navigate to http://<proxy-public-external-ip>/hub in a web browser.

If the deployment is on a remote machine accessed via SSH, port forwarding is needed, as for Taranta:

ssh -L 8081:<proxy-public-external-ip>:80 <user>@<remote_machine>

And point your browser to


Username and password can be anything. Please note that a pod is created for each username, so in case of multiple logins it is suggested to use the same username (while the password can be different every time).

A collection of notebooks to run is in notebooks.

Deploy CSP.LMC on Low-PSI

CSP.LMC is periodically deployed and tested on the Low Prototype System Integration (Low-PSI) environment. It is a k8s cluster located in the CSIRO facility at Marsfield (Sydney, Australia). Low-PSI is a hardware set-up dedicated to the testing and verification of Low Telescope hardware, including CSP. SSH access to that cluster has to be granted in order to deploy the code. To request access and perform the following operations, follow the procedure described here.

Once access has been granted, reach the psi-head machine by “jumping” through the venice gateway server:

ssh <username>@psi-head.atnf.csiro.au -J <username>@venice.atnf.csiro.au

Once on psi-head, this repository can be cloned. To deploy the system:

make k8s-install-chart VALUES_FILE=charts/low-csp-umbrella/values-psi-low.yaml

This configuration file deploys the CSP.LMC and Low CBF in the Low-PSI Kubernetes cluster. A typical deployment looks as in the following picture.

The software devices run within pods that are assigned randomly to the nodes of the k8s cluster (psi-nodeX). The situation is different for the two processor-X-0 pods: they are scheduled onto the nodes that directly control the FPGA hardware used for correlation and beamforming. Since these are dedicated resources, if they are not available, e.g. because they are used in other deployments for other purposes, the pods remain Pending, because the k8s scheduling constraints (taints) cannot be satisfied.
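
As a purely hypothetical sketch of this kind of scheduling constraint (the label and taint names below are illustrative, not the actual PSI-Low ones), a processor pod spec would carry something like:

```yaml
# Illustrative only: pin a pod to FPGA-equipped nodes and tolerate their taint
spec:
  nodeSelector:
    fpga: "true"              # hypothetical node label
  tolerations:
    - key: "fpga-dedicated"   # hypothetical taint key
      operator: "Exists"
      effect: "NoSchedule"
```

With this kind of spec, the scheduler only places the pod on a matching node; if no such node is free, the pod stays Pending, as described above.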

If the deployment is successful, the system can be controlled via the itango interface (entering the pod ska-tango-base-itango-console), a Jupyter notebook or a Taranta dashboard.

Use Jupyter Notebook and Taranta Dashboard on Low-PSI

Low-PSI allows the use of JupyterHub and Taranta as services to access the deployment and control the Tango devices present there. These are deployed as common services shared across the entire cluster.

The first step is to forward the relevant network traffic to the client machine in order to access these services. To do this:

sshuttle -r <username>@venice.atnf.csiro.au

Low-PSI JupyterHub

After the connection is established, access JupyterHub by connecting to https://psi-low.atnf.csiro.au/jupyterhub/.

After connecting, select PSI Low staging and press the Start button. The typical JupyterHub interface looks like the following image. Be sure that the notebook is created under the proper space (i.e. the /csplmc/ folder) in the File Browser section on the left of the interface.

To control the specific low-csp deployment, the proper Tango database address has to be specified in the corresponding environment variable. To do this, in the top Python code cell of the notebook, execute:

import os
os.environ["TANGO_HOST"] = "tango-databaseds.low-csp:10000"
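
The value follows the pattern <service>.<namespace>:<port>, i.e. the Kubernetes DNS name of the databaseds service in the low-csp namespace plus the TANGO port. A small sketch of how the address is composed:

```python
# Compose the TANGO_HOST address from the Kubernetes service name,
# the deployment namespace and the databaseds port
service = "tango-databaseds"
namespace = "low-csp"
port = 10000
tango_host = f"{service}.{namespace}:{port}"
print(tango_host)  # tango-databaseds.low-csp:10000
```

The same pattern applies to any other namespace: only the middle component changes with the deployment.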

Jupyter notebooks to control CSP.LMC Low can be found in notebooks.

Low-PSI Taranta

To use Taranta, connect to https://psi-low.atnf.csiro.au/low-csp/taranta/devices. Please note that this address is specific to the low-csp deployment: Taranta controls the Tango devices registered in the Tango database of that deployment.

Taranta Dashboards to control CSP.LMC Low can be found at /resources/taranta_dashboards

Run automated tests on Low-PSI

Automated tests on Low-PSI can be run directly on the cluster via the command:

make psi-k8s-test MARK=psi_low

NOTE: the mark will no longer be needed in the future. Please note that the image used by the test runner needs to be updated in the Makefile at this line. The same tests can be triggered manually in the pipeline via the psi-low-test job.

Documentation of Low-PSI

For further information about the PSI Low deployment and operations, refer to the SKA Solution Intent documentation.

Use a local version of ska-csp-lmc-common

During development it can be useful to test local changes to ska-csp-lmc-common before releasing a new version of it. This is possible using some Makefile targets that act on the pyproject.toml and poetry.lock files.

Note: ska-csp-lmc-common folder must be in the parent directory of ska-csp-lmc-low.

To set up poetry for the installation of the local ska-csp-lmc-common:

make pre-local-install-common

while to restore the original files:

make post-local-install-common
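
What pre-local-install-common effectively arranges is a poetry path dependency on the sibling checkout; an illustrative fragment (not the exact content written by the target) is:

```toml
# pyproject.toml (illustrative fragment)
[tool.poetry.dependencies]
ska-csp-lmc-common = { path = "../ska-csp-lmc-common", develop = true }
```

This is why the ska-csp-lmc-common folder must sit in the parent directory: the relative path above has to resolve from the ska-csp-lmc-low root.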

These commands are integrated into others in order to simplify the procedure. They are presented in the following.

To build the image with the local ska-csp-lmc-common:

make local-oci-build

This automatically changes the poetry files and restores them after the installation. After the image is built, integration tests can be performed as usual.

To perform unit tests, a dedicated command opens a shell in a container with the local ska-csp-lmc-common already installed:

make dev-container

After launching this command, the tests can be performed as usual:

make python-test

In the same container, linting can also be performed with the local package.

Known bugs


If the command

kubectl logs -f pod/<podname> -n low-csp

aborts with failed to watch file <name-json.log>: no space left on device, you can fix it by connecting to the k8s node and increasing the inotify watch limit:

 $ ssh -l root  <psw root>
 $ sysctl fs.inotify.max_user_watches=1048576
 $ sysctl fs.inotify.max_user_watches
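
Note that the sysctl change above does not survive a reboot; a standard way to make it persistent (assuming the node reads /etc/sysctl.d at boot) is:

```
# /etc/sysctl.d/99-inotify.conf
fs.inotify.max_user_watches = 1048576
```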

If the configuration pods give a lot of errors, and the TangoDB pod reports the following message: [Warning] Aborted connection 3 to db: ‘unconnected’ user: ‘unauthenticated’ host: ‘’ (This connection closed normally without authentication)

Then, before making a deployment, you need to issue:



See the LICENSE file for details.