SKAMPI Sub-systems

This page briefly describes the various sub-systems integrated within SKAMPI and provides useful links to the projects. You can also read more about the various Helm modules, and their dependencies, that make up SKAMPI at this Confluence page.

SDP (Science Data Processor)

The SDP is the system of the telescope responsible for processing observed data into required data products, preserving these products, and delivering them to the SKA Regional Centres.

OET (Observation Execution Tool)

The OET is an application that provides on-demand execution of Python telescope control scripts for the SKA.

Taranta

The Taranta deployment in SKAMPI consists of four components. By following the deployment steps to enable Taranta, the deployment can be tailored to the requirements of the target environment.

Please refer to the Taranta documentation for further information.

Todo

(the link provided is not to the latest documentation version - update this link as soon as the Taranta name change is reflected on https://taranta.readthedocs.io/en/master/)
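By way of illustration, Taranta is typically enabled through the SKAMPI Helm values rather than installed separately. The chart path, release name and the taranta.enabled flag below are assumptions that may differ between chart versions, so treat this as a sketch and check the SKAMPI values.yaml for the actual toggle:

# Sketch only: chart path, release name and value key are assumptions;
# consult the SKAMPI values.yaml for the toggle used by your chart version.
$ helm upgrade --install test ./charts/ska-mid -n integration --set taranta.enabled=true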

Taranta-specific deployment notes for a Minikube environment

Two important aspects for developers deploying Taranta on a local Minikube environment are the resource requirements, and the need for authorization if the user wants to be able to log into the web UI.

Enabling Taranta with authorization for the Dashboard UI

See the note in the README.

Resource Requirements

Regarding the resource requirements: if it becomes apparent that the default scaled deployment of TangoGQL (replicas=3) demands more resources than are available, this can be rectified by scaling down the StatefulSet.
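To see what each TangoGQL replica requests from the cluster, you can inspect the resources section of the StatefulSet's pod template. This is a sketch that reuses the namespace and StatefulSet name from the example below:

# Print the CPU/memory requests and limits defined for the TangoGQL containers.
$ kubectl -n integration get statefulset tangogql-ska-taranta-test \
    -o jsonpath='{.spec.template.spec.containers[*].resources}'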

For example (assuming you are using the integration namespace):

$ kubectl get all -n integration -l app=tangogql-ska-taranta-test
NAME                              READY   STATUS    RESTARTS   AGE
pod/tangogql-ska-taranta-test-0   1/1     Running   0          18h
pod/tangogql-ska-taranta-test-1   1/1     Running   0          18h
pod/tangogql-ska-taranta-test-2   0/1     Pending   0          3s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/tangogql-ska-taranta-test   ClusterIP   10.105.252.8   <none>        5004/TCP   18h

NAME                                         READY   AGE
statefulset.apps/tangogql-ska-taranta-test   2/3     18h

This means that the third pod was not scheduled for some reason. Let’s find out why:

$ kubectl describe pod/tangogql-ska-taranta-test-2 -n integration
... snip ...
Events:
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  69s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
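When scheduling fails with an Insufficient cpu event like the one above, it is worth checking how much of the node's capacity is already spoken for. A minimal sketch, assuming a single-node Minikube cluster whose node is named minikube:

# Show the node's allocatable resources and the totals already requested by running pods.
$ kubectl describe node minikube | grep -A 8 "Allocated resources"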

So let’s scale it down to only one replica:

$ kubectl -n integration scale statefulset tangogql-ska-taranta-test --replicas 1
statefulset.apps/tangogql-ska-taranta-test scaled

Verify the scaling worked:

$ kubectl get all -n integration -l app=tangogql-ska-taranta-test
NAME                              READY   STATUS    RESTARTS   AGE
pod/tangogql-ska-taranta-test-0   1/1     Running   0          18h

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/tangogql-ska-taranta-test   ClusterIP   10.105.252.8   <none>        5004/TCP   18h

NAME                                         READY   AGE
statefulset.apps/tangogql-ska-taranta-test   1/1     18h
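Keep in mind that scaling the StatefulSet by hand is a temporary measure: the next helm upgrade of SKAMPI will restore the replica count defined in the chart. If your machine has the capacity, an alternative is to recreate the Minikube cluster with more resources before redeploying (a sketch; adjust the figures to your hardware):

# Recreate the Minikube cluster with more CPU and memory, then redeploy SKAMPI.
$ minikube delete
$ minikube start --cpus 6 --memory 8192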