SKAMPI Sub-systems

This page briefly describes the various sub-systems integrated within SKAMPI and provides useful links to the respective projects. You can also read more about the various Helm modules that make up SKAMPI, and their dependencies, on this Confluence page.

SDP (Science Data Processor)

The SDP is the sub-system of the telescope responsible for processing observed data into the required data products, preserving these products, and delivering them to the SKA Regional Centres.

OET (Observation Execution Tool)

The OET is an application that provides on-demand execution of Python telescope control scripts for the SKA.

Taranta

The Taranta deployment in SKAMPI consists of four components. By following the deployment steps to enable Taranta, a deployment can be made according to the requirements of the target environment; a rough sketch is shown below.
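
As a rough sketch only (the chart path and the values key shown here are assumptions, not the definitive SKAMPI interface; check the chart's values.yaml for the actual switch), enabling Taranta at deployment time amounts to turning on the relevant sub-chart through Helm values:

$ helm upgrade --install skampi ./charts/ska-mid \
    --namespace integration \
    --set ska-taranta.enabled=true    # chart path and values key are assumptions; verify against values.yaml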

Please refer to the Taranta documentation for further information.

Todo

(the link provided is not to the latest documentation version - update this link as soon as the Taranta name change is reflected on https://taranta.readthedocs.io/en/master/)

Taranta-specific deployment notes for the Minikube environment

Two aspects are important for developers deploying Taranta in a local Minikube environment: the resource requirements, and the need for authorization if the user wants to log into the web UI.

Enabling Taranta with authorization for the Dashboard UI

See the note in the README.

Resource Requirements

If the default TangoGQL deployment (replicas=3) turns out to request more resources than the local cluster can provide, this can be rectified by scaling down its StatefulSet.

As an example (assuming you are using the integration namespace):

$ kubectl  get all -n integration -l app=tangogql-ska-taranta-test
NAME                              READY   STATUS    RESTARTS   AGE
pod/tangogql-ska-taranta-test-0   1/1     Running   0          18h
pod/tangogql-ska-taranta-test-1   1/1     Running   0          18h
pod/tangogql-ska-taranta-test-2   0/1     Pending   0          3s

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/tangogql-ska-taranta-test   ClusterIP   10.105.252.8   <none>        5004/TCP   18h

NAME                                         READY   AGE
statefulset.apps/tangogql-ska-taranta-test   2/3     18h

The output shows that the third pod could not be scheduled for some reason. Let’s find out why:

$ kubectl  describe pod/tangogql-ska-taranta-test-2 -n integration
... snip ...
Events:
Type     Reason            Age   From               Message
----     ------            ----  ----               -------
Warning  FailedScheduling  69s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
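
Before changing anything, it can help to compare the node's allocatable CPU with what is already requested. Assuming the default Minikube node name (minikube), a quick check is:

$ kubectl describe node minikube | grep -A 8 "Allocated resources"    # node name assumes the default Minikube profile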

So let’s scale it down to only one replica:

$ kubectl -n integration scale statefulset tangogql-ska-taranta-test --replicas 1
statefulset.apps/tangogql-ska-taranta-test scaled

Verify the scaling worked:

$ kubectl get all -n integration -l app=tangogql-ska-taranta-test
NAME                              READY   STATUS    RESTARTS   AGE
pod/tangogql-ska-taranta-test-0   1/1     Running   0          18h

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/tangogql-ska-taranta-test   ClusterIP   10.105.252.8   <none>        5004/TCP   18h

NAME                                         READY   AGE
statefulset.apps/tangogql-ska-taranta-test   1/1     18h
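
Note that scaling with kubectl only changes the running StatefulSet; a later Helm upgrade will restore the replica count defined in the chart values. If the smaller footprint should persist, the replica count can instead be set through the chart values at deploy time, similar to the Taranta sketch above (the values key shown is an assumption; verify it against the chart's values.yaml):

$ helm upgrade --install skampi ./charts/ska-mid \
    --namespace integration \
    --set ska-taranta.tangogql.replicas=1    # values key is an assumption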

TMC (Telescope Monitoring and Control)

The Telescope Monitoring and Control (TMC) is the software module identified to perform the telescope management and data management functions of the Telescope Manager. The main responsibilities identified for the TMC are:

Support execution of astronomical observations

Manage telescope hardware and software subsystems in order to perform astronomical observations

Manage the data to support operators, maintainers, engineers and science users to achieve their goals

Determine telescope state.

To support these responsibilities, the TMC performs high-level functions such as Observation Execution, Monitoring and Control of the Telescope, Resource Management, Configuration Management, Alarm and Fault Management, and Telescope Data Management (historical and real-time data). These high-level functions are further divided into lower-level functions that perform the specific functionalities.

The TMC has a hierarchy of control nodes for Mid and Low: Central Node, Subarray Node, SDP Leaf Nodes, CSP Leaf Nodes, MCCS Leaf Nodes, and Dish Leaf Nodes.

The components of the TMC system (CentralNode, SubarrayNode, Leaf Nodes) are integrated in the TMC integration repository, which contains the Helm chart to deploy the TMC. More details on the design of the TMC, and on how to run it locally or in the integration environment, can be found in the Documentation.
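
As a minimal sketch of deploying the TMC chart on its own (the repository URL, chart name and release name used here are assumptions; the TMC documentation gives the authoritative instructions):

$ helm repo add skao https://artefact.skao.int/repository/helm-internal
$ helm repo update
$ helm install tmc skao/ska-tmc-mid --namespace integration    # chart and release names assumed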