CSP.LMC Mid project

Documentation

The documentation, including the architecture description, can be found on the SKA developer portal:

CSP.LMC mid documentation

Repository organization

The repository has the following organization:

  • src: the folder with all the project source code

  • resources: contains the pogo directory, with the POGO files of the project's TANGO Device Classes, and the taranta_dashboards directory

  • tests: this folder is organized into sub-folders with unit tests and BDD tests, to be run against real and simulated sub-system devices.

  • charts: stores the Helm charts to deploy the Mid CSP.LMC system in a Kubernetes environment.

  • docs: contains all the files to generate the documentation for the project.

Containerised Mid CSP.LMC in Kubernetes

The TANGO devices of the Mid CSP.LMC prototype run in a containerised environment. Currently only a limited number of Mid CSP.LMC and Mid CBF devices run in Docker containers:

  • the MidCspController and the Mid CbfController

  • three instances each of the Mid CSP and Mid CBF sub-arrays

  • four instances of the Very Coarse Channelizer (VCC) devices

  • four instances of the Frequency Slice Processor (FSP) devices

  • two instances of the TM TelState Simulator devices

  • one instance of the TANGO database

The Mid CSP.LMC containerised TANGO servers are managed via Kubernetes. The system is set up so that each k8s Pod has only one Docker container, which in turn runs only one TANGO Device Server application.

Mid CSP.LMC TANGO Servers rely on two different Docker images: ska-csp-lmc-mid and ska-mid-cbf-mcs.
The first one runs the Mid CSP.LMC TANGO devices (real and simulated) and the second those of the Mid CBF.LMC prototype.

Build the Mid CSP.LMC Docker image

The Mid CSP.LMC project fully relies on the standard SKA CI/CD makefiles.

In order to locally deploy and test the project, Minikube has to be installed. ska-cicd-deploy-minikube provides all the instructions to set up a minikube machine running in a virtual environment. The instructions are very detailed and cover many frequent issues. You can check the deployment with make vars. Be aware of the heavy hardware requirements: at least 4 cores and more than 8 GB of RAM.

A short installation procedure follows:

git clone git@gitlab.com:ska-telescope/sdi/deploy-minikube.git
cd deploy-minikube

To use the Pod driver:

make all

To use the Docker driver:

make all DRIVER=docker

To check that the minikube environment is up and running, issue the command

minikube status

the output should be:

minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

To use the built image in the Mid CSP.LMC system deployment, the environment should be configured to use the local minikube's Docker daemon, by running

eval $(minikube docker-env)
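
To check that the current shell now points at minikube's Docker daemon, a quick sanity check (the format field prints the daemon's host name):

docker info --format '{{.Name}}'

should print minikube.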

The local Makefile, in the root folder of the project, defines the settings and variables used to customize the Docker image build and the deployment of the system.

To build the Mid CSP.LMC Docker image, issue the following command from the project root:

make oci-build
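
The built image should then be visible to minikube's Docker daemon; a quick check (assuming docker-env was sourced as described above):

docker images | grep ska-csp-lmc-mid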

Mid CSP.LMC Kubernetes Deployment via Helm Charts

The deployment of the system is handled by Helm, via Helm charts: sets of YAML files describing how the Kubernetes resources are related.
The Mid CSP.LMC Helm charts are stored in the charts directory, organized in several sub-folders:

  • ska-csp-lmc-mid: the Helm chart to deploy only the Mid CSP.LMC devices (MidCspController and three MidCspSubarray instances)

  • mid-csp-umbrella: the Helm chart to deploy the whole Mid CSP.LMC system, including the TANGO database and the Mid CBF.LMC devices. Using the custom values YAML files stored in this folder, the Mid CSP.LMC can be deployed with a set of simulator devices for all the CSP sub-systems.

In particular, the mid-csp-umbrella chart depends on the Mid CSP.LMC, Mid CBF.LMC, TANGO DB and Taranta charts; these dependencies are dynamically linked via the dependencies field in Chart.yaml.
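
As an illustrative sketch, a dependencies entry in Chart.yaml looks like the following (chart names are taken from the deployment log below; versions are placeholders, the authoritative list is in charts/mid-csp-umbrella/Chart.yaml):

dependencies:
  - name: ska-tango-base
    version: <version>
    repository: https://artefact.skao.int/repository/helm-internal
  - name: ska-mid-cbf
    version: <version>
    repository: https://artefact.skao.int/repository/helm-internal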

Deployment of Mid CSP.LMC with real Mid CBF sub-system devices

Below are the instructions to deploy the Mid CSP.LMC system with the real Mid CBF devices (https://gitlab.com/ska-telescope/ska-mid-cbf-mcs).

With Minikube running, the ska-csp-lmc-mid repository has to be cloned and its submodules initialized. To do that, issue the commands:

$ git clone git@gitlab.com:ska-telescope/ska-csp-lmc-mid.git
$ cd ska-csp-lmc-mid
$ git submodule update --init --recursive

To deploy the Mid CSP devices in a k8s environment, without GUI support, issue the command:

make k8s-install-chart

This command uses the values file values-default.yaml to install only the Mid CSP.LMC and Mid CBF real devices. This configuration does not deploy Taranta pods. The TANGO devices can be accessed using an itango or Jupyter shell.

If GUI support is desired, the following command can be used instead:

make VALUES_FILE=charts/mid-csp-umbrella/values-taranta.yaml k8s-install-chart

In both cases, the output of the command should be similar to the following:

k8s-dep-update: updating dependencies
+++ Updating mid-csp-umbrella chart +++
Getting updates for unmanaged Helm repositories...
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
...Successfully got an update from the "https://artefact.skao.int/repository/helm-internal" chart repository
Saving 11 charts
Downloading ska-tango-base from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-mid-cbf from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-mid-cbf-tmleafnode from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-tango-taranta from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-taranta-auth from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-tango-taranta-dashboard from repo https://artefact.skao.int/repository/helm-internal
Downloading ska-tango-taranta-dashboard-pvc from repo https://artefact.skao.int/repository/helm-internal
Deleting outdated charts
Name:         mid-csp
Labels:       kubernetes.io/metadata.name=mid-csp
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.
install-chart: install charts/mid-csp-umbrella/ release: test in Namespace: mid-csp with params: --set global.minikube= --set global.tango_host=tango-databaseds:10000 --set ska-tango-base.display=192.168.49.1:0 --set ska-tango-base.xauthority=/home/egiani/.Xauthority --set ska-tango-base.jive.enabled= --set sim-cbf.simsubsystem.image.tag=0.11.5-dirty --set sim-pss.simsubsystem.image.tag=0.11.5-dirty --set sim-pst.simsubsystem.image.tag=0.11.5-dirty --set ska-csp-lmc-mid.midcsplmc.image.tag=0.11.5-dirty --values charts/mid-csp-umbrella/values-default.yaml 
helm upgrade --install test \
--set global.minikube= --set global.tango_host=tango-databaseds:10000 --set ska-tango-base.display=192.168.49.1:0 --set ska-tango-base.xauthority=/home/egiani/.Xauthority --set ska-tango-base.jive.enabled= --set sim-cbf.simsubsystem.image.tag=0.11.5-dirty --set sim-pss.simsubsystem.image.tag=0.11.5-dirty --set sim-pst.simsubsystem.image.tag=0.11.5-dirty --set ska-csp-lmc-mid.midcsplmc.image.tag=0.11.5-dirty --values charts/mid-csp-umbrella/values-default.yaml  \
 charts/mid-csp-umbrella/ --namespace mid-csp
Release "test" does not exist. Installing it now.
NAME: test
LAST DEPLOYED: Thu May  5 09:48:18 2022
NAMESPACE: mid-csp
STATUS: deployed
REVISION: 1
TEST SUITE: None

The CSP system is deployed in the namespace ‘mid-csp’: to access any information about pods, logs etc. please specify this namespace.

To monitor the deployment progress and wait for its completion, issue the command:

make k8s-wait

The deployment can take some time: if the Docker images are not already present on disk, they are downloaded from the CAR repository. The command output is similar to the following:

k8sWait: waiting for pods to be ready in mid-csp
gio 5 mag 2022, 09:48:39, CEST
NAME                                       READY   STATUS      RESTARTS   AGE
cbfcontroller-controller-0                 0/1     Init:0/10   0          17s
cbfmcs-mid-configuration-test-ljn4c        0/1     Init:0/1    0          18s
cbfsubarray01-cbfsubarray-01-0             0/1     Init:0/2    0          17s
cbfsubarray02-cbfsubarray-02-0             0/1     Init:0/2    0          16s
cbfsubarray03-cbfsubarray-03-0             0/1     Init:0/2    0          18s
csp-lmc-configuration-test-vp744           0/1     Init:0/1    0          18s
fsp01-fsp-01-0                             0/1     Init:0/5    0          17s
fsp02-fsp-02-0                             0/1     Init:0/5    0          17s
fsp03-fsp-03-0                             0/1     Init:0/5    0          16s
fsp04-fsp-04-0                             0/1     Init:0/5    0          18s
midcspcontroller-controller-0              0/1     Init:0/1    0          18s
midcspsubarray01-subarray1-0               0/1     Init:0/1    0          18s
midcspsubarray02-subarray2-0               0/1     Init:0/1    0          18s
midcspsubarray03-subarray3-0               0/1     Init:0/1    0          17s
powerswitch001-powerswitch-001-0           0/1     Init:0/1    0          18s
ska-tango-base-tangodb-0                   1/1     Running     0          18s
talonlru001-talonlru-001-0                 0/1     Init:0/2    0          16s
tango-databaseds-0                         1/1     Running     0          18s
tangotest-config-4bfqp                     1/1     Running     0          18s
tangotest-test-0                           0/1     Init:0/2    0          15s
tmcspsubarrayleafnodetest-tm-0             0/1     Init:0/2    0          16s
tmcspsubarrayleafnodetest2-tm2-0           0/1     Init:0/2    0          18s
tmsimulator-mid-configuration-test-4g6r2   0/1     Init:0/1    0          18s
vcc001-vcc-001-0                           0/1     Init:0/2    0          17s
vcc002-vcc-002-0                           0/1     Init:0/2    0          18s
vcc003-vcc-003-0                           0/1     Init:0/2    0          18s
vcc004-vcc-004-0                           0/1     Init:0/2    0          16s
gio 5 mag 2022, 09:48:39, CEST
k8sWait: Jobs found: cbfmcs-mid-configuration-test csp-lmc-configuration-test tangotest-config tmsimulator-mid-configuration-test
job.batch/cbfmcs-mid-configuration-test condition met
job.batch/csp-lmc-configuration-test condition met
job.batch/tangotest-config condition met
job.batch/tmsimulator-mid-configuration-test condition met

real	0m10,320s
user	0m0,062s
sys	0m0,033s
k8sWait: Jobs complete - cbfmcs-mid-configuration-test csp-lmc-configuration-test tangotest-config tmsimulator-mid-configuration-test 
gio 5 mag 2022, 09:48:49, CEST
k8sWait: Pods found: cbfcontroller-controller-0 cbfsubarray01-cbfsubarray-01-0 cbfsubarray02-cbfsubarray-02-0 cbfsubarray03-cbfsubarray-03-0 fsp01-fsp-01-0 fsp02-fsp-02-0 fsp03-fsp-03-0 fsp04-fsp-04-0 midcspcontroller-controller-0 midcspsubarray01-subarray1-0 midcspsubarray02-subarray2-0 midcspsubarray03-subarray3-0 powerswitch001-powerswitch-001-0 talonlru001-talonlru-001-0 vcc001-vcc-001-0 vcc002-vcc-002-0 vcc003-vcc-003-0 vcc004-vcc-004-0
k8sWait: going to - kubectl -n mid-csp wait --for=condition=ready --timeout=360s pods cbfcontroller-controller-0 cbfsubarray01-cbfsubarray-01-0 cbfsubarray02-cbfsubarray-02-0 cbfsubarray03-cbfsubarray-03-0 fsp01-fsp-01-0 fsp02-fsp-02-0 fsp03-fsp-03-0 fsp04-fsp-04-0 midcspcontroller-controller-0 midcspsubarray01-subarray1-0 midcspsubarray02-subarray2-0 midcspsubarray03-subarray3-0 powerswitch001-powerswitch-001-0 talonlru001-talonlru-001-0 vcc001-vcc-001-0 vcc002-vcc-002-0 vcc003-vcc-003-0 vcc004-vcc-004-0
pod/cbfcontroller-controller-0 condition met
pod/cbfsubarray01-cbfsubarray-01-0 condition met
pod/cbfsubarray02-cbfsubarray-02-0 condition met
pod/cbfsubarray03-cbfsubarray-03-0 condition met
pod/fsp01-fsp-01-0 condition met
pod/fsp02-fsp-02-0 condition met
pod/fsp03-fsp-03-0 condition met
pod/fsp04-fsp-04-0 condition met
pod/midcspcontroller-controller-0 condition met
pod/midcspsubarray01-subarray1-0 condition met
pod/midcspsubarray02-subarray2-0 condition met
pod/midcspsubarray03-subarray3-0 condition met
pod/powerswitch001-powerswitch-001-0 condition met
pod/talonlru001-talonlru-001-0 condition met
pod/vcc001-vcc-001-0 condition met
pod/vcc002-vcc-002-0 condition met
pod/vcc003-vcc-003-0 condition met
pod/vcc004-vcc-004-0 condition met

The command:

helm list -n mid-csp

returns information about the release name (test) and the namespace (mid-csp).

NAME	NAMESPACE	REVISION	UPDATED                                 	STATUS  	CHART                  	APP VERSION
test	mid-csp  	1       	2022-05-05 09:32:18.964172768 +0200 CEST	deployed	mid-csp-umbrella-0.11.5	0.11.5  

To display the information about the system deployed in the mid-csp namespace:

make k8s-watch

or

kubectl get all -n mid-csp

If all the system pods are correctly deployed, the output of the above command should look like the following:

NAME                               READY   STATUS    RESTARTS   AGE
cbfcontroller-controller-0         1/1     Running   0          3m53s
cbfsubarray01-cbfsubarray-01-0     1/1     Running   0          3m53s
cbfsubarray02-cbfsubarray-02-0     1/1     Running   0          3m54s
cbfsubarray03-cbfsubarray-03-0     1/1     Running   0          3m53s
fsp01-fsp-01-0                     1/1     Running   0          3m53s
fsp02-fsp-02-0                     1/1     Running   0          3m52s
fsp03-fsp-03-0                     1/1     Running   0          3m52s
fsp04-fsp-04-0                     1/1     Running   0          3m53s
midcspcontroller-controller-0      1/1     Running   0          3m53s
midcspsubarray01-subarray1-0       1/1     Running   0          3m54s
midcspsubarray02-subarray2-0       1/1     Running   0          3m51s
midcspsubarray03-subarray3-0       1/1     Running   0          3m54s
powerswitch001-powerswitch-001-0   1/1     Running   0          3m53s
ska-tango-base-tangodb-0           1/1     Running   0          3m52s
talonlru001-talonlru-001-0         1/1     Running   0          3m53s
tango-databaseds-0                 1/1     Running   0          3m53s
tangotest-test-0                   1/1     Running   0          3m51s
tmcspsubarrayleafnodetest-tm-0     1/1     Running   0          3m54s
tmcspsubarrayleafnodetest2-tm2-0   1/1     Running   0          3m54s
vcc001-vcc-001-0                   1/1     Running   0          3m52s
vcc002-vcc-002-0                   1/1     Running   0          3m52s
vcc003-vcc-003-0                   1/1     Running   0          3m53s
vcc004-vcc-004-0                   1/1     Running   0          3m51s

NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/cbfcontroller-controller         ClusterIP   None            <none>        <none>            16m
service/cbfsubarray01-cbfsubarray-01     ClusterIP   None            <none>        <none>            16m
service/cbfsubarray02-cbfsubarray-02     ClusterIP   None            <none>        <none>            16m
service/cbfsubarray03-cbfsubarray-03     ClusterIP   None            <none>        <none>            16m
service/fsp01-fsp-01                     ClusterIP   None            <none>        <none>            16m
service/fsp02-fsp-02                     ClusterIP   None            <none>        <none>            16m
service/fsp03-fsp-03                     ClusterIP   None            <none>        <none>            16m
service/fsp04-fsp-04                     ClusterIP   None            <none>        <none>            16m
service/midcspcontroller-controller      ClusterIP   None            <none>        <none>            16m
.....

Other Makefile targets, such as k8s-describe and k8s-podlogs, provide useful information in case of pod failures.
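
The same information can also be retrieved with plain kubectl, for example (the pod name is just an example):

kubectl describe pod midcspcontroller-controller-0 -n mid-csp
kubectl logs midcspcontroller-controller-0 -n mid-csp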

When all the pods (Mid CSP, CBF and Taranta) are running, you can access the system via an itango shell or, if Taranta has been deployed, via the Taranta GUI.

To uninstall the mid-csp-umbrella chart and delete the test release in the mid-csp namespace, issue the command:

make k8s-uninstall-chart

Deployment of Mid CSP.LMC without the deployment of the PST

To deploy the Mid CSP.LMC without using the PST, a values file different from the default one has to be specified: set the variable VALUES_FILE to point to the values-cbf-only.yaml file.

make VALUES_FILE=charts/mid-csp-umbrella/values-cbf-only.yaml k8s-install-chart

Deployment of Mid CSP.LMC with CSP sub-system simulators devices

To deploy the Mid CSP.LMC with the simulators, a values file different from the default one has to be specified. Set the variable VALUES_FILE to point to the values-sim-devs.yaml file to deploy the system with the Mid CSP simulator devices for the sub-systems. The file values-sim-with-taranta.yaml can be used to also enable the deployment of Taranta.

To make use of the CSP sub-system simulators, the ska-csp-simulators chart has to be included in the umbrella chart:

make VALUES_FILE=charts/mid-csp-umbrella/values-sim-devs.yaml k8s-install-chart

In this case the list of deployed pods is the following one:

NAME                                       READY   STATUS    RESTARTS   AGE
databaseds-ds-tango-databaseds-0           1/1     Running   0          78s
databaseds-tangodb-tango-databaseds-0      1/1     Running   0          84s
ds-midcbfctrl-ctrl-0                       1/1     Running   0          44s
ds-midcbfsubarray-sub1-0                   1/1     Running   0          49s
ds-midcbfsubarray-sub2-0                   1/1     Running   0          42s
ds-midcbfsubarray-sub3-0                   1/1     Running   0          41s
ds-midcspcapabilityfsp01-capabilityfsp-0   1/1     Running   0          40s
ds-midcspcapabilityvcc01-capabilityvcc-0   1/1     Running   0          48s
ds-midcspcontroller-controller-0           1/1     Running   0          45s
ds-midcspsubarray01-subarray1-0            1/1     Running   0          46s
ds-midcspsubarray02-subarray2-0            1/1     Running   0          32s
ds-midcspsubarray03-subarray3-0            1/1     Running   0          52s
ds-midpssctrl-ctrl-0                       1/1     Running   0          47s
ds-midpsssubarray-sub1-0                   1/1     Running   0          51s
ds-midpsssubarray-sub2-0                   1/1     Running   0          50s
ds-midpsssubarray-sub3-0                   1/1     Running   0          52s
ds-tangotest-test-0                        1/1     Running   0          43s
ska-tango-base-itango-console              1/1     Running   0          85s

This deployment is used to test the Mid CSP.LMC system behavior:

  • with all the sub-systems, including those not yet developed, such as PSS and PST

  • when fault or anomalous conditions are injected in the simulated devices

To use a setup without PSS simulators (i.e. the one expected for AA 0.5), refer to the values-sim-devs-aa05.yaml file.

Further information on how to drive CSP simulators can be found in the documentation of ska-csp-simulators.

Deployment of Mid CSP.LMC with CBF-SDP emulator

It is possible to set the variable VALUES_FILE to point to the values-sdp-cbf-emulator.yaml file, to deploy the system with Mid CBF-SDP pods emulating the sending and receiving of the visibilities.

make VALUES_FILE=charts/mid-csp-umbrella/values-sdp-cbf-emulator.yaml k8s-install-chart

This will also trigger the installation of the other charts needed for this purpose: the receiver pod, a Persistent Volume and a Persistent Volume Claim. Their definitions are contained in the YAML files in the charts/mid-cbf-emulated folder. The installation commands are defined in the Makefile target k8s-post-install-chart.
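
To check that the volume resources have been created, for instance:

kubectl get pv,pvc -n mid-csp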

The list of deployed pods is the following:

NAME                                    READY   STATUS      RESTARTS   AGE
cbfmaster-master-0                      1/1     Running     0          6m40s
cbfmcs-mid-configuration-test-w4kbn     0/1     Completed   0          6m40s
cbfsubarray01-cbfsubarray-01-0          1/1     Running     0          6m40s
cbfsubarray02-cbfsubarray-02-0          1/1     Running     0          6m40s
cbfsubarray03-cbfsubarray-03-0          1/1     Running     0          6m40s
csp-lmc-configuration-test-mpf4k        0/1     Completed   0          6m40s
midcspcapabilityfsp01-capabilityfsp-0   1/1     Running     0          6m39s
midcspcontroller-controller-0           1/1     Running     0          6m40s
midcspsubarray01-subarray1-0            1/1     Running     0          6m40s
midcspsubarray02-subarray2-0            1/1     Running     0          6m40s
midcspsubarray03-subarray3-0            1/1     Running     0          6m40s
receiver-0                              1/1     Running     0          6m38s
ska-tango-base-itango-console           1/1     Running     0          6m40s
ska-tango-base-tangodb-0                1/1     Running     0          6m40s
tango-databaseds-0                      1/1     Running     0          6m39s
tangotest-test-0                        1/1     Running     0          6m40s
tangotest-test-config-f5gmw             0/1     Completed   0          6m40s

No tests are developed for this environment at the present time. However, a happy-path sequence of commands can be performed using an itango client, following the sequence described in the developer portal. Taranta is not yet supported.

For JSON command input, please use the following files from the tests/test_data folder:

  • AssignResources_CBF.json for AssignResources;

  • Configure_CBF_simulator.json for Configure;

  • Scan_CBF_simulator.json for Scan.
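
As a minimal sketch, the happy path can be driven from an itango session as follows (the subarray device name mid-csp/subarray/01 is an assumption based on SKA naming conventions, as are the relative file paths):

from tango import DeviceProxy

# assumption: the Mid CSP subarray device name follows SKA naming conventions
subarray = DeviceProxy("mid-csp/subarray/01")

# send the JSON payloads listed above, in the usual command order
with open("tests/test_data/AssignResources_CBF.json") as f:
    subarray.AssignResources(f.read())
with open("tests/test_data/Configure_CBF_simulator.json") as f:
    subarray.Configure(f.read())
with open("tests/test_data/Scan_CBF_simulator.json") as f:
    subarray.Scan(f.read())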

Once the scan is started on subarray 01, its progress can be followed in the log of the cbfsubarray01-cbfsubarray-01-0 pod, as well as in the log of receiver-0.

Run integration tests on a local k8s/minikube cluster

The project includes a set of BDD tests that can be run both with real and simulated TANGO Devices.

The tests with real devices are in the tests/integration folder, while those with simulators are in tests/simulated-system.

To run the tests on the local k8s cluster, deploy either the real or the simulated system (see above). To run integration tests with real devices, issue the following command from the root project directory:

make k8s-test

For integration tests with simulated devices, the default TEST_FOLDER variable has to be changed. To run those tests, issue the following command from the root project directory:

make TEST_FOLDER=simulated-system k8s-test

On test completion, uninstall the mid-csp-umbrella chart (make k8s-uninstall-chart).

Connect an interactive Tango Client to Mid CSP.LMC

To test Mid CSP.LMC functionalities, it is possible to connect an interactive TANGO client using the following tools: itango, Jupyter Notebook and Taranta. The following sections guide the user step by step.

itango

Just issue the command

kubectl exec -it ska-tango-base-itango-console -n mid-csp -- itango3

Command completion is enabled: just press <tab>.
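
From the itango prompt you can create a device proxy and inspect it; a minimal sketch (the controller device name mid-csp/control/0 is an assumption based on SKA naming conventions):

from tango import DeviceProxy

# assumption: the Mid CSP controller device name follows SKA naming conventions
controller = DeviceProxy("mid-csp/control/0")
print(controller.State())    # current TANGO device state
print(controller.adminMode)  # adminMode attribute from the SKA base classes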

Taranta

To monitor and control the Mid CSP.LMC via a GUI, the Mid project provides a set of Taranta dashboards: they can be found in the resources/taranta_dashboards folder.

To deploy the Mid CSP.LMC with the support of Taranta, use the following values files:

make VALUES_FILE=charts/mid-csp-umbrella/values-taranta.yaml k8s-install-chart

to work with the real CBF sub-system, or

make VALUES_FILE=charts/mid-csp-umbrella/values-sim-with-taranta.yaml k8s-install-chart

to work with the CSP sub-systems simulators.

Start the Taranta dashboard

To start and use it, execute the following steps:

Open a browser (preferably Chrome) and specify the URL:

192.168.49.2/mid-csp/taranta/devices 

or

minikube/mid-csp/taranta/devices.

If successful, the browser displays the Taranta devices page.

Login is required to issue any command on the system. Press the Login button (top right) and specify your team credentials (https://developer.skao.int/projects/ska-tango-taranta-suite/en/latest/taranta_users.html), using capital letters. In general these are:
uid: TeamName
pwd: TeamName_SKA

To load a Taranta Dashboard:

  • click on the Dashboards button (top left)

  • click on the ‘Import Dashboard’ button

  • select one of the available dashboards in the resources/taranta_dashboards folder

Run the dashboard by pressing the ‘Start’ button on the top left of the page and enjoy the tour!

If minikube is running on a remote machine, you can still access Taranta via ssh redirection. In another terminal, give:

ssh -L 8081:192.168.49.2:80 <user>@<remote_machine>

And point your browser to

http://localhost:8081/mid-csp/taranta/dashboard

The Taranta dashboards to control Mid CSP.LMC can be found at /resources/taranta_dashboards.

JupyterHub

It is possible to have JupyterHub as a client in a local (minikube) environment. The first thing to do is to include the corresponding Helm chart in the deployment. To do this:

make k8s-install-chart VALUES_FILE=charts/mid-csp-umbrella/values-jupyter.yaml

A few pods are added to the deployment. They should look like the following:

NAME                                    READY   STATUS              RESTARTS   AGE
continuous-image-puller-mqwcw           1/1     Running             0          38s
hub-5d84f6dffd-qtzlh                    1/1     Running             0          38s
user-scheduler-8f6d6d4c6-cwqg5          1/1     Running             0          38s
user-scheduler-8f6d6d4c6-kwd7s          1/1     Running             0          38s

This also adds the following services (to access them, run kubectl get svc -n mid-csp):

NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
hub          ClusterIP      10.107.242.98    <none>           8081/TCP       2m12s
proxy-api    ClusterIP      10.104.177.160   <none>           8001/TCP       2m12s
proxy-public LoadBalancer   10.104.155.147   192.168.49.97    80:30492/TCP   2m12s

In particular, proxy-public is a LoadBalancer that exposes an external IP (192.168.49.97 in the above example) to access JupyterHub (note: this can change with each deployment).

To open JupyterHub, navigate to http://<proxy-public-external-ip>/hub in a web browser.
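
The external IP can be retrieved, for instance, with:

kubectl get svc proxy-public -n mid-csp -o jsonpath='{.status.loadBalancer.ingress[0].ip}'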

If the deployment is on a remote machine accessed via ssh, port forwarding is needed, as for Taranta:

ssh -L 8081:<proxy-public-external-ip>:80 <user>@<remote_machine>

And point your browser to

http://localhost:8081/hub

Username and password can be anything. Please note that a pod will be created for each username, so in case of multiple logins it is suggested to use the same one (while the password can be changed every time).

A collection of notebooks to be run is in notebooks.

Use a local version of ska-csp-lmc-common

During development it can be useful to test local changes to ska-csp-lmc-common before releasing a new version of it. This is possible via some Makefile targets that act on the pyproject.toml and poetry.lock files.

Note: ska-csp-lmc-common folder must be in the parent directory of ska-csp-lmc-mid.

To set poetry for the installation of local ska-csp-lmc-common:

make pre-local-install-common

while to restore the original files:

make post-local-install-common
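
Conceptually, pre-local-install-common replaces the released dependency with a path dependency in pyproject.toml; an illustrative fragment (the actual edit is performed by the Makefile and may differ):

# illustrative pyproject.toml fragment, not the exact Makefile output
[tool.poetry.dependencies]
ska-csp-lmc-common = { path = "../ska-csp-lmc-common", develop = true }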

These commands are integrated into other targets in order to simplify the procedure. They are presented in the following.

To build the image with the local ska-csp-lmc-common:

make local-oci-build

This will automatically change the poetry files and restore them after the installation. After the image is built, integration tests can be performed as usual.

To perform unit tests, the following command opens a shell in a container with the local ska-csp-lmc-common already installed:

make dev-container

After launching this command, the tests can be performed as usual:

make python-test

Linting with the local package can also be performed in the same container.

Known bugs

Troubleshooting

If the command

kubectl logs -f pod/<podname> -n mid-csp

aborts with failed to watch file <name-json.log>: no space left on device, you can fix it by connecting to the k8s node and raising the inotify watch limit used for log files:

 $ ssh 192.168.49.2 -l root  <psw root>
 $ sysctl fs.inotify.max_user_watches=1048576
 $ sysctl fs.inotify.max_user_watches
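
With minikube, the same setting can likely be applied without logging in as root, for instance:

minikube ssh -- sudo sysctl fs.inotify.max_user_watches=1048576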

If the configuration pods give a lot of errors, and the TangoDB pod gives the following message: [Warning] Aborted connection 3 to db: ‘unconnected’ user: ‘unauthenticated’ host: ‘172.17.0.1’ (This connection closed normally without authentication)

then, before making a deployment, you need to issue:

unset TANGO_HOST

License

See the LICENSE file for details.