# vis-receive

The aim of this chart is to deploy the visibility receive processes. It is implemented as a StatefulSet to provide stable, DNS-based network identities.

Currently, it is mainly used with the vis-receive script.
## Values

The following sections detail the configuration values that can be given to this chart.
### General

| Name | Description | Default |
|------|-------------|---------|
| | Overrides the default-constructed name for some of the Kubernetes elements (pod, service, etc.) | |
| | Overrides the default-constructed fullname for some of the Kubernetes elements (pod, service, etc.) | |
| | Name of the SDP script that launched this chart | |
Pod-level settings
These are useful to pin receivers to specific Kubernetes nodes,
and to make use of native network equipment.
If specified at the root level then settings will apply to all pods.
If specified as elements of the podSettings
array, then
as many pods as there are entries in the array will be started,
overriding any values specified at the root level.
Both keys are optional, if you want to startup multiple pods with
no additional configuration specify an array with empty objects
(e.g. podSettings: [{}, {}]
). By default a single pod will be
started with no extra configuration.
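For instance, a sketch of pinning two receiver pods to specific nodes. Here `nodeSelector` is assumed to be one of the supported per-pod keys; check the chart's values.yaml for the exact key names:

```yaml
# Two pods, each pinned to a different node (key names are assumptions)
podSettings:
  - nodeSelector:
      kubernetes.io/hostname: worker-1
  - nodeSelector:
      kubernetes.io/hostname: worker-2
```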
| Name | Description | Default |
|------|-------------|---------|
| | Additional key/value items to append to a Pod's metadata | |
| | Key/value items to use to select a node where the receiver pods will run | |
| | Kubernetes resource specification applied to all receiver containers | |
| | Kubernetes | |
### Global environment variables

| Name | Description | Default |
|------|-------------|---------|
| | Environment variables to be added to all containers. | |
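A hedged sketch, assuming the value key is named `env` and follows the standard Kubernetes environment-variable list format; the exact key name is defined in the chart's values.yaml:

```yaml
# Hypothetical key name; variable below is illustrative only
env:
  - name: LOG_LEVEL
    value: "debug"
```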
### Data Product Storage

The SDP provides a Persistent Volume Claim (PVC) to all SDP scripts for them to read/write data products; we refer to this PVC as the "Data Product Storage". This PVC is required by this chart in order to function correctly. It is mounted into all containers under the configured mount path, which is also communicated to containers through the `SDP_DATA_PVC_MOUNT_PATH` and `DATA_PRODUCT_PVC_MOUNT_PATH` environment variables.
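For example, to use a PVC named `receive-data` mounted at `/mnt/data` (the same values used in the Minikube testing walkthrough later on this page):

```yaml
data-product-storage:
  name: receive-data      # PVC holding the data products
  mountPath: /mnt/data    # exposed via SDP_DATA_PVC_MOUNT_PATH
```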
| Name | Description | Default |
|------|-------------|---------|
| | Name of the PVC where data products are stored. | |
| | Path under which this PVC will be mounted in containers that need it. This value is exposed to containers via the environment variables mentioned above. | |
### Init containers

Arbitrary init containers that users may want to add for pre-receiver setup actions. Init containers have access to all volumes defined in the receiver pod, so they can read/write data from/into them as necessary.
| Name | Description | Default |
|------|-------------|---------|
| | List of custom init containers to run in the chart before the receiver starts. | |
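As a sketch, a custom init container that prepares a scratch directory on the data PVC; the container name and command below are illustrative, not part of the chart:

```yaml
initContainers:
  - name: prepare-scratch   # hypothetical example
    image: artefact.skao.int/ska-sdp-realtime-receive-modules
    version: 3.3.0
    command:
      - /bin/bash
      - -c
      - mkdir -p ${SDP_DATA_PVC_MOUNT_PATH}/scratch
```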
### Receiver settings

One or more receiver containers will be spawned depending on the `instances` value, and the given total number of `streams` will be spread across them as evenly as possible, covering a continuous port range starting at `port_start`. For example:

```yaml
receiver:
  instances: 4
  streams: 40
  port_start: 4500
```

This will construct 4 containers, each listening on 10 ports, with starting ports at 4500, 4510, 4520 and 4530.
Note that launching more receiver instances (i.e., containers/processes) allows a single pod to deal with more channels, at the expense of splitting a frequency range that would otherwise be received by a single receiver. This will result in more RPC calls via Plasma to processors down the line.
For each receiver, the command is built internally from the given settings, with any extra options appended at the end. Additional options must be given as a two-level dictionary: the first level names option groups, and the second level the individual options within each group. All volumes are available to all receiver containers; hence they can all connect to the same Plasma store.
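As a concrete sketch of the two-level layout, the options below form two groups, `reception` and `consumer` (the same groups used in the Minikube testing example later on this page):

```yaml
receiver:
  options:
    reception:                        # first level: option group
      datamodel: /mnt/data/input.ms   # second level: individual option
    consumer:
      name: mswriter
```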
| Name | Description | Default |
|------|-------------|---------|
| | The image to use to run the receiver containers. | |
| | The version of the image used to start the receiver containers. | |
| | The executable used to launch the receiver. | |
| | The number of streams (ports) this receiver should listen to. | |
| | The port on which the first stream will be opened. All streams are opened in a consecutive block of ports. | |
| | The number of receiver instances to launch (as separate containers) to cover all streams. | |
| | Whether receiver logging should be more or less verbose. | |
| | Further options to pass down to all receivers. See the receiver documentation for available options. | |
| | The file to be created by the receiver to signal that it's ready to receive data. | |
| | The initial delay, in seconds, that Kubernetes should wait before the first readiness probe is issued. | |
| | The period between readiness probes, in seconds. | |
### Plasma settings

If enabled, a Plasma store container with the given image will be launched. Additionally, two in-memory volumes will be defined: one that stores the Plasma socket, and one used as the in-memory filesystem backing the object store itself. The latter is always mounted under `/dev/shm` when used by the different containers.
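A minimal sketch of enabling Plasma, assuming the value keys are named `enabled` and `memory`; the exact key names are defined in the chart's values.yaml:

```yaml
# Key names are assumptions -- check values.yaml
plasma:
  enabled: true
  memory: 1000000000   # bytes allocated to the in-memory object store
```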
| Name | Description | Default |
|------|-------------|---------|
| | Whether Plasma support should be enabled in this chart. | |
| | The image to use to start the Plasma store container. | |
| | The version of the image used to start the Plasma store container. | |
| | The executable used to launch the Plasma store. | |
| | The amount of memory the store should allocate for use, in bytes. | |
| | The path under which the volume containing the Plasma UNIX socket should be mounted in containers needing access to Plasma. | |
### Processors

Processors are given as a list of containers. The SDP Data PVC and the automatic Plasma volumes are always mounted onto these containers.
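A hypothetical processor entry as a sketch; the container name, image and command below are placeholders, not part of the chart:

```yaml
processors:
  - name: my-processor             # placeholder container spec
    image: my-registry/my-processor:1.0
    command: ["run-processor"]     # the SDP Data PVC and Plasma volumes
                                   # are mounted automatically
```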
| Name | Description | Default |
|------|-------------|---------|
| | List of container specifications for user-defined Plasma processors. | |
### Extra containers

Like `processors`, but they don't get the Plasma volumes automatically mounted onto them.

| Name | Description | Default |
|------|-------------|---------|
| | List of container specifications for user-defined extra containers. | |
## Testing vis-receive in Minikube

To test the `vis-receive` chart in Minikube, first start Minikube:

```shell
minikube start
```

Then clone this repository and go into the `charts` subdirectory:

```shell
$> git clone https://gitlab.com/ska-telescope/sdp/ska-sdp-helmdeploy-charts
$> cd ska-sdp-helmdeploy-charts/charts
```

A Storage Class and a Persistent Volume Claim will be needed. Create two files, `storage_class.yaml` and `pvc.yaml`, with their specifications:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations: {}
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: local-minikube-storage-class
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: receive-data
spec:
  storageClassName: local-minikube-storage-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
Now create a new namespace called `receiver`, the Storage Class, and the Persistent Volume Claim:

```shell
$> kubectl create namespace receiver
$> kubectl create -f storage_class.yaml
$> kubectl create -f pvc.yaml -n receiver
```
Create a `test.yaml` file with the following values, which will be used to configure the vis-receive chart:
```yaml
data-product-storage:
  name: receive-data
  mountPath: /mnt/data
initContainers:
  - name: download-input-ms
    image: artefact.skao.int/ska-sdp-realtime-receive-modules
    version: 3.3.0
    command:
      - /bin/bash
      - -c
      - |
        set -o errexit
        ls -altr ${SDP_DATA_PVC_MOUNT_PATH}
        test ! -d ${SDP_DATA_PVC_MOUNT_PATH}/input.ms || exit 0
        curl -o /dev/stdout https://gitlab.com/ska-telescope/ska-sdp-cbf-emulator/-/raw/master/data/50000ch-vis.ms.tar.gz?inline=false | tar xz
        mv 50000ch-vis.ms ${SDP_DATA_PVC_MOUNT_PATH}/input.ms
        ls -altr ${SDP_DATA_PVC_MOUNT_PATH}
receiver:
  port_start: 41000
  options:
    reception:
      datamodel: /mnt/data/input.ms
    consumer:
      name: mswriter
      outputfilename: output.ms
```
Install the chart:

```shell
$> helm install recv vis-receive -n receiver -f test.yaml
```
The deployment can be monitored using k9s or by running:

```shell
$> kubectl get all -n receiver
NAME                        READY   STATUS    RESTARTS   AGE
pod/recv-vis-receive-00-0   2/2     Running   0          27s

NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/recv-vis-receive   ClusterIP   None         <none>        <none>    27s

NAME                                   READY   AGE
statefulset.apps/recv-vis-receive-00   1/1     27s
```
which shows the receive pod and the network service. Once the vis-receive pod enters the running state, we can check if the container has been deployed correctly by running:

```shell
$> kubectl logs pod/recv-vis-receive-00-0 -c receiver-00 -n receiver
```

If the receive application has been successfully launched, this will show a `Ready to receive data` message.