Development

We use Poetry to manage all of the dependencies for this system. We also provide the poetry.lock file so that every build uses a consistent set of dependencies.

For local development we provide a docker-compose.yaml file for easy setup, along with extra make commands for convenience.

There are two main ways of developing: inside Docker, or directly on the host.

Warning

The basic usage listed here does not connect to the config DB; refer to Config Database Usage for how to connect to the database during local testing.

If the API is not connected to the config DB, some endpoints will show blank data, and the tasks will subscribe using the regex version of the topics list instead of the subarray list of topics.

Docker Development

For developing directly inside Docker you can run make build and then make run, which will start up all the required services.
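
For example:

make build
make run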

On the API side, changes are picked up on file save and the API restarts to apply them. The background tasks do not do this; their Docker container must be rebuilt and restarted for changes to take effect.

To rebuild the cron image, run docker compose build cron and then docker compose up -d cron.
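
That is:

docker compose build cron
docker compose up -d cron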

For this reason it is usually much simpler to develop directly on the host machine.

Host Development

For developing directly on the host machine you will still need Kafka and Redis running. A helper command is provided to start these services: make run-services.

Note

The Makefile defines the variables REDIS_HOST, BROKER_INSTANCE, and NAMESPACE to make running the code locally much easier.

There are also helper targets to run any currently supported command, for example:

  • make run-services -> brings up Kafka and Redis

  • make run-display-api -> runs the API locally, using the above variables to connect to the services running in Docker.

  • make run-task-* -> runs the specified task using the above variables.

Once the above services are running, the API can be started with make run-display-api (this also installs the Poetry dependencies and runs inside a new shell).
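
A typical host-development session therefore looks like:

make run-services
make run-display-api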

To run any of the cron commands you need to be in a Poetry shell, which you can enter by running poetry shell. Then run poetry install to install the commands, after which you can run them directly.
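
In other words:

poetry shell
poetry install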

For a list of tasks, check the tool.poetry.scripts section of pyproject.toml.
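
As a purely illustrative sketch (the real entry points and module paths are those defined in the repo’s pyproject.toml), an entry in that section has the form:

[tool.poetry.scripts]
# hypothetical module path, shown only to illustrate how the task commands are wired up
signal-task-process-visibility-receive = "ska_sdp_qa.tasks.visibility_receive:main"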

A word of warning: you will need to set the Kafka and Redis environment variables. To do this, prefix any command you run with BROKER_INSTANCE=localhost:29092 for Kafka and REDIS_HOST=localhost for Redis.

As an example, to run the Visibility Receive task, run the following:

BROKER_INSTANCE=localhost:29092 REDIS_HOST=localhost signal-task-process-visibility-receive --verbose

The standard arguments that can be added are:

  • --verbose or -v - sets the logging level to INFO

  • --debug or -d - sets the logging level to DEBUG

Kubernetes Development

We will split this into developing for SDP and for Signal Displays.

SDP Chart

If you are making changes to the Signal Display Chart, you will also need the SDP integration repo checked out locally.

To link the SDP chart to your local Signal Display Chart, update the file charts/ska-sdp/Chart.yaml:

dependencies:
- name: ska-sdp-qa
  version: 0.25.0
  repository: ../../../ska-sdp-qa-data-api/charts/ska-sdp-qa
  condition: ska-sdp-qa.enabled

Then set the required values options to choose the versions of the Docker images you want to use:

ska-sdp-qa:
  api:
    container: artefact.skao.int/ska-sdp-qa-data-api
    version: latest
  display:
    container: artefact.skao.int/ska-sdp-qa-display
    version: latest

Push the Docker images for the services into the Kubernetes cluster (if you are using a local link for the containers) or to GitLab (if you are using the GitLab repo); the images will then be available to the Helm chart.

Once that is done you can install the SDP chart using helm install ..., and the Signal API changes will be deployed as well.
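
A minimal sketch, assuming a release name of test and substituting your own namespace and values file:

helm install test ./charts/ska-sdp --namespace <the namespace> --values my-values.yaml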

Signal Display Chart

Use this if you are only developing for the Signal Display Chart, and will not be using the SDP chart’s resources.

Set the following values options:

api:
  container: artefact.skao.int/ska-sdp-qa-data-api
  version: latest
display:
  container: artefact.skao.int/ska-sdp-qa-display
  version: latest

Push the Docker images for the services into the Kubernetes cluster (if you are using a local link for the containers) or to GitLab (if you are using the GitLab repo); the images will then be available to the Helm chart.

Once the images are in the right place, you can install the chart using helm install ..., which will deploy the chart with your new images.
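
Again a minimal sketch, assuming a release name of qa-display and substituting your own namespace and values file:

helm install qa-display ./charts/ska-sdp-qa --namespace <the namespace> --values my-values.yaml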

Config Database Usage

There isn’t an easy way to run this using the Docker Compose setup, so it is advised to have SDP installed in a Kubernetes cluster where you have access to kubectl, and then run the API and tasks locally on your machine.

For documentation on the Config DB library itself, refer to that library’s own documentation.

By default the Config DB library connects to 127.0.0.1:2379. To change this, set the environment variables:

  • SDP_CONFIG_HOST - to the correct host

  • SDP_CONFIG_PORT - to the correct port

The Helm chart has these variables in the values file.

To connect your local instance to a remote (Kubernetes) instance use:

kubectl port-forward --namespace <the namespace> service/ska-sdp-etcd 2379:2379

Or if you have the repo locally you can use:

make config-db-tunnel NAMESPACE=<the namespace>
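
With the tunnel in place the defaults (127.0.0.1:2379) already point at the forwarded service, so a local task run only needs the Kafka and Redis variables from earlier, for example:

BROKER_INSTANCE=localhost:29092 REDIS_HOST=localhost signal-task-process-visibility-receive --verbose

If the config DB is exposed on a different host or port, prefix the command with SDP_CONFIG_HOST and SDP_CONFIG_PORT as well.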