# Quick start
This repository contains tools and templates that help DPPS subsystem developers define their subsystems as Helm charts and test them locally and in GitLab CI/CD.
To make deployments reproducible in all DPPS environments, it is important that they are automated as much as possible. This means that `helm install` (or `helm upgrade`) should be sufficient to install (or upgrade) the application, including the necessary bootstrap, migration, and self-test tasks.
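For illustration, a single idempotent command of the following shape covers both the install and upgrade cases; the release name and chart path here are placeholders, not the actual DPPS chart:

```bash
# Hypothetical example: one idempotent command installs on the first run
# and upgrades on subsequent runs.
helm upgrade --install my-subsystem ./chart --namespace my-subsystem --create-namespace --wait
```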
## Initializing a new project

### Adding the AIV toolkit
Add the AIV toolkit as a submodule to your repository. Beware that you need to use a relative path; the following snippet computes it from your `origin` remote URL:

```bash
python -c 'from pathlib import Path; import subprocess as sp; print(Path("/cta-computing/dpps/aiv/dpps-aiv-toolkit.git").relative_to("/" + sp.check_output(["git", "remote", "get-url", "origin"]).decode().split(":")[1], walk_up=True))'
```
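For example, for a repository cloned from `cta-computing/dpps/workload/wms`, the snippet above prints `../../aiv/dpps-aiv-toolkit.git` (note that it requires Python ≥ 3.12 for `walk_up`), so the submodule would be added with:

```bash
# Example for a repository under cta-computing/dpps/workload/;
# git resolves the relative URL against your origin remote.
git submodule add ../../aiv/dpps-aiv-toolkit.git dpps-aiv-toolkit
```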
Subsequent commands and pipeline jobs assume that the repository root contains a `Makefile` which includes `dpps-aiv-toolkit/Makefile`. To create this `Makefile`, please consider this example.
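A minimal sketch of such a `Makefile` (the full example may define additional variables or targets):

```makefile
# Pull in the shared toolkit targets (the default target, install-chart, etc.).
include dpps-aiv-toolkit/Makefile
```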
### Initializing configuration
You will need to create an `aiv-config.yaml` configuration file for the AIV toolkit. The recommended way is to copy the example and modify it. See further details in AIV Config.
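To give a flavor, such a file might look like the following; every key below is a made-up placeholder, so copy the provided example rather than starting from this:

```yaml
# All keys here are illustrative placeholders -- consult the AIV Config
# section for the actual schema.
chart_path: ./chart
images:
  - name: my-subsystem
    dockerfile: Dockerfile
```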
### Including toolkit CI pipelines
The toolkit provides a set of reusable CI pipelines for building and testing DPPS artifacts (Docker images, CWL files, Helm charts, Python packages, etc.). These pipelines execute the same tasks as the local development workflow.
For details about the CI pipelines and their configuration, see CI Pipelines.
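Wiring them in typically amounts to an `include:` statement in your `.gitlab-ci.yml`; the file name below is an assumption, so check CI Pipelines for the authoritative one:

```yaml
# Hypothetical include of the toolkit's CI definitions; the project path
# matches the submodule URL above, the file name is an assumption.
include:
  - project: cta-computing/dpps/aiv/dpps-aiv-toolkit
    file: gitlab-ci.yml
```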
## Adding artifacts

### Python package and documentation
The root of the repository should contain a `pyproject.toml` file, which defines the Python package, following the python-project-template. The template also includes an example of the documentation. See more details in the Publish Python Packages section.
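A minimal sketch of such a file (all metadata below is placeholder; the python-project-template is the authoritative reference):

```toml
# Placeholder metadata -- follow the python-project-template for the real layout.
[build-system]
requires = ["setuptools>=65"]
build-backend = "setuptools.build_meta"

[project]
name = "my-subsystem"
version = "0.1.0"
requires-python = ">=3.11"
```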
### Adding Docker images
It is possible to add multiple Docker images to the project; see the Build Docker Images section for details.
### Adding CWL artifacts
Many projects, especially DPPS pipelines, use CWL files to define their artifacts. The AIV toolkit can help to test these artifacts locally; see the CWL section for details.
### Adding Helm charts
Projects that deploy their artifacts to Kubernetes clusters can use Helm charts. See the Tests section for details on how to test Helm charts.
## Usage in local development

### Starting fresh
If you experience any issues with local development, please reproduce them by starting from a fresh clone of the repository, without any pre-existing Kubernetes cluster.
If you cloned a repository with the DPPS AIV Toolkit included, do not forget to update the submodules, for example:
```bash
git clone git@gitlab.cta-observatory.org:cta-computing/dpps/workload/wms.git
cd wms
git submodule update --init --recursive
```
### Installing or synchronizing the toolkit package
The current version of the DPPS AIV Toolkit is available as a Python package. It's best to use the version included in the repository. We recommend using uv for this, as it allows you to install Python packages in isolated environments:
```bash
uv tool install ./dpps-aiv-toolkit
```
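If the submodule is updated later, the installed tool can drift from the checked-out version; re-running the install with `--force` brings it back in sync:

```bash
# Reinstall so the tool matches the currently checked-out submodule.
uv tool install --force ./dpps-aiv-toolkit
```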
### One-shot startup
You can create a local kind Kubernetes cluster (see the kind documentation for more information), install the Helm chart, create the test container, and attach your shell to it with a single command:

```bash
make
```
This command may take some time, depending on your network speed and the size of the Helm chart. Note that we currently expect all DPPS images to be publicly available in the Harbor registry.

If you re-run the command in the same shell, you will reuse the same kind cluster and upgrade (rather than re-install) the Helm chart. This can be desirable for faster development.
### Cleaning up
If you want to start from scratch before delivering the change, you can run:

```bash
make destroy-k8s-cluster
```
## Advanced
### Inspecting the cluster with `kubectl`
If you want to inspect the state of your application, just use `kubectl` as usual. If you do not have it installed, you can use the one provided by the toolkit (available after running `make`):

```bash
KUBECONFIG=.toolkit/kubeconfig.yaml .toolkit/bin/kubectl get pods
```
You can also export the cluster access into your current kubeconfig:

```bash
make export-kubeconfig
```

This will allow you to use the `kubectl` command without specifying the `KUBECONFIG` variable:

```bash
kubectl get pods
```
### Installing the chart without running the test container
In some cases it may be convenient to install the Helm chart without running the test containers. You will first need to build and load your development image into the kind cluster:

```bash
make load-dev-docker
```
If your chart uses additional local images, you can build and load them as well, for example:

```bash
make build-dev-server-images
```
Then you can install the chart:

```bash
make install-chart
```
### Setting up ingress
TBD
### Caching container images
If you find yourself recreating the local cluster often, you can greatly accelerate this process by using a local container image mirror:

```bash
make registry-mirror
```
## Troubleshooting
### After using kind, I cannot access my normal cluster (e.g. the DESY Test Cluster) anymore
This may be because kind modifies the configuration for accessing Kubernetes (your kubeconfig) when it creates the cluster. But don't worry, your previous context is still there! To view all contexts:

```bash
kubectl config get-contexts
```
Then you can switch back to your previous context, for example:

```bash
kubectl config use-context local
```