CONT - Containerisation

Project Links
Software GitHub Repository https://github.com/ds2-eu/containerisation
Progress GitHub Project https://github.com/orgs/ds2-eu/projects/7

General Description

The Containerisation module enables easy, automated packaging and deployment of modules on the IDT Kubernetes runtime subcomponent environment. It leverages custom Helm Chart descriptors, automatically converting them into full Kubernetes Helm Charts representing the module, based on standard base templates located in the DS2 Portal Marketplace. The resulting Helm Charts are then deployed on the IDT Module.

The Containerisation module is a core module of the IDT Broker module that enables deployment of all DS2 modules in the IDT Broker Kubernetes sub-component. It uses standard Helm Chart base templates that describe a DS2 module. These templates are provisioned by the IDT Broker module, define the standard for DS2 module deployment in the IDT Broker, and are stored in the DS2 Portal Marketplace. When module developers upload a DS2 module to the DS2 Portal Marketplace, they must provide a custom Helm Chart descriptor that supplies values for those base templates. The Containerisation module combines the descriptor with the base templates to create the Helm Chart for the DS2 module at deployment time on the IDT Broker.

The Containerisation module can work in two different modes:

  • The standard DS2 working mode: developers upload the module's Helm Chart descriptor to the DS2 Portal Marketplace. Participants use the IDT Broker Kubernetes UI to deploy the descriptor on the IDT. The Containerisation module is triggered when it detects the deployment of that descriptor: it retrieves the base templates from the DS2 Portal Marketplace, creates the full Helm Chart and deploys it on the IDT Kubernetes Runtime sub-component
  • The GitOps way: deployment of the Helm Chart descriptor is triggered automatically by the Source Controller sub-component upon detecting a change to the descriptor in the DS2 Portal Marketplace. Then, as in the previous mode, the Containerisation module creates the full Helm Chart and deploys it on the IDT. This could be the deployment mode of the DS2 Portal

In both cases, the only difference is how the Helm Chart descriptor is deployed on the IDT: either the participant deploys the descriptor manually, or the Source Controller sub-component deploys it automatically. A minimal manual-deployment sketch follows below.
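For illustration, a minimal sketch of the manual path using standard kubectl (mentioned for the Source Controller further below), assuming a descriptor file named helmrelease-mymodule.yaml and a target namespace mymodule, both hypothetical names:

    # Apply the Helm Chart descriptor manually; the Containerisation
    # controllers detect it and expand it into a full Helm Chart.
    kubectl create namespace mymodule
    kubectl apply -f helmrelease-mymodule.yaml -n mymodule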

Architecture

The figure below represents how the module fits into the DS-DS environment. [Figure: DS2 CONT Architecture fit]

The figure below represents the actors, internal structure, primary sub-components, primary DS2 module interfaces, and primary other interfaces of the module. [Figure: DS2 CONT Architecture]

Component Definition

This module has the following subcomponents and other functions:

  • ChartController: The ChartController is a Kubernetes controller, following the Kubernetes controller pattern, which keeps track of a new Kubernetes custom resource definition, the “HelmChartDescriptor”. When changes are detected on a descriptor (i.e. an addition or update), the Controller connects to a configured location (e.g. a GitHub repository) to download the corresponding Helm Chart base templates. Then, together with the HelmChartDescriptor, the ChartController creates a full Helm Chart describing the module. This Helm Chart is deployed into the IDT Kubernetes Runtime subcomponent using the Installer component.

  • ChartManager: The ChartManager is mainly used to monitor the Helm Charts and HelmChartDescriptors deployed in the system. It queries the IDT Module’s Kubernetes subcomponent to retrieve the current Charts and descriptors. The ChartManager can also be used to create a HelmChartDescriptor from input parameters and install it via the Installer component. Once installed, the ChartController detects the new ChartDescriptor and converts it into a Chart, deploying it back into the IDT Module’s Kubernetes subcomponent.

  • Installer: This is the component responsible for installing Helm Charts and HelmChartDescriptors in the IDT Kubernetes subcomponent. It will receive the corresponding Charts and HelmChartDescriptors and will apply them in the IDT Kubernetes subcomponent. The Installer also takes care of installing new Sources created by the Source Manager component.

  • Containerisation UI: This is the main module UI, which allows users to monitor the Charts, ChartDescriptors and Sources currently in the system. Users get an overview of what is installed in the system and the current status of those specific resources. The UI can also be used to create, update or delete ChartDescriptors via the ChartManager and Sources via the Source Manager.

  • GitOps Source Controller: The Source Controller, similar to the ChartController, is a Kubernetes controller that keeps track of the Source custom resource definition. A Source mainly represents a reference to a repository where ChartDescriptors are stored. The Source Controller monitors the status of each Source and reacts to changes by reflecting them in the IDT Kubernetes subcomponent. The Source Controller is an optional subcomponent; users can instead install ChartDescriptors via the IDT or with standard Kubernetes kubectl.

  • (DS2) GitOps Source Manager: The Source Manager, similar to the ChartManager, is mainly used to monitor the Sources in the system and is customised for DS2. It can also be used to create, update and delete sources, which are installed via the Installer component. Like the Source Controller, this is an optional component.

  • Tier 1 Service Stack for Marketplace and deployment and API: The full stack will be implemented as generically described elsewhere in this document. Exceptions: this module runs in the IDT and uses the IDT Kubernetes subcomponent for Chart and ChartDescriptor installations. The DS2 Portal Marketplace component and its repository system are used to store the Chart base templates. Since the DS2 Portal is itself a DS2 module deployed and run on the IDT, the Containerisation module can also be used for the DS2 Portal and other intermediary services.
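As a quick illustration of what these subcomponents manage, the sketch below lists the underlying resources; it assumes the controllers are already installed (see How To Install) and that standard kubectl and helm clients are available:

    # Helm Chart descriptors tracked by the ChartController (Flux HelmRelease resources)
    kubectl get helmreleases -A
    # Sources tracked by the Source Controller (Flux GitRepository resources)
    kubectl get gitrepositories -A
    # Full Helm Charts created and installed via the Installer
    helm ls -A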

Screenshots

Development of the Containerisation UI has not yet started, so there are no screenshots yet.

Commercial Information

Organisation(s)   License Nature   License
ICE               Open Source      Apache 2.0

Top Features

  1. Kubernetes Native: The CONT module is a Kubernetes-native solution based on the open-source system Flux and the GitOps Toolkit
  2. Kubernetes Application Deployment Control: Provides Kubernetes administrators or SREs with control over what is deployed, and how, on a given Kubernetes cluster (i.e. the IDT)
  3. Kubernetes Application Abstraction: The CONT module abstracts developers from Kubernetes complexity when creating a Kubernetes application, leveraging a templating system based on Helm Charts and the Flux HelmRelease CRD
  4. Helm Chart Templates: Ability to create Helm Chart templates for different types of applications
  5. HelmRelease CRD Templates: Based on the HelmRelease CRD from Flux, the CONT module enables the creation of HelmRelease templates that make use of the Helm Chart Templates
  6. Application Management using API: Manage application (module) lifecycle (create, install, uninstall, delete) using the CONT Chart Manager API
  7. Containerisation UI: Manage the Containerisation module using a modern web-based UI
  8. Operator vs Developer View: Access the Containerisation module features with different views depending on the role (Operator vs Developer)

How To Install

The Containerisation module will be part of the IDT installation, but for now a standalone installer is provided so the module can be used on its own. It installs the Flux Helm and Source Controllers, which create the full Helm Chart from the HelmRelease and Chart templates.

Requirements

The IDT or a Kubernetes cluster, plus Helm, is required. This component cannot be installed without a Kubernetes cluster; indeed, the purpose of the component is to deploy modules on the Kubernetes cluster.
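A quick way to verify these prerequisites, assuming kubectl and helm are already on the PATH:

    # Confirm a reachable Kubernetes cluster and an installed Helm client
    kubectl cluster-info
    kubectl version
    helm version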

Software

So far, the Containerisation module installs the following software utilities, in specific tested compatible versions:

  • Flux Helm Controller (Chart Controller + Installer)
  • Flux Source Controller (Source Controller)
  • Flux Notification Controller (default)
  • Flux Kustomization Controller (default)

To Be Implemented:

  • Chart Manager
  • Source Manager
  • Containerisation UI

Summary of installation steps

  1. Clone the repo containerisation

  2. Create the platform side configuration

  3. Install the Containerisation module by running the installfluxghorg.sh script

Detailed steps

  1. Clone the repo containerisation

    git clone https://github.com/ds2-eu/containerisation
    

  2. Create the platform configuration by running the kubernetes_configuration.sh script. The Helm templates used by the Containerisation module rely on a platform configmap and secret, created in this step, which contain the platform configuration

    ./kubernetes_configuration.sh imagepath github_user github_token organisation_domain namespace
    

    imagepath: The registry path where the Docker images are stored, e.g. for ghcr.io/ds2-eu/ds2charts/image the imagepath would be ds2-eu/ds2charts

    github_user: The admin user for the customer organisation registered in the DS2 Portal

    github_token: The admin user token retrieved from the DS2 Portal

    organisation_domain: The user organisation domain for the module URLs, e.g. customer.domain.com

    namespace: The namespace where the configmap and secret are created, e.g. icekube

    An example of running the script

    ./kubernetes_configuration.sh ds2-eu/ds2charts user ghp_qRoTB9z1w1xa3ki4hzrl5Vw9bQPd82pkaxU4 192-168-50-5.idt.ds2.sslip.io icekube
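    To verify that the platform configuration was created (a minimal check; the exact resource names depend on the script version):

    kubectl get configmap,secret -n icekube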
    
  3. Install the Containerisation module by running the installfluxghorg.sh script. So far the Kubernetes controllers are available; they deploy a template Helm Chart from a HelmRelease CRD

    ./installfluxghorg.sh github_token github_organisation github_repository
    

    github_token: a GitHub user token. This is a personal access token that has access permissions to the org repository

    github_organisation: a GitHub organisation. At a later stage this will be the Marketplace organisation (e.g. ds2-marketplace) where modules are stored

    github_repository: the name of a repository in the GitHub organisation. At a later stage this will be a repository named after the organisation id of the participant as registered in the Portal. The repository is created via the Marketplace, and the acquired modules are placed in that GitHub repository. This is linked to the Marketplace purchase process

    An example of running the script

    ./installfluxghorg.sh ghp_qRoTB9z1w1xa3ki4hzrl5Vw9bQPd82pkaxU4 ds2-eu containerisation
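    To confirm that the controllers were installed (a minimal check, assuming the standard Flux layout where the controllers run in the flux-system namespace):

    kubectl get pods -n flux-system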
    

  4. Install the core IDT modules by running the copy_charts script. This installs and customises the core modules of the IDT. So far only the Containerisation UI and Backend are copied; at a later stage, the Connector and the Connector UI will be made available.

    ./copy_charts.sh github_user github_token github_organisation user_organisation organisation_domain
    

    github_user: The admin user for the customer organisation registered in the DS2 Portal

    github_token: The admin user token retrieved from the DS2 Portal

    github_organisation: a GitHub organisation. At a later stage this will be the Marketplace organisation (e.g. ds2-marketplace) where modules are stored

    user_organisation (github_repository): the name of the organisation, which must be the same as the GitHub repository name created for that organisation

    organisation_domain: The user organisation domain for the module URLs, e.g. customer.domain.com

    An example of running the script

    ./copy_charts.sh user ghp_qRoTB9z1w1xa3ki4hzrl5Vw9bQPd82pkaxU4 ds2-eu demoorg ds2.demoorg.com
    

Now the Containerisation UI can be accessed at https://containerisationfrontend.$organisation_domain.
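As a quick command-line check that the UI is reachable (using the example domain from the copy_charts example above; -k skips TLS verification for self-signed certificates):

    curl -k https://containerisationfrontend.ds2.demoorg.com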

How To Use

The Containerisation UI is now available for installing modules at the time of writing this documentation, and the templates for Module Helm Charts and Base Helm Charts are being developed. The guideline for installing a module using the UI is currently being written; the manual installation and manual chart design guides are already available below. This installation requires the IDT to be installed, given that the Helm template is configured to use the components of the IDT and some default platform configuration.

Developer Guide

UI Module Chart Design

At the time of writing the UI is being developed and not yet available to the users.

Manual Module Chart Design

To manually prepare the module chart for deployment via the Containerisation module, first create the HelmRelease chart using the demomodule template in the helmreleases repository at demomodule/.

  1. Clone the repository and copy the folder to another folder named after your module, e.g. mymodule. This will be the module chart.
    git clone https://github.com/ds2-eu/helmreleases.git
    cp -r helmreleases/demomodule/ mymodule
    
  2. Change the name and the version in the Chart.yaml file to the module name and version. You can edit the file or use sed

    sed -i 's/demomodule/mymodule/g' mymodule/Chart.yaml
    

  3. Navigate to the templates folder of the module chart

    cd mymodule/templates
    

  4. Duplicate the template's helmrelease file (demomodulecomponent1.yaml in the demomodule template) once per component in your module, and delete the original file. That is, if your module includes a backend and a frontend, you need two helmreleases; if there is also a database, three helmreleases, and so on. Name the files after the module and component, e.g. mymodulebackend, mymodulefrontend. Don't use '-' or similar characters. You can see an example of a two-component module (backend and frontend components) in the helmreleases repository in the demomodule2 folder.

    cp demomodulecomponent1.yaml mymodulebackend.yaml
    cp demomodulecomponent1.yaml mymodulefrontend.yaml
    rm demomodulecomponent1.yaml
    

  5. Edit each component's helmrelease file so that it uses the component name. Below is an example for the backend, where name and values.app.name are changed. Repeat this step for every component.

    apiVersion: helm.toolkit.fluxcd.io/v2beta1  # assumed Flux API version; adjust to your Flux release
    kind: HelmRelease
    metadata:
      name: {{ .Release.Name }}-mymodulebackend  # change the suffix to your component name
      namespace: {{ .Release.Namespace }}
    spec:
      interval: 10m
      chart:
        spec:
          chart: ./charts/ds2modulebase
          version: '1.0.0'
          sourceRef:
            kind: GitRepository
            name: ds2charts
            namespace: ds2
          interval: 10m
      values:
        app:
          name: mymodulebackend  # change this to the component name
          port: 80               # change this to the Docker container port of the component (the port exposed in the Docker image)
          env: true
        image:
          name: nginxmessage     # change this to the Docker image name of the component
          tag: "1.0.0"           # change this to the Docker image tag of the component
        service:
          port: 8080             # change this port only if you want to expose the Kubernetes cluster IP through another port
        {{- toYaml .Values.mymodulebackend | nindent 4 }}  # change to .Values.<yourcomponent>; REMOVE THIS line if your module has no extra configuration
    

  6. Navigate back to the main chart folder (cd ..) and edit the values.yaml file if your module uses configuration, e.g. env variables. In the example we add configuration to two components of the module, both with the same variable name WELCOME_MESSAGE. If a variable is a bool with values true or false, double- or single-quote the value, e.g. myvariable: "true"

    mymodulebackend:   # change this to the module component
      config:
        WELCOME_MESSAGE: "The flux containerisation helm chart nginx message backend"   # add as many key/value variables as you need
    mymodulefrontend:  # change this to another module component
      config:
        WELCOME_MESSAGE: "The flux containerisation helm chart nginx message frontend"  # add as many key/value variables as you need
    # add more components of the module here
    

  7. With this, the module is ready. Push the chart to your DS2 module repository (e.g. mymodule) under its /charts folder. Watch out if the repository has the same name as the chart; in that case, first move the chart into a charts folder, then clone the repository

    mkdir charts
    cp -r mymodule/ charts/
    rm -rf mymodule/
    git clone https://github.com/ds2-eu/mymodule.git
    cd mymodule
    mkdir -p charts
    cp -r ../charts/mymodule/ charts/
    git add charts/
    git commit -m "Add mymodule chart"
    git push
    

  8. For DS2, the Docker images for the modules need to be pushed to GitHub using the ds2charts image path. An example of an image in the ds2charts image path

    docker push ghcr.io/ds2-eu/ds2charts/nginxmessage:1.0.0
    

    ghcr.io is the registry

    ds2-eu/ds2charts is the path

    nginxmessage:1.0.0 the image name and version
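    Before pushing, the image must be built and tagged for the target platform. A minimal sketch, assuming a Dockerfile in the current folder (see also the linux/amd64 note in the tips below):

    docker build --platform linux/amd64 -t ghcr.io/ds2-eu/ds2charts/nginxmessage:1.0.0 .
    docker push ghcr.io/ds2-eu/ds2charts/nginxmessage:1.0.0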

Tips For Best Containerisation Integration

  • Building your images: The images will run on top of the IDT, and the IDT runs a Kubernetes cluster on Linux (typically Ubuntu Server), so you need to build your images for the linux/amd64 platform. This mainly affects those developing on macOS. The best option is to build the images on a Linux server.
  • Backend URL: A typical issue is generating the backend URL from the frontend. Modules are deployed on top of the IDT, and the IDT is deployed on-premise for different customer organisations, thus using different domains for the module URLs. The Helm Chart templates are therefore configured to use a platform configuration that contains the organisation domain; this platform configuration is stored in Kubernetes and is created during Containerisation installation. The Helm Chart templates are also configured to create by default an ingress that exposes an https URL for the different components, e.g. frontend and backend. The component URL is created automatically by joining the component name with the domain: for instance, if a component is called mymodulefrontend and the domain for the organisation is myorg.com, the ingress URL of that component is https://mymodulefrontend.myorg.com. So, to generate the backend URL dynamically from the frontend, the recommended approach is to obtain the domain from the frontend URL and then use the component name, which is static and is what you configured as the component name in the chart. An example of how to do this for dshare:
    const protocol = window.location.protocol;
    const hostname = window.location.hostname;
    const domain = hostname.replace(/^dsharefrontend\./, ""); // dsharefrontend is the name of the dshare frontend component
    app.config.globalProperties.$apiBase = `${protocol}//dsharebackend.${domain}`; // dsharebackend is the name of the dshare backend component
    
  • Database connection string: Another typical issue is how to access a database or any internal component that is not exposed to the internet. Here it is assumed that best practices are followed: the database is not exposed via ingress, and the frontend only accesses the database through a backend API. The Helm Chart templates are configured to expose the database within the cluster under the component name you configured in the chart. The port is the one exposed by the service; in the helmrelease example above it is service.port: 8080 from the values. You can set this port to any value, and that is then the port used to access the database. An example of a helmrelease for a database:

    ...
    values:
      app:
        name: mymoduledb   # change this to your database component name
        port: 3306         # this is the port in the image
        env: true
      image:
        name: mysql        # change this to the Docker image name of the component
        tag: "1.0.0"       # change this to the Docker image tag of the component
      service:
        port: 3300         # this is the port used to expose the service to the cluster
    ...

    With this configuration, the URL to access MySQL would be jdbc:mysql://mymoduledb:3300/dbname, where dbname is the name of the database in the image.
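    Following the same config mechanism shown in step 6 of the chart design guide, the connection string can be handed to the backend as an environment variable. A sketch, where DB_URL is a hypothetical variable name (your backend code must read it itself):

    mymodulebackend:
      config:
        DB_URL: "jdbc:mysql://mymoduledb:3300/dbname"  # hypothetical variable, not defined by the templates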

  • Ingress Enable/Disable: As discussed in step 5 above, when creating the helmrelease file for a component you can add additional variables to the values of the Helm Chart template. One of those variables enables or disables the creation of an ingress URL that exposes the component outside the cluster. Ingress is usually enabled (and is enabled by default) for frontend and backend components that need to be accessed from outside the cluster. For components like databases and others that don't need outside access, you can disable the ingress by adding the variable ingress.enabled set to false:

    apiVersion: helm.toolkit.fluxcd.io/v2beta1  # assumed Flux API version; adjust to your Flux release
    kind: HelmRelease
    metadata:
      name: {{ .Release.Name }}-mymodulebackend  # change the suffix to your component name
      namespace: {{ .Release.Namespace }}
    spec:
      interval: 10m
      chart:
        spec:
          chart: ./charts/ds2modulebase
          version: '1.0.0'
          sourceRef:
            kind: GitRepository
            name: ds2charts
            namespace: ds2
          interval: 10m
      values:
        app:
          name: mymodulebackend  # change this to the component name
          port: 80               # change this to the Docker container port of the component (the port exposed in the Docker image)
          env: true
        image:
          name: nginxmessage     # change this to the Docker image name of the component
          tag: "1.0.0"           # change this to the Docker image tag of the component
        service:
          port: 8080             # change this port only if you want to expose the Kubernetes cluster IP through another port
        ingress:
          enabled: false         # disables the ingress so the component is not reachable from outside the cluster
        {{- toYaml .Values.mymodulebackend | nindent 4 }}  # change to .Values.<yourcomponent>; REMOVE THIS line if your module has no extra configuration
    

  • Others: TBD

User Guide

Once the controllers have been deployed, Containerisation needs the following resources to deploy a module:

  1. A GitRepository source resource to pull the template Helm Chart

  2. The template Helm Charts, which are stored in the GitRepository that has been created. These templates are provided by the Portal and Marketplace, and ultimately by a Kubernetes administrator, so users don't need to create this resource.

  3. The Module Helm Release Chart, which is based on the HelmRelease CRD (Custom Resource Definition) from Flux. This is a simple way of referring to a Helm Chart and deploying it into a Kubernetes cluster. This Helm Release Chart is the chart representing the module and will be provided by the module developers; a minimal example is sketched after this list.

  4. The Containerisation HelmRelease Yaml File, which is the HelmRelease representing the Module Helm Release Chart. This file will be generated by Containerisation when deploying a module using the Containerisation UI.
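A minimal sketch of such a HelmRelease, abridged from the full annotated version in the Manual Module Deployment section below (the apiVersion line is an assumption; the exact Flux API version depends on the installed Flux release):

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: demomodule
      namespace: demomodule
    spec:
      interval: 10m
      chart:
        spec:
          chart: ./charts/demomodule
          version: '1.0.0'
          sourceRef:
            kind: GitRepository
            name: flux-system
            namespace: flux-system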

There are two module deployment methods, using the UI or manual deployment.

UI Module Deployment

At the time of writing the UI is being developed and not yet available to the users.

Manual Module Deployment

To manually install a module, assuming the IDT is already running and Containerisation has been installed following the instructions above, users can create their own Containerisation HelmRelease Yaml File and push it to the repository that the Containerisation controllers were configured with during installation. In the following example, the organisation fluxtest is used as an example of an organisation using Containerisation to deploy a demo module. Replace fluxtest with your organisation.

  1. Create the repository in GitHub and create a personal access token with permissions to read/write the repository

    Replace fluxtest with the name of your organisation

  2. Ensure the IDT has been installed

  3. Create the platform configuration for the organisation

    ./kubernetes_configuration.sh ds2-eu/ds2charts <demouser> <token> <domain> icekube
    

    Replace the <demouser>, <token> and <domain> variables with the GitHub user, the personal access token from GitHub, and the domain configured during IDT installation

  4. Install Containerisation

    ./installfluxghorg.sh <token> ds2-eu <fluxtest>
    

    Replace <token> and <fluxtest> with the personal access token from GitHub and the GitHub repository you created (the name of the organisation)

  5. Once everything is installed, Containerisation is configured to monitor changes in the repository, and the folders /clusters/my-cluster/flux-system are created. The flux-system folder already contains some yaml files, which are the components of Containerisation Flux itself; these must not be modified

  6. Now users need to create a Flux GitRepository resource that will monitor the Helm templates folder of the helmtemplates repository in ds2, and a namespace resource where this GitRepository will be created.
    Clone the repository (the organisation repository created in step 1)

    git clone <fluxtest>
    
    Create a gitrepository.yaml file
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: GitRepository
    metadata:
      name: ds2charts
      namespace: ds2
    spec:
      interval: 1m
      url: https://github.com/ds2-eu/helmtemplates.git 
      ref:
        branch: main
    
    Create a namespace.yaml file
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ds2
    
    Create a kustomization file kustomization.yaml
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - namespace.yaml
    - gitrepository.yaml
    
    Navigate to the locally cloned repository, create a ds2 folder in the clusters/my-cluster/ folder and copy the gitrepository.yaml, namespace.yaml and kustomization.yaml files into it. Then push the repository. The resources should be created in the cluster after a few seconds
    kubectl get gitrepository -n ds2
    
    [Screenshot: DS2 CONT gitrepository]

  7. Push the demo module chart to the repository to simulate what happens when a module is purchased from the Marketplace and copied to the organisation repository
    Clone the helmreleases repository

    git clone https://github.com/ds2-eu/helmreleases.git
    
    Copy the demomodule chart into the repository's charts folder and push it. If the charts folder does not exist yet, create it. The module won't be deployed yet.
    mkdir fluxtest/charts
    cp -r helmreleases/charts/demomodule/ fluxtest/charts/
    cd fluxtest && git add charts && git commit -m "Add demomodule chart" && git push
    

  8. Create the Containerisation Helm Release Yaml file, the namespace file and the kustomization file. First create helmrelease-demomodule.yaml, with values overriding any variable that needs to be configured; these variables are the ones in the values.yaml file of the module chart. If a variable is a bool with values true or false, double- or single-quote the value, e.g. myvariable: "true"

    apiVersion: helm.toolkit.fluxcd.io/v2beta1  # assumed Flux API version; adjust to your Flux release
    kind: HelmRelease
    metadata:
      name: demomodule       # change this to the name of the module
      namespace: demomodule  # change this to the name of the module
    spec:
      interval: 10m
      chart:
        spec:
          chart: ./charts/demomodule  # change this to ./charts/<name of the module>
          version: '1.0.0'            # change this to the module version, as in the Chart.yaml file of the module chart
          sourceRef:
            kind: GitRepository
            name: flux-system
            namespace: flux-system
          interval: 10m
      values:
        demomodulecomponent1:  # change this to a module component
          config:
            WELCOME_MESSAGE: "COMPONENT1"  # change according to the variables in the values.yaml file of the module chart; the value can be changed
    
    You can copy the helmrelease-demomodule.yaml from the helmreleases repository in the releases/demomodule/ folder.
    Create the namespace-demomodule.yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: demomodule  # change this to the name of the module
    
    You can copy the namespace-demomodule.yaml from the helmreleases repository in the releases/demomodule/ folder. Create the kustomization file
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - namespace-demomodule.yaml    # change this to namespace-<nameofthemodule>.yaml
    - helmrelease-demomodule.yaml  # change this to helmrelease-<nameofthemodule>.yaml
    
    You can copy the kustomization.yaml from the helmreleases repository in the releases/demomodule/ folder. Copy the yaml files to the fluxtest/clusters/my-cluster/releases/demomodule/ folder and push the repository. If the releases and demomodule folders do not exist yet, create them. The module will be deployed automatically by the Containerisation Flux component after a few seconds. Run the following command to check that the helmreleases are installed
    helm ls -n demomodule
    
    [Screenshot: DS2 CONT helmcharts] You can see there are two charts created: the demomodule helm chart, which corresponds to the Containerisation HelmRelease, and demomodule-demomodulecomponent1, which is triggered by the first one and expanded into a full chart using the templates from the modulebase chart. Run the following command to check the pod is running
    kubectl get pods -n demomodule
    
    [Screenshot: DS2 CONT pods] The modulebase chart also creates an ingress to access the module. Run the following command to retrieve the ingress URL
    kubectl get ing -n demomodule
    
    [Screenshot: DS2 CONT ingress] Run the following command to test access to the module
    curl -k https://demomodulecomponent1.192-168-50-5.idt.ds2.sslip.io
    
    [Screenshot: DS2 CONT curl] Your IP will be different, depending on the domain configured during installation when the platform configuration was created.

If you want to deploy another module using the same templates but with a different name, just for testing purposes, repeat the previous steps replacing demomodule with the new module name. Place the module chart in the charts/ folder and the release files in the releases/ folder, and the module will be deployed automatically by Containerisation Flux; a renaming sketch follows below.
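For instance, a minimal renaming sketch that derives a hypothetical testmodule from the demo files, reusing the sed approach from the chart design guide (paths follow the fluxtest example above; adjust to your layout):

    # Copy the demo chart and release files under the new name
    cp -r fluxtest/charts/demomodule fluxtest/charts/testmodule
    mkdir -p fluxtest/clusters/my-cluster/releases/testmodule
    cp helmreleases/releases/demomodule/*.yaml fluxtest/clusters/my-cluster/releases/testmodule/
    # Replace every occurrence of demomodule with testmodule in the copied files
    grep -rl demomodule fluxtest/charts/testmodule fluxtest/clusters/my-cluster/releases/testmodule | xargs sed -i 's/demomodule/testmodule/g'
    # Rename the release files to match, then commit and push the repository
    cd fluxtest/clusters/my-cluster/releases/testmodule
    mv namespace-demomodule.yaml namespace-testmodule.yaml
    mv helmrelease-demomodule.yaml helmrelease-testmodule.yaml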

Other Information

No other information at the moment for Containerisation

OpenAPI Specification

To Be Done

Video https://youtube.com/cont

Flux https://fluxcd.io/

Containerisation Repository https://github.com/ds2-eu/containerisation (Private Link for Project Members)