Kubernetes Installation
General Prerequisites
- Kubernetes v1.24.0 or higher.
- Helm and kubectl installed.
- Sufficient permissions to deploy the ERS helm resources on the cluster.
Kubernetes Node Requirements
A high entropy source is needed for the kubernetes nodes. For OSes with a Linux kernel newer than 5.4, or for Windows systems, no additional action is needed. For older OSes, the haveged service can be installed.
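As a quick sanity check (a minimal sketch for Linux nodes; the commands are generic, not ERS-specific), you can verify the kernel version and the available entropy directly on a node:
# Print the kernel version; 5.4 or newer needs no additional action
uname -r
# Print the currently available entropy; persistently low values indicate a weak entropy source
cat /proc/sys/kernel/random/entropy_avail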
The 'sse2' CPU flag must be available on virtualized kubernetes nodes. Otherwise, some components cannot work correctly. To check whether the flag is present, you can use:
if grep -q sse2 /proc/cpuinfo ; then echo "sse2 flag is present" ; else echo "no sse2 flag present" ; fi
MTG Docker Registry Requirements
Network access to the MTG Docker & Helm registry "repo.mtg.de" and to other public Docker image registries is required.
Log in to the MTG Docker repository with the username and password you were provided, via:
docker login -u <MTG_DOCKER_REPO_USER> repo.mtg.de
If you want to verify access for MTG CPKI, make sure you are able to download a container image:
docker pull repo.mtg.de/releases/ers/clm/mtg-clm-ui:4.1.0
or if you want to verify access for MTG KMS:
docker pull repo.mtg.de/releases/ers/kms/kms-ui:3.3.0
The user for the login command can be acquired from the MTG download center. The contract/account holder must log in to this platform and set a password for the repository user.
Database Requirements
The ERS application needs a connection to an external database system. Currently, Postgres and MariaDB are supported.
The applications deployed on the k8s cluster must have access to the following EMPTY databases.
You can create them as follows:
Postgres
Databases on Postgres can be created using the following SQL commands:
CREATE USER ers LOGIN PASSWORD 'changeit';
CREATE DATABASE clm WITH OWNER = 'ers';
CREATE DATABASE keycloak WITH OWNER = 'ers';
CREATE DATABASE cara WITH OWNER = 'ers';
CREATE DATABASE acme WITH OWNER = 'ers';
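One way to execute these statements (a sketch; the host, port, and superuser name are placeholders for your environment):
# Save the statements above to a file, e.g. ers-databases.sql, then run it as a Postgres superuser
psql -h 192.0.2.189 -p 5432 -U postgres -f ers-databases.sql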
MariaDB
Databases on MariaDB can be created using the following SQL commands:
CREATE USER ers@'%' IDENTIFIED BY 'changeit';
CREATE USER ers@localhost IDENTIFIED BY 'changeit';
GRANT ALL PRIVILEGES ON *.* TO 'ers'@'%';
FLUSH PRIVILEGES;
CREATE DATABASE cara CHARACTER SET = 'utf8' COLLATE = 'utf8_unicode_ci';
CREATE DATABASE clm CHARACTER SET = 'utf8' COLLATE = 'utf8_unicode_ci';
CREATE DATABASE keycloak CHARACTER SET = 'utf8' COLLATE = 'utf8_unicode_ci';
CREATE DATABASE acme CHARACTER SET = 'utf8' COLLATE = 'utf8_unicode_ci';
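Similarly, a sketch for MariaDB (the host and root credentials are placeholders for your environment):
# Save the statements above to a file, e.g. ers-databases-mariadb.sql, then run it as the MariaDB root user
mysql -h 192.0.2.189 -P 3306 -u root -p < ers-databases-mariadb.sql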
Network Requirements
Ingress Controller
The k8s cluster must have an nginx ingress controller running (for the current chart version). For more on nginx, refer to the NGINX Ingress Controller documentation.
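To check that a controller is present (a sketch; the label below is the one used by the upstream ingress-nginx chart and may differ in your setup):
# List the registered ingress classes
kubectl get ingressclasses.networking.k8s.io
# List the controller pods (adjust the label and namespace to your installation)
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx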
Hardware Security Module (HSM) Requirements
An HSM can be used with MTG ERS and is highly recommended for production deployments. Currently, the HSMs listed here are supported.
If you decide to use an HSM, the following conditions must be met (a verification sketch follows this list):
- Network access to the HSM must be permitted.
- The connection can only be done over the PKCS11 interface. This means that you must already have:
  - Initialized a slot on the HSM (created the security officer).
  - Created the PIN for the crypto user (it will be used later for the connection to the HSM).
  - Prepared the Cryptoki library (.so) to be loaded into the ERS system, as described thoroughly in the ERS connection section.
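One way to verify the slot and the crypto user PIN before wiring them into ERS (a sketch, assuming OpenSC's pkcs11-tool is installed; the library path is a placeholder):
# List the slots exposed by the Cryptoki library
pkcs11-tool --module /path/to/libcs_pkcs11_R3.so --list-slots
# Log in with the crypto user PIN and list the objects in the slot
pkcs11-tool --module /path/to/libcs_pkcs11_R3.so --login --list-objects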
Initial deployment of MTG ERS via helm
Creating a registry secret
As mentioned in Prerequisites, you must have set a password for the helm repo.
kubectl create secret docker-registry regcred --namespace mtg --docker-server=https://repo.mtg.de/ --docker-username="Repo_1234" --docker-password="Repo_password"
The registry secret name in the above command is regcred, and it is created in the same namespace where the system is deployed.
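To confirm the secret was created correctly (optional check):
# Show the secret and decode the registry configuration it stores
kubectl get secret regcred --namespace mtg
kubectl get secret regcred --namespace mtg -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode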
Configuring the helm repo on your machine
helm repo add mtg-repo https://Repo_1234:Repo_password@repo.mtg.de/repository/helm/
To update the repo execute:
helm repo update
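To confirm that the repo is reachable and the ers chart is visible (optional check):
helm search repo mtg-repo/ers --versions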
Creating the values file
Here, it is assumed that no HSM is in use. The keys are stored encrypted in the database. For detailed instructions on connecting to an HSM, refer to the ERS connection section.
In simple deployments, create a new file named values.yaml, including the following content:
ingress:
  className: "nginx"
  ers:
    host: "erstest.example.com"
  cara:
    host: "erstest.example.com"
  ocsp:
    host: "erstest.example.com"
  crl:
    host: "erstest.example.com"
ers:
  db:
    hosts: "192.0.2.189:3306"
    type: "mariadb"
    caradb:
      name: "cara"
      username: "ers"
      password: "changeit"
    clmdb:
      name: "clm"
      username: "ers"
      password: "changeit"
    keycloakdb:
      name: "keycloak"
      username: "ers"
      password: "changeit"
    acmedb:
      name: "acme"
      username: "ers"
      password: "changeit"
  defaultPki:
    common:
      organization: "MTG AG"
      organizationalUnit: "Default"
      country: "DE"
    rootCa:
      commonName: "Default Root CA"
      validityInYears: "30"
      keyAlgorithm: "RSA"
      keyParameter: "4096"
      signatureAlgorithm: "SHA384withRSA"
    subCa:
      commonName: "Default Sub CA"
      validityInYears: "25"
      keyAlgorithm: "RSA"
      keyParameter: "4096"
      signatureAlgorithm: "SHA384withRSA"
    managementRootCa:
      commonName: "Default Management RA Root"
      validityInYears: "15"
      keyAlgorithm: "RSA"
      keyParameter: "4096"
      signatureAlgorithm: "SHA384withRSA"
    managementAdminUser:
      firstName: "Devops"
      lastName: "Devops"
      password: "changeit"
      email: "devops@mtg.de"
      p12Password: "changeit"
    keycloakAdminUser:
      username: "kadmin"
      password: "kadminpassword"
  clmClients:
    estServer:
      enabled: true
    cmpServer:
      enabled: true
    scepServer:
      enabled: true
    acmeServer:
      enabled: true
Then, proceed to make the following changes:
- Update the ingress class name in ingress.className, to match the class name on the cluster. To check the name of the ingress class on your k8s cluster:
kubectl get ingressclasses.networking.k8s.io
- Update the host values in ingress.ers.host, ingress.cara.host, ingress.ocsp.host, ingress.crl.host. In simple deployments, you can use just one host to access all the components.
- Update the host of the database server in ers.db.hosts.
- Update the database type. Choose either mariadb or postgres.
- Update the name of the database for every component. Also update the user and password.
- Update all elements under ers.defaultPki.
- Choose the managementAdminUser on the system. Update its name, the password and email used to log in to the CLM UI, and the certificate password to log in to CARA.
- The keycloak admin user is created only once. Choose a non-personal user (e.g., keycloakroot, kcroot, kcadmin).
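Before installing, you can render the chart locally against your values file to catch obvious mistakes (a sketch; nothing is applied to the cluster):
# Render the templates with your values; errors in the values file will surface here
helm template test mtg-repo/ers -f /path/to/values.yaml > /dev/null && echo "values render cleanly"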
Helm Release Name
You must select a name for the helm deployment (or helm release).
In this document, test is used as a reference to the selected helm release.
A helm chart can be installed many times (many instances of the same chart can be created).
Every instance has a name (called the release name).
You will reference the release name when you call the install or upgrade helm commands.
Installing a new release of the helm chart
Remember the name you picked for this helm release deployment (as stated in Prerequisites).
Assuming the name is test:
helm upgrade --install --create-namespace --namespace mtg test mtg-repo/ers -f /path/to/values.yaml
Where:
- mtg refers to the namespace
- test refers to the release name
- mtg-repo refers to the helm repo
- /ers is the ers chart on the repo. This is the name of the package on the repo. Leave it as is, with /ers.
After installing the helm chart, an init script will start. You must wait until it’s complete.
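To follow the progress (optional; the exact pod and job names depend on the chart version):
# Watch the pods and init jobs come up in the release namespace
kubectl get pods --namespace mtg --watch
kubectl get jobs --namespace mtg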
Once the init script is done (it takes around 10 minutes), you can start working on the system. You should be able to complete the following steps:
- Open the CLM UI: check the host value in ingress.ers.host and use the path /clm-ui.
- Log in to the CLM UI. During the login process, you are redirected to /auth to provide username and password.
- Switch to the default realm in the CLM UI. Avoid using the system realm.
- Check if you have all clients and all default policies created for you.
If not, you may need to uninstall the chart and repeat the process. Before doing so, remember to drop all databases.
If the system works correctly, you should also be able to download the certificate keystore of the managementAdminUser.
In a helm chart deployment with a release name test, in a namespace called mtg, the above can be accomplished using:
kubectl get secrets --namespace mtg -o jsonpath='{ .data.management-user }' test-management-user | base64 --decode > test-management-user.p12
If you are able to download the keystore, load it into the browser (use the password in ers.defaultPki.managementAdminUser.p12Password).
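Before importing, you can optionally inspect the downloaded keystore with OpenSSL (you will be prompted for the p12 password):
openssl pkcs12 -info -in test-management-user.p12 -noout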
Open the cara-admin UI, preferably in a private browser window (to avoid old authentication decisions you made in the browser), and access /cara-admin on the host you configured in ingress.cara.host.
Uninstalling the chart release
helm uninstall --namespace mtg test
Where:
- mtg is the namespace where the chart release is deployed
- test is the name of the chart release you have deployed
Reinstalling the helm chart
It is possible to reinstall the same chart release with the same name, but the database must be empty (you must drop and recreate all databases).
Understanding the structure of the helm chart deployment
The name of any object created by the helm chart is prefixed by the release name.
For example, the StatefulSet of the CARA WS server in a release with name test has the name test-cara-ws-server.
The following are the names of all the components deployed using the helm chart with a helm release called test:
Workloads
- The ACME server (Deployment): test-acme-server
- The CARA admin (Deployment): test-cara-admin
- The CARA WS server (StatefulSet): test-cara-ws-server
- The CLM server (API) (StatefulSet): test-clm-server
- The CLM server UI (Deployment): test-clm-ui
- The CMP server (StatefulSet): test-cmp-server
- The EST server (Deployment): test-est-server
- The Authentication server (StatefulSet): test-keycloak
- The Revocation server (Deployment): test-rev-info
- The SCEP server (StatefulSet): test-scep-server
To check the generated logs from the deployed workloads:
For Deployments (e.g., for a Revocation server, deployed in a helm release called test, in a namespace called mtg):
kubectl logs --namespace mtg --tail 300 --follow deployments/test-rev-info
For StatefulSets (e.g., for a CARA ws server deployed in a helm release called test, in a namespace called mtg):
kubectl logs --namespace mtg --tail 3000 --follow statefulsets/test-cara-ws-server
The helm chart applies some common labels on the ERS Deployments and StatefulSets. Those are:
- Label helm.sh/chart: ers-<chart-version>, example: helm.sh/chart: ers-0.1.9
- Label app.kubernetes.io/name: ers (indicates the name of the chart)
- Label app.kubernetes.io/instance: <release-name>, example: app.kubernetes.io/instance: test (indicates the name of the helm release)
- Label app.kubernetes.io/managed-by: helm (indicates the tool used to manage the resource)
Extra labels can be added using the helm chart values.
The helm chart applies some common labels on pods. Those are:
- Label app.kubernetes.io/name: ers (indicates the name of the chart)
- Label app.kubernetes.io/instance: <release-name>, example: app.kubernetes.io/instance: test (indicates the name of the helm release)
- Label app.kubernetes.io/component: mtg-<application-name>, example: app.kubernetes.io/component: mtg-acme-server (indicates the application running in the pod)
- Label app.kubernetes.io/version: <image-tag>, example: app.kubernetes.io/version: 3.10.0 (indicates the application version running in the pod)
To print the labels of the pods execute:
kubectl get pods --namespace mtg -o jsonpath='{ .metadata.labels }' test-cara-admin-68bc5fccb4-dx5k8
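The same labels can be used to select pods, for example to list everything that belongs to the release:
kubectl get pods --namespace mtg -l app.kubernetes.io/instance=test --show-labels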
ConfigMaps
The following are the names of all configMaps, deployed using the helm chart with a helm release called test:
- The config for ACME server: test-acme-server
- The config for CARA admin: test-cara-admin
- The config for CARA ws server: test-cara-ws-server
- The config for CLM server (API): test-clm-server
- The config for CLM UI: test-clm-ui
- The config for CMP server: test-cmp-server
- The config for EST server: test-est-server
- The config for Revocation server: test-rev-info
- The config for keycloak server: test-keycloak
- The config for keycloak realm: test-keycloak-realm-json
To print the content of a config file:
kubectl get configmaps --namespace mtg -o json test-scep-server
In addition to the previous configMaps, other configs are installed for the init jobs. Those are:
- Config used with the admin CLI tool: test-cara-admin-cli
- Config used with CARA ws initializer tool: test-cara-ws-init
- Config used with CLM initializer script: test-cara-clm-init
- Config used with keycloak initializer script: test-keycloak-init
Secrets
Changing the secrets handling is currently in progress.
As of now, there is one secret object, holding the keycloak client secrets and IDs.
In a helm release called test, this secret is named test-ers-clients.
To print the content of the secret execute:
kubectl get secrets --namespace mtg -o json test-ers-clients
Additionally, the init script will create the following secrets:
- Secret that holds the user management p12 keystore: test-management-user
- Secret that holds certificates for the Default CAs: test-default-cas
- Secret that holds all the certificates that have to be trusted by Nginx for successful client authentication: test-trust-cas
- Secret that holds the management Root certificate: test-keycloak
- Secret that holds the TLS server certificate/key pair to be used by ingress: test-ingress
You can also add a secret for Nginx TLS server certificate (e.g., issued by any cert provider), if you wish to do so.
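A minimal sketch of creating such a TLS secret (the secret name is illustrative; how it is referenced by the ingress depends on the chart values):
kubectl create secret tls my-custom-ingress-tls --namespace mtg --cert=/path/to/tls.crt --key=/path/to/tls.key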
Ingress
On a release called test, an ingress resource is created with the name test-ers, with four hosts:
- A host for the main ERS components, with routes to the following paths:
  - /clm-ui to the CLM UI server
  - /clm-api to the CLM server
  - /acme to the ACME server
  - /est to the EST server
  - /scep to the SCEP server
  - /cmp to the CMP server
  - /auth to keycloak
  - /aec to the AEC server
- A host for the MTG Certificate provider (CARA), with routes to the following paths:
  - /cara-ws-server to the CARA WS server
  - /cara-admin to the CARA admin server
- A host for the CRL distribution point, with route path /rev-info to the revocation server
- A host for the OCSP responder, with route path /rev-info to the revocation server
For the first and second hosts, the connection must be on port 443.
For the third and fourth hosts, the connection must be on port 80.
A TLS server certificate will be generated using the default CAs and used for TLS on the first and second host.
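To inspect the generated ingress resource, its hosts, paths, and TLS configuration:
kubectl get ingress --namespace mtg test-ers
kubectl describe ingress --namespace mtg test-ers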
Roles, Service accounts, and RoleBindings
A service account is created and mounted onto the init job. A role is created and bound to this service account.
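To list the RBAC objects created in the release namespace (optional check):
kubectl get serviceaccounts,roles,rolebindings --namespace mtg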
Connecting ERS system to HSM
When deploying the MTG Certificate provider (CARA), you have the option to choose between connecting to an HSM (production scenarios) or storing keys in the database.
This can be configured in the helm chart values:
ers:
  ## ....
  hsm:
    type: "Software"
    deviceName: "Any Name"
    deviceUri: "libcs_pkcs11_R3.so"
    deviceDomain: "3001@192.0.2.100 - SLOT_0002"
    pin: "changeit"
    masterPassword: "changeit"
  ## ....
To connect to the HSM:
- Set ers.hsm.type to Generic HSM.
- Give in ers.hsm.deviceName a name for the HSM.
- Set in ers.hsm.deviceUri the name of the library (more info on loading the library can be found in the library loading section).
- Set in ers.hsm.deviceDomain the name of the PKCS11 slot (called entity, domain, or slot by different vendors). Consult the HSM vendor (the above is an example for Utimaco HSMs; for nCipher it is the softcard name).
- Set the PIN for the crypto user in ers.hsm.pin. Check with the HSM vendor for more information on how to set such a user PIN.
- Set a random password (used to store the PIN securely in the database) in ers.hsm.masterPassword.
Loading the Cryptoki Library onto cara-ws-server
The library you set in ers.hsm.deviceUri must be loaded into the cara-ws-server container, at /home/mtg/.
In addition to that, you may want to load an HSM-specific config file or environment variable.
There are multiple ways to do that.
Bringing the HSM Cryptoki Library onto cara-ws-server
One way is to start an init container before the main cara-ws-server container.
Mount a volume (of type emptyDir), copy the lib from that init container to the mount directory of the volume, and then start cara-ws-server with the same volume loaded in /home/mtg/.
The above is demonstrated in the following example (helm values file):
ws:
  extraVolumes:
    - name: lib
      emptyDir: {}
  extraVolumeMounts:
    - mountPath: "/home/mtg"
      name: lib
      readOnly: true
  extraInitContainers:
    - name: copy-hsm-lib-to-cara-ws-container
      image: "ubuntu:jammy"
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/home/mtg"
          name: lib
          readOnly: false
      command: ["/bin/sh"]
      args:
        - "-c"
        - |
          echo "Do here something to get the lib, then move it where you mounted the volume"
          cp lib.so /home/mtg/
          echo "Done! Init container completed, will start now cara-ws-server"
In the init container, you can either download the library (using wget, curl, etc.), use an image that already contains the library, or even mount a directory on the host where the library is located.
To mount the library from a path on the host, you add a host volume (and optionally a nodeSelector, to guarantee scheduling the pod on the host where the path exists).
ws:
  extraVolumes:
    - name: host
      hostPath:
        path: "/data/foo"
        type: Directory
    - name: lib
      emptyDir: {}
  extraVolumeMounts:
    - mountPath: "/home/mtg"
      name: lib
      readOnly: true
  extraInitContainers:
    - name: copy-hsm-lib-to-cara-ws-container
      image: "ubuntu:jammy"
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/data/foo"
          name: host
          readOnly: true
        - mountPath: "/home/mtg"
          name: lib
          readOnly: false
      command: ["/bin/sh"]
      args:
        - "-c"
        - |
          cp /data/foo/lib.so /home/mtg/
          echo "Done! Init container completed, will start now cara-ws-server"
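After the pod restarts, you can verify that the library landed in /home/mtg/ (a sketch; the pod name follows the usual StatefulSet naming, e.g. <release>-cara-ws-server-0):
kubectl exec --namespace mtg test-cara-ws-server-0 -- ls -l /home/mtg/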
Adding HSM config to cara-ws-server
In order to establish a successful connection to some HSMs, you need to add a config file or environment variables to the cara-ws-server.
Prior to the helm chart deployment, you can create a configMap that holds a config file for the HSM. Then, mount this file into the cara-ws-server container. To illustrate the above, here is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hsm-config
  namespace: mtg
data:
  hsm.cfg: |
    content of a config file goes here
    more content goes here..
    ...
    etc
Deploy this configMap with kubectl command:
kubectl apply -f /path/to/configmap
In the helm values file, mount the configMap into the cara-ws-server container:
ws:
  extraVolumes:
    - name: hsm-config-volume
      configMap:
        name: hsm-config
  extraVolumeMounts:
    - mountPath: "/tmp/config"
      name: hsm-config-volume
      readOnly: true
You can add extra environment variables to cara-ws-server container.
The above is demonstrated in the following example (helm values file):
ws:
  extraEnvs:
    - name: CS_PKCS11_R3_CFG
      value: "/tmp/config/hsm.cfg"
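To verify that the variable and the mounted config are visible inside the container (a sketch; the pod name and variable follow the examples above):
kubectl exec --namespace mtg test-cara-ws-server-0 -- printenv CS_PKCS11_R3_CFG
kubectl exec --namespace mtg test-cara-ws-server-0 -- cat /tmp/config/hsm.cfg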
Running a sidecar service required to connect to the HSM
For some HSMs, it is necessary to run a service in the cluster that initiates a connection to the HSM. The HSM library (loaded into cara-ws-server) will connect to this service.
For that purpose, you can run an extra container in the cara-ws-server pod.
In the helm values file, define an extra container to run. For example:
ws:
  extraContainers:
    - name: copy-hsm-lib-to-cara-ws-container
      image: "ubuntu:jammy"
      imagePullPolicy: IfNotPresent
      command: ["/bin/sh"]
      args:
        - "-c"
        - |
          echo "This container is running in parallel to cara-ws-server. Both share the same network namespace"
          tail -f /dev/null
Using nodeSelectors, toleration and affinity to impact the pod scheduling behavior
In some scenarios you may want to pick the node where a specific pod is scheduled.
Using the standard pod labels (refer to the Helm chart deployment structure section), a selection of pods (to impact their scheduling process) can be made.
The techniques used are nodeSelectors, tolerations, and affinity.
Assume a scenario where you want to start cara-ws-server on a node that has a taint (for demonstration purposes, suppose this node is worker33).
Since you have restricted HSM access to worker33, you want the pods that must connect to the HSM to tolerate scheduling on this node.
You add a taint to the node, for example:
kubectl taint node worker33 hsm=whatever:NoSchedule
To verify the taint on the node:
kubectl get nodes -o jsonpath='{ .spec.taints }' worker33
Then, you want to tolerate that taint on cara-ws-server pod. In the helm values file:
ws:
  tolerations:
    - key: "hsm"
      operator: "Equal"
      value: "whatever"
      effect: "NoSchedule"
With that, you will be able to schedule the cara-ws-server pod on worker33.
However, this does not guarantee the scheduling on that node, because adding a toleration to a taint doesn’t prevent scheduling on a different node.
To guarantee the scheduling of the cara-ws-server pod on worker33 for the previous use-case, you need to use either nodeSelector or affinity, in addition to the toleration.
With nodeSelectors, you must first find a unique label for the node you want to select:
kubectl get nodes -o jsonpath='{ .metadata.labels }' worker33
Then you need to add a selector in the helm values file. For example:
ws:
  nodeSelector:
    kubernetes.io/hostname: "worker33"
A different way to implement that is using affinity.
An example of how to do it with the nodes worker33 and worker44:
ws:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker33
                  - worker44
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker33
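To check on which node the cara-ws-server pod was actually scheduled:
kubectl get pods --namespace mtg -o wide | grep cara-ws-server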
Additionally, you can schedule some pods based on the placement of other pods (pod affinity). A possible scenario for this is co-locating a pod on the same node where another pod is already running.