Introduction
Why Kubernetes
Compared to running Reaction in plain Docker, Kubernetes offers better scalability, easier version rollbacks, and automatic restarting of failed containers.
Docker images & Docker image repository
The images used on the cluster are built the same way as they would be if they were run outside of Kubernetes. To deploy them to Kubernetes using Helm, they need to be hosted somewhere the cluster can reach. For this we host our own Sonatype Nexus repository manager, but you could use Google's registry or something similar.
Helm
Every module in Reaction needs its own chart (storefront, admin, identity, reaction, and hydra). The charts are located in the ./charts folder inside the project; for example, the storefront chart lives at ./charts/storefront in the root of the project. The custom values.yaml (environment-specific, e.g. values.test.yaml or values.prod.yaml) can be located either in the root folder or in the ./config folder, NOT in the charts folder, as that creates conflicts for Helm. These files should be secured with git secret. We chose to keep them inside the project so they are part of git versioning and we can revert to an older version; another benefit is that it makes it easier to deploy from a local machine and you don't have to manage them in the pipeline environment.
Charts
The charts are fairly basic, with a few minor changes in ingresses and services. These changes accommodate Reaction routing (you can't run admin on http://www.url.com/admin next to the storefront on http://www.url.com). Hydra uses a modified version of the Hydra chart, and admin/identity both have an extra ingress for authentication purposes. There is a set of generic charts in this repo.
Setup
Prerequisites
- kubectl CLI
- Helm CLI
- Cluster-host-specific CLI (minikube/az/gcloud/etc.)
- Kubernetes cluster
- Reaction projects on local machine
- Charts
- Docker image repository
Info
Everything between <> is a placeholder that stays consistent across charts/commands (i.e. <namespace> and <storefront-url> are the same everywhere). Whenever a command has -f <path-to-file>, it references a file in the config folder.
Order
We follow the order Ingress > Cert manager > Databases > Hydra > Reaction > Identity > Storefront & Admin. This follows the dependency structure of Reaction: Hydra, Reaction, and Identity depend on the databases, and Admin and Storefront depend on those in turn.
Images
Before we can deploy to the cluster, all the images need to be in a place where they are available to the cluster. For this we need to build and push the images to Docker Hub or the private Nexus. See the docker push documentation for more information.
If you use a private image repository you need to make it accessible to the cluster. This is done with a Kubernetes secret that references a Docker config file; the file can be added using the kubectl CLI (see the Kubernetes documentation for more information). When filling in the values.yaml for a project, the image/tag used to upload the image needs to be filled in under the image information.
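For example, the image section of a values.yaml could look like this (a sketch following common Helm conventions; the exact keys depend on your chart templates, and <nexus-url>/<project-name> and <image-tag> are placeholders):

image:
  # repository must match the registry/name used when pushing the image
  repository: <nexus-url>/<project-name>
  tag: <image-tag>
  pullPolicy: IfNotPresent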
Connect to cluster
To reach the cluster with the helm and kubectl CLIs you need to connect to it. This requires the CLI of the respective host; for Azure and Google Cloud these are the commands:
az aks get-credentials --name <cluster-name> --resource-group <resource-group-name>
gcloud container clusters get-credentials <cluster-name> --region <region-name> --project <project-name>
Google Cloud has a Connect button at the cluster level that lets you copy this command; for Azure you will have to look up and fill in the correct values yourself.
After connecting, you can run a quick check to see if everything is working:
kubectl get pods --all-namespaces
This should list all pods and their state.
Cluster setup
Namespace
If the namespace you want to deploy to doesn't exist, create it:
kubectl create namespace <namespace>
Nexus
To allow the cluster to retrieve images from the private Nexus, you can either use a Nexus login to generate a secret in the namespace:
kubectl create secret docker-registry -n <namespace> docker-cfg --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-password> --docker-email=<your-email>
Or generate a secret file from the Nexus and upload that:
kubectl create secret generic -n <namespace> docker-cfg --from-file=.dockerconfigjson --type=kubernetes.io/dockerconfigjson
From the values.yaml file we reference this secret so the cluster can reach the Nexus and retrieve the image.
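In the values.yaml this could look like the following (a sketch; the key name follows the common Helm convention, and the name must match the docker-cfg secret created above):

imagePullSecrets:
  # references the registry secret created with kubectl above
  - name: docker-cfg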
Ingress
We use the latest stable nginx ingress controller, installed on our cluster using the following commands:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
helm install <ingress-deployment-name> stable/nginx-ingress --namespace nginx
You can then check if it is running by listing all the pods in the namespace:
kubectl get pods -n nginx
Cert manager
To make sure our certificates are always valid, we install the Let's Encrypt based cert-manager from the Jetstack chart. See https://letsencrypt.org/docs/ for more information.
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.14.0
kubectl create -f ./cluster-issuer.yaml
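The cluster-issuer.yaml applied here could look like the following (a minimal sketch for cert-manager v0.14; the issuer name letsencrypt-prod is our own choice and <your-email> is a placeholder):

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  # name is referenced later from the ingress annotations
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx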
Databases
The Reaction framework requires a PostgreSQL database and a MongoDB database with replication. You can run these either inside the cluster (we used the Bitnami charts for this) or outside the cluster. If the databases run outside the cluster, all you need to do is supply the charts with the correct environment variables to connect to them and make sure they are reachable from the database side (private IP or whitelisting the nodes).
Running in cluster
To use the unmodified Bitnami charts for MongoDB and PostgreSQL you need to make them available to the cluster.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
PostgreSQL
You can deploy PostgreSQL by running the following command:
helm install -n <namespace> <deployment-name> bitnami/postgresql -f postgresql.values.yaml
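The postgresql.values.yaml supplies the credentials Hydra will use; a minimal sketch, assuming the value names of the Bitnami PostgreSQL chart at the time of writing:

# credentials appear in the connection string below
postgresqlUsername: <user>
postgresqlPassword: <password>
postgresqlDatabase: hydra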
This will result in the following connection string:
postgres://<user>:<password>@<deployment-name>.<namespace>/hydra?sslmode=disable
PostgreSQL is not exposed outside of the cluster, but it can be reached through port forwarding or from the command line of a Hydra pod (see the Hydra section). For port forwarding, run this command:
kubectl port-forward -n <namespace> svc/<postgresql-service-name> 5432:5432
MongoDB
You can deploy MongoDB by running the following command:
helm install -n <namespace> <deployment-name> bitnami/mongodb -f mongo.values.yaml
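The mongo.values.yaml enables the replica set Reaction requires; a minimal sketch, assuming the value names of the Bitnami MongoDB chart at the time of writing:

mongodbRootPassword: <root-password>
mongodbUsername: <account-name>
mongodbPassword: <account-password>
mongodbDatabase: <database>
replicaSet:
  # replica set name must match the replicaSet=rs0 part of the connection string
  enabled: true
  name: rs0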
MongoDB will be accessible from inside the cluster with the following connection string:
mongodb://<account-name>:<account-password>@<deployment-name>.<namespace>/<database>?authSource=admin&replicaSet=rs0
This does not expose MongoDB to the outside world. In this configuration, if you want to access the database, you need to forward the service to your machine's localhost with kubectl, which can be done with this command:
kubectl port-forward -n <namespace> svc/<mongo-service-name> 27017:27017
This will allow you to seed your database with products and connect to it through the mongo shell or MongoDB Compass via mongodb://<account-name>:<account-password>@localhost:27017/<database>?authSource=admin&replicaSet=rs0.
To expose MongoDB outside of the cluster you will need to forward port 27017 from outside the cluster to the service and specify a TCP connection. If you don't do this, the ingress controller will wrap any messages in HTTP and a connection cannot be made.
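One way to set this up with the nginx ingress controller is through its tcp value map, which maps a controller port straight to a service (a sketch, assuming the stable/nginx-ingress chart; <mongo-service-name> is the MongoDB service as above):

# expose 27017 on the controller as a raw TCP proxy to the MongoDB service
helm upgrade --install <ingress-deployment-name> stable/nginx-ingress --namespace nginx --set tcp.27017="<namespace>/<mongo-service-name>:27017"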
Values
The Identity, Reaction, Storefront, and Admin values.yaml files all have the same structure. The differences between them are in naming, port usage, environment variables, and ingress. The environment variables are used the same way as in the .env file but in a different format: an array of name/value pairs. The ingress part of the values.yaml defines how the deployment is reachable (host + path); the environment values only tell the deployment how to reach the other components and how to reference itself.
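For example, the environment section of a values.yaml could look like this (a sketch; the exact key depends on the chart templates, and the two variable names shown are ordinary Reaction .env variables):

env:
  # each .env entry becomes a name/value pair in the array
  - name: ROOT_URL
    value: https://<storefront-url>
  - name: MONGO_URL
    value: mongodb://<account-name>:<account-password>@<deployment-name>.<namespace>/<database>?authSource=admin&replicaSet=rs0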
Charts
Each chart can be deployed from local with the following command from the root of the project:
helm upgrade --install -n <namespace> <deployment-name> <path-to-chart> -f <path-to-values-yaml>
After you have run this command you can check the pod by running the following command:
kubectl get pods -n <namespace> (add -w to watch for changes)
If anything goes wrong, you can check the pod logs with this command:
kubectl logs -n <namespace> <podname>
or check the settings for the pod/deployment:
kubectl describe pod -n <namespace> <podname>
kubectl describe deployment -n <namespace> <deployment-name>
Hydra
In our experience, Hydra needs help creating the OAuth clients for storefront and admin; this can be done with the following two commands:
kubectl exec -it <hydra-pod-name> -n <namespace> -- hydra clients create --callbacks "https://<storefront-url>/callback" --grant-types authorization_code,refresh_token --id <storefront-id> --secret <storefront-secret-key> --response-types token,code --token-endpoint-auth-method client_secret_post --config /etc/config/config.yaml --endpoint http://<hydra-admin-service>.<namespace>:4445
kubectl exec -it <hydra-pod-name> -n <namespace> -- hydra clients create --callbacks "https://<admin-url>/callback","https://<admin-url>/silent_callback" --grant-types authorization_code --id reaction-admin --secret <admin-secret-key> --response-types code --config /etc/config/config.yaml --endpoint http://<hydra-admin-service>.<namespace>:4445
The secret key for admin isn't relevant, as admin uses a different validation strategy; for the storefront, however, the key needs to match the OAUTH2_CLIENT_SECRET environment variable.
Identity
Identity has a separate ingress for /account; this is to accommodate Reaction routing. We chose to run the Identity component on the same URL as the storefront but under the /idp path, which requires a rewrite target in the ingress part of the values.yaml. To find the value for the HYDRA_ADMIN_URL environment variable, we can run the following command:
kubectl get svc -n <namespace>
The connection string will be:
http://<hydra-admin-service>.<namespace>:4445
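The /idp rewrite mentioned above could look like this in the ingress part of the values.yaml (a sketch, assuming a capture-group rewrite with the nginx ingress controller; the exact keys depend on the chart templates):

ingress:
  annotations:
    kubernetes.io/ingress.class: nginx
    # send only the part after /idp/ to the Identity pods
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  hosts:
    - host: <storefront-url>
      paths:
        - /idp/(.*)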
Reaction(api)
We run this on the same URL as the storefront, with /api/ appended to the URL. The ingress rewrites this to the root URL to prevent issues. For the HYDRA_OAUTH2_INTROSPECT_URL environment variable we do the same as with Identity and fill in:
http://<hydra-admin-service>.<namespace>:4445/oauth2/introspect
Storefront & Admin
The Storefront and Admin run on different URLs to prevent Reaction routing from causing issues.