# Installation on Google Kubernetes Engine (GKE)

# Prerequisites

# Cluster Setup

These steps only need to be completed once per cluster.

  1. Log in to your Google Cloud account at https://cloud.google.com/

  2. Go to Kubernetes Engine → Clusters

  3. Select an existing project or create a new one

  4. Click Enable for the Kubernetes Engine API

  5. Once the API is enabled, click Create to create a cluster

  6. Click the Configure button for the GKE Standard option. Unless otherwise indicated, you do not need to change the default configuration options. (Choosing the GKE Autopilot option typically results in a cluster with too few initial resources and can prolong the startup process as the cluster adds resources on demand.)

  7. In the left menu, select default-pool → Nodes

  8. Select "e2-standard-2" as the Machine Type if you are setting up a basic test cluster for a single Entando Application. Additional CPU and memory may be required for a shared cluster containing multiple Entando Applications or to improve performance. Refer to the appendix below for details on clustered storage.

  9. Click Create. It may take a few minutes for the cluster to initialize.

  10. Click Connect

  11. Click Run in Cloud Shell. Alternatively, connect your local kubectl to the GKE cluster, as sketched after this list.

  12. Run `kubectl get node` to verify your connection. The output should list the nodes in your cluster.
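If you chose to connect a local kubectl instead of using Cloud Shell, gcloud can fetch the cluster credentials for you. A minimal sketch, assuming the gcloud CLI is installed and authenticated; CLUSTER-NAME and COMPUTE-ZONE are placeholders for your own values:

```shell
# Fetch credentials for the new cluster so local kubectl can reach it.
gcloud container clusters get-credentials CLUSTER-NAME --zone COMPUTE-ZONE

# Verify the connection; each node should report a Ready status.
kubectl get node
```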

# Install the NGINX Ingress Controller

The following steps install the NGINX Ingress Controller to manage the ingresses for Entando services deployed by the operator. These are the minimum steps to prepare the NGINX ingress using the Google Cloud Shell, a simple and adaptable configuration for most users and environments.

Users who require the GKE Ingress controller (this is rare) can follow the integration instructions provided by GKE and then customize the service definition created by the Entando Operator.

For installation using your local kubectl or to vary other settings, refer to the NGINX Ingress Controller documentation or the GCE-GKE tutorial.

TIP

If you created a private cluster, you need to configure your firewall accordingly. Refer to the NGINX Ingress Controller documentation and the GKE guide Adding firewall rules for specific use cases.
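For example, the NGINX admission webhook listens on port 8443, which the GKE control plane cannot reach from a private cluster by default. A minimal sketch of the kind of rule involved, using illustrative placeholder values; take the real source range, network, and node tags from your own cluster:

```shell
# Illustrative only: allow the control plane (master CIDR) to reach the
# NGINX admission webhook on the nodes over TCP port 8443.
gcloud compute firewall-rules create allow-nginx-admission \
    --action ALLOW \
    --direction INGRESS \
    --rules tcp:8443 \
    --source-ranges 172.16.0.0/28 \
    --target-tags YOUR-NODE-TAG \
    --network YOUR-NETWORK
```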

  1. Initialize your user as a cluster-admin:

```shell
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin \
    --user $(gcloud config get-value account)
```

  2. Install the ingress controller pods:

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
```

  3. Once the ingress-nginx pods are running, enter the following command to return the external IP address of your ingress controller. Use Ctrl+C to exit after the EXTERNAL-IP value for the ingress-nginx-controller service is displayed.

```shell
kubectl get service -n ingress-nginx --watch
```
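Illustrative output, with placeholder addresses; the EXTERNAL-IP starts out as `<pending>` and is updated once GCP assigns the load balancer address:

```
NAME                       TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)
ingress-nginx-controller   LoadBalancer   10.3.xx.xx   <pending>      80:31xxx/TCP,443:32xxx/TCP
ingress-nginx-controller   LoadBalancer   10.3.xx.xx   203.0.113.10   80:31xxx/TCP,443:32xxx/TCP
```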

TIP

NGINX is working correctly if a 404 Not Found NGINX error page is generated when accessing the EXTERNAL-IP from your browser. For a more complete test, you can set up a simple test application using your local kubectl. You can also customize the NGINX ingress to optimize the configuration for Entando.
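The same check can be run from the shell, assuming EXTERNAL-IP is replaced with the address reported above; the default NGINX backend should answer with a 404 until an application is deployed:

```shell
# Expect "HTTP/1.1 404 Not Found" from the default backend.
curl -i http://EXTERNAL-IP/
```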

# Install the Entando Custom Resources

  1. Download and apply the custom resource definitions (CRDs) to the cluster. This must be done once per cluster.

```shell
kubectl apply -f https://raw.githubusercontent.com/entando/entando-releases/v7.1.6/dist/ge-1-1-6/namespace-scoped-deployment/cluster-resources.yaml
```

  2. Create a namespace for the Entando Application. If you choose a name other than "entando," update the following commands wherever a namespace is provided.

```shell
kubectl create namespace entando
```

  3. Download the entando-operator-config template to configure the Entando Operator:

```shell
curl -sLO "https://raw.githubusercontent.com/entando/entando-releases/v7.1.6/dist/ge-1-1-6/samples/entando-operator-config.yaml"
```

  4. Edit entando-operator-config.yaml to add two properties:

```yaml
data:
  entando.requires.filesystem.group.override: "true"
  entando.ingress.class: "nginx"
```

  5. Apply the ConfigMap:

```shell
kubectl apply -f entando-operator-config.yaml -n entando
```

  6. Install the namespace-scoped resources:

```shell
kubectl apply -n entando -f https://raw.githubusercontent.com/entando/entando-releases/v7.1.6/dist/ge-1-1-6/namespace-scoped-deployment/namespace-resources.yaml
```

  7. Use `kubectl get pods -n entando --watch` to observe the base pods initialize. Exit this command via Ctrl+C.

```
$ kubectl get pods -n entando
NAME                                   READY   STATUS    RESTARTS   AGE
entando-k8s-service-86f8954d56-mphpr   1/1     Running   0          95s
entando-operator-5b5465788b-ghb25      1/1     Running   0          95s
```
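To double-check that step 1 registered the custom resources, you can also list the CRDs; the grep pattern assumes the entando.org API group used by the release manifests:

```shell
kubectl get crd | grep entando.org
```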

# Configure the Entando Application

  1. Download the entando-app.yaml template:

```shell
curl -sLO "https://raw.githubusercontent.com/entando/entando-releases/v7.1.6/dist/ge-1-1-6/samples/entando-app.yaml"
```

  2. Edit entando-app.yaml and replace YOUR-HOST-NAME with the EXTERNAL-IP of your ingress controller followed by the .nip.io suffix. See the EntandoApp custom resource overview for additional options.

```yaml
spec:
  ingressHostName: YOUR-HOST-NAME
```

e.g. `ingressHostName: 20.120.54.243.nip.io`
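For orientation, a minimal sketch of how that field sits in the full custom resource, assuming the example address above; the name and remaining fields come from the downloaded template:

```yaml
apiVersion: entando.org/v1
kind: EntandoApp
metadata:
  name: quickstart        # hypothetical name; keep the one in your template
spec:
  ingressHostName: 20.120.54.243.nip.io
```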

# Deploy Your Entando Application

You can now deploy your application to your GKE cluster.

  1. Deploy the Entando Application:

```shell
kubectl apply -n entando -f entando-app.yaml
```

  2. It can take 10 minutes or more for the application to fully deploy. You can watch the pods warming up with the command below. Use Ctrl+C to exit.

```shell
kubectl get pods -n entando --watch
```

  3. Once all the pods are in a running state, access the Entando App Builder at the following address:

```
http://YOUR-HOST-NAME/app-builder/
```
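As a quick reachability check before opening the browser, you can request the same address from the shell (illustrative; substitute your host name). Any HTTP response, whether a 200 or a redirect, indicates the ingress is routing to the application:

```shell
curl -sSI http://YOUR-HOST-NAME/app-builder/
```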

See the Getting Started guide for helpful login instructions and next steps.

# Appendix: Configuring Clustered Storage

In order to scale an Entando Application across multiple nodes, you must provide a storage class that supports the ReadWriteMany access mode, e.g. by using a dedicated storage provider like GlusterFS.

The example below provides clustered storage via GCP Cloud Filestore. However, it is best practice to expose an existing clustered file solution as a StorageClass.

TIP

You do not need clustered storage to scale an Entando Application if you schedule all instances to the same node, e.g. by tainting the other nodes (as sketched below), and use a ReadWriteOnce (RWO) access policy. Be aware of the impact on node resource allocation and on recovery should your application fail or become unreachable: if the node fails or is shut down, your application will be unresponsive while Kubernetes reschedules the pods to a different node.
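A minimal sketch of that scheduling approach, assuming a hypothetical node name and taint key; in practice you would taint every node except the one intended to host the application:

```shell
# Illustrative only: repel Entando pods from this node so all instances
# land on the single untainted node and can share RWO storage.
kubectl taint nodes OTHER-NODE-NAME entando-exclude=true:NoSchedule
```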

# Clustered Storage Using GCP Cloud Filestore

  1. In the left menu of the GCP portal, find the Storage section and select Filestore → Instances

  2. Enable the Filestore, if you haven't already

  3. Select Create Instance

  4. Adjust the field values from the defaults as needed. Take note of your instance ID.

  5. Once the instance is created, note the IP address of your NFS on the Filestore main page

  6. Install the provisioner that creates the StorageClass to enable deployment of Entando Applications. Use the commands below, replacing YOUR-NFS-IP with the IP address of your Filestore instance and YOUR-NFS-PATH with the file share path of your instance (e.g. /YOUR-FILE-SHARE-NAME).

```shell
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=YOUR-NFS-IP \
    --set nfs.path=YOUR-NFS-PATH
```

Learn about the provisioner and additional configuration options at https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

  7. Verify that your client provisioned successfully. This is indicated by the presence of the storage class nfs-client in the output of the following command.

```shell
kubectl get sc
```

Example output:

```
NAME                 PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
nfs-client           cluster.local/nfs-subdir-external-provisioner   Delete          Immediate              true                   37m
premium-rwo          pd.csi.storage.gke.io                           Delete          WaitForFirstConsumer   true                   27h
standard (default)   kubernetes.io/gce-pd                            Delete          Immediate              true                   27h
standard-rwo         pd.csi.storage.gke.io                           Delete          WaitForFirstConsumer   true                   27h
```

  8. Add the variables below to your operator ConfigMap:

```yaml
entando.k8s.operator.default.clustered.storage.class: "nfs-client"
entando.k8s.operator.default.non.clustered.storage.class: "standard"
```

  9. Deploy your Entando Application using the instructions above. The server instances will automatically use the clustered storage.
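To exercise the provisioner end to end before deploying, you can create a throwaway claim against the new storage class. A minimal sketch; the claim name is hypothetical:

```yaml
# test-rwx-claim.yaml -- illustrative only
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rwx-claim
spec:
  accessModes:
    - ReadWriteMany          # the access mode clustered Entando deployments require
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
```

Apply it with `kubectl apply -f test-rwx-claim.yaml -n entando`, confirm it reaches the Bound status via `kubectl get pvc -n entando`, then delete it with `kubectl delete pvc test-rwx-claim -n entando`.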