Nuxeo CMS on OpenShift Origin 3.7

Abstract

My OpenShift Origin 3.7 cluster hosts some great apps I use every day. One of them is Nuxeo, an open source CMS: I use it to manage (store, index, ..) my family documents and other documents I want to keep for work purposes.

This post was created for my knowledge base (as usual), but I found it interesting to publish because it was not so easy to bring this app up with all services running in a prod-like environment.

Please refer to this blog post for the requirements before starting the following process.

Step by step..

Create our (empty) project on OpenShift

On your OpenShift admin console, create a new project named nuxeo.

See original image

The project is created. Click on it..

See original image
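For reference, the same project can also be created from the command line once you are logged in with the oc client (as shown in the next step):

[root@cm0 ~]# oc new-project nuxeo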

Now connect to a management node, for example cm0.os.int.intra, log in to OpenShift and select your new project:

[root@cm0 ~]# oc login
Authentication required for https://cm.os.int.intra:443 (openshift)
Username: admin
Password: 
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-public
    kube-service-catalog
    kube-system
    logging
    management-infra
    nuxeo
    openshift
    openshift-ansible-service-broker
    openshift-infra
    openshift-node
  * test

Using project "test".
[root@cm0 ~]# oc project nuxeo
Now using project "nuxeo" on server "https://cm.os.int.intra:443".
[root@cm0 ~]# 

Create your persistent volumes

For my needs, I use two storage classes: the first one on a standard HDD Ceph pool, the second one on a fast SSD pool.

Create an OpenShift secret that will be used to provision persistent volumes on a Ceph cluster:

ceph-kube-secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
  namespace: kube-system
data:
  key: QVFDUEV872JlPOuYd2MDJVNkYwMFA1bi9QS1ZkZ0E9PQ==
type: kubernetes.io/rbd
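The key value must be the base64-encoded key of the Ceph user (kube here). Assuming a client.kube user already exists on your Ceph cluster, you can obtain it with:

[root@cm0 ~]# ceph auth get-key client.kube | base64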

Create it:
[root@cm0 ~]# oc create -f ceph-kube-secret.yaml

The two YAML files I’ve used are provided below (use your own mon IP addresses and variables…)

storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.210:6789,10.0.0.214:6789,10.0.0.218:6789
  adminId: kube
  adminSecretName: ceph-kube-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-kube-secret

storageclass-ssd.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: dynamic-ssd
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.210:6789,10.0.0.214:6789,10.0.0.218:6789
  adminId: kube
  adminSecretName: ceph-kube-secret
  adminSecretNamespace: kube-system
  pool: kubessd
  userId: kube
  userSecretName: ceph-kube-secret

Create the two classes:
[root@cm0 ~]# oc create -f storageclass.yaml
[root@cm0 ~]# oc create -f storageclass-ssd.yaml
[root@cm0 ~]# oc get storageclass
NAME                TYPE
dynamic (default)   kubernetes.io/rbd
dynamic-ssd         kubernetes.io/rbd

Finally, create a secret in your nuxeo project that will be used to mount the persistent volumes:

ceph-kube-secret-nuxeo.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-kube-secret
  namespace: nuxeo
data:
  key: QVFDUEV872JlPOuYd2MDJVNkYwMFA1bi9QS1ZkZ0E9PQ==
type: kubernetes.io/rbd

And import it:

[root@cm0 ~]# oc create -f ceph-kube-secret-nuxeo.yaml
secret "ceph-kube-secret" created

Create an Elasticsearch container

For this part of the work, you will need a recent Elasticsearch Docker image. Elastic.co supports Docker and provides up-to-date images.

But you might want to customize it for your environment in order to allow Nuxeo to use Elasticsearch.

Fortunately, OpenShift provides the toolbox to make this possible, including a smart embedded Docker registry.

We will use a master node to do this work. Still on cm0.os.int.intra, first download a recent image:

[root@cm0 ~]#  docker pull docker.elastic.co/elasticsearch/elasticsearch:5.6.5
Trying to pull repository docker.elastic.co/elasticsearch/elasticsearch ... 
5.6.5: Pulling from docker.elastic.co/elasticsearch/elasticsearch

85432449fd0f: Downloading [================>                                  ]  24.3 MB/73.43 MB
e0cbfc3faa3e: Downloading [================>                                  ] 23.21 MB/68.79 MB
d2c62af14bd0: Download complete 
429fdbedf602: Downloading [===============================>                   ] 21.35 MB/33.82 MB
c8d14fb24468: Waiting 
8c7507908328: Waiting 
073ffc56cfbb: Waiting 
7e46ec36fb55: Waiting 
51f1e02ed125: Waiting 
eac8f57db5d3: Waiting 

Second, configure OpenShift to allow "fixed" UIDs inside containers (by default, OpenShift uses random UIDs):

[root@cm0 ~]# oc adm policy add-scc-to-group anyuid system:authenticated
scc "anyuid" added to groups: ["system:authenticated"]

Then, for Nuxeo's needs, I will customize this Elasticsearch image with a Dockerfile.

I will use a very simple Elasticsearch config file (elasticsearch.yml); the following shows only the uncommented lines:

[root@cm0 5.6.5]# grep -v '^ *#' elasticsearch.yml 
cluster.name: nuxeo
node.name: ec01 
node.attr.rack: r1
network.host: 0.0.0.0
thread_pool.bulk.queue_size: 500

Put your Elasticsearch config file in a work directory, along with the following (very simple) Dockerfile:

[root@cm0 5.6.5]# cat Dockerfile
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.5
USER root
COPY elasticsearch.yml /usr/share/elasticsearch/config/
RUN chown elasticsearch:elasticsearch /usr/share/elasticsearch/config/elasticsearch.yml
USER elasticsearch

Build the custom container:

[root@cm0 5.6.5]# docker build --tag=elasticsearch-5.6.5-custom  .
Sending build context to Docker daemon 12.29 kB
Step 1 : FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.5
 ---> 6ffc97b39054
Step 2 : USER root
 ---> Running in c64a8f01836c
 ---> de907f5dd68c
Removing intermediate container c64a8f01836c
Step 3 : COPY elasticsearch.yml /usr/share/elasticsearch/config/
 ---> 839eb882c112
Removing intermediate container 7a7d2de751f6
Step 4 : RUN chown elasticsearch:elasticsearch /usr/share/elasticsearch/config/elasticsearch.yml
 ---> Running in 00a7333100c9
 ---> 3f47b5f294d9
Removing intermediate container 00a7333100c9
Step 5 : USER elasticsearch
 ---> Running in 002fe9d51ee8
 ---> 5d828e985436
Removing intermediate container 002fe9d51ee8
Successfully built 5d828e985436

The following command shows the result:

[root@cm0 5.6.5]# docker images
REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
elasticsearch-5.6.5-custom                      latest              78b0148a9731        4 seconds ago       555.1 MB
docker.io/openshift/origin-service-catalog      latest              88397ed6c5eb        8 days ago          283.7 MB
docker.elastic.co/elasticsearch/elasticsearch   5.6.5               6ffc97b39054        5 weeks ago         555.1 MB
docker.io/openshift/origin-pod                  v3.7.0              73b7557fbb3a        6 weeks ago         218.4 MB
[root@cm0 5.6.5]# 

Then you will upload the custom image to the OpenShift embedded registry.

register your custom Elasticsearch image

To do this part of the work, you will have to log in to your registry.

Connect to the registry console: this can be done with a URL provided by OpenShift. The route is available in the default project:

See original image

Use the registry console URL, and connect using your cluster admin creds.

On the overview pane, OpenShift shows you the two commands to run on your master node (docker login… and oc login --token…). Run these two commands on your master node, and you should be connected.
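For reference, those two commands look roughly like the following; the token is specific to your session and the registry hostname to your environment:

[root@cm0 5.6.5]# oc login https://cm.os.int.intra:443 --token=<your-token>
[root@cm0 5.6.5]# docker login -u admin -p $(oc whoami -t) docker-registry-default.apps.os.int.intra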

tag your image

[root@cm0 5.6.5]# docker tag elasticsearch-5.6.5-custom docker-registry-default.apps.os.int.intra/nuxeo/elasticsearch-5.6.5-custom

This command should return nothing.

push the custom Elasticsearch image to your registry
[root@cm0 5.6.5]# docker push docker-registry-default.apps.os.int.intra/nuxeo/elasticsearch-5.6.5-custom
The push refers to a repository [docker-registry-default.apps.os.int.intra/nuxeo/elasticsearch-5.6.5-custom]
432ec0861ba3: Mounted from test/elasticsearch-5.6.5-custom 
a459bbc01c5d: Mounted from test/elasticsearch-5.6.5-custom 
6573437ea578: Mounted from test/elasticsearch-5.6.5-custom 
f6db30a6d5da: Mounted from test/elasticsearch-5.6.5-custom 
b0ec2cfecedf: Mounted from test/elasticsearch-5.6.5-custom 
9f3ec9794aef: Mounted from test/elasticsearch-5.6.5-custom 
ed586282be90: Mounted from test/elasticsearch-5.6.5-custom 
274df55c1a7e: Mounted from test/elasticsearch-5.6.5-custom 
99e22142836e: Mounted from test/elasticsearch-5.6.5-custom 
620127729ace: Mounted from test/elasticsearch-5.6.5-custom 
d1be66a59bc5: Mounted from test/elasticsearch-5.6.5-custom 
latest: digest: sha256:8cae781492dc00412875cf5e74ecefeb10699cb6716bdb7189eec3829bb4097a size: 2617
[root@cm0 5.6.5]# 

Check the results on the registry console web UI:

See original image

Deploy your custom elastic container

Connect to the cluster admin console, select the nuxeo project and choose Add to Project / Deploy Image:

See original image

You will see that your custom image is available in the image streams:

See original image

No additional config is required: choose "Deploy" (you will have to provide a name without special characters).

See original image

Voila !
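For the record, the same deployment could be created from the CLI; this assumes the image stream name shown above and the deployment name elasticsearch:

[root@cm0 5.6.5]# oc new-app --image-stream=nuxeo/elasticsearch-5.6.5-custom --name=elasticsearch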

But at this point, your new container uses non-persistent storage: if you stop the container, all the data is lost.

provide Elasticsearch with a persistent Ceph volume

In the admin console, go to the Storage tab and click "Create Storage". You will be asked to choose one of the storage classes you created before, and a PV size..

See original image

Click "Create" and the PV should be created…

See original image
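Behind the scenes, the form creates a PersistentVolumeClaim; a minimal sketch of the equivalent YAML, with an illustrative name, class and size, would be:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
  namespace: nuxeo
spec:
  storageClassName: dynamic-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi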

Now add a persistent volume to your Elasticsearch deployment: go to the Deployments tab, select elasticsearch and, in the configuration pane, select "Add Storage".

See original image

Fill in the form as follows:

See original image

Click OK, then check the deployment config; it should be fine.
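The same mount can also be added with oc set volume; the claim name and mount path below are assumptions matching this setup:

[root@cm0 5.6.5]# oc set volume dc/elasticsearch --add --name=elasticsearch-data \
  --type=persistentVolumeClaim --claim-name=elasticsearch-data \
  --mount-path=/usr/share/elasticsearch/data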

Stop and restart the pod.

You will see back-off restarts. This is due to the security context chosen for this pod: remember that we enabled the anyuid SCC (security context constraint), which lets the container run with the fixed user id defined in the image.

As a consequence, the RBD persistent volume is not mounted with matching user and group ids.

We can adjust that at this point: on the master node, export the deployment config to a YAML file so we can edit it:

[root@cm0 5.6.5]# oc get dc -o yaml > elasticdc.yaml

Then go to the Deployments tab, click on the elasticsearch deployment and then on Actions / Delete (button on the top-right side of the web UI).

Edit the file elasticdc.yaml:

Replace

securityContext: {}

with:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

This will force the Ceph RBD device to be mounted with the right permissions.
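Alternatively, if you prefer not to delete and recreate the deployment config, the same change can be patched in place (this assumes the dc is named elasticsearch):

[root@cm0 5.6.5]# oc patch dc/elasticsearch -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1000,"fsGroup":1000}}}}}'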

Import the deployment:

[root@cm0 5.6.5]# oc create -f elasticdc.yaml 
deploymentconfig "elasticsearch" created

Then you will see the container run and mount the PV. Check the container (the default password for the elastic user, set by X-Pack, is changeme; we have not overridden it yet):

[root@cm0 5.6.5]# curl -u elastic -XGET 'elasticsearch.nuxeo.svc:9200/?pretty'
Enter host password for user 'elastic':
{
  "name" : "ec01",
  "cluster_name" : "nuxeo",
  "cluster_uuid" : "Lq0QyhqzSNybgOLhWHFkLA",
  "version" : {
    "number" : "5.6.5",
    "build_hash" : "6a37571",
    "build_date" : "2017-12-04T07:50:10.466Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

Create a PostgreSQL container

Once again, go to the console and add an image to the project: search for "centos/postgresql-94-centos7" in the "Image Name" field and click on the magnifying glass icon to search for the image.

Add the following variables:

See original image

And deploy. It will work.
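This image expects at least the POSTGRESQL_USER, POSTGRESQL_PASSWORD and POSTGRESQL_DATABASE variables; a CLI equivalent with illustrative values would be:

[root@cm0 ~]# oc new-app centos/postgresql-94-centos7 --name=postgresql-94-centos7 \
  -e POSTGRESQL_USER=nuxeo -e POSTGRESQL_PASSWORD=changeme -e POSTGRESQL_DATABASE=nuxeo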

Once again, we will want to add persistent storage. Define a PV:

See original image

And scale the postgresql deployment down to 0 pods:

See original image
See original image

Remove the non-persistent volume: click on "Remove" in the volumes area:

See original image

And mount the new storage:

See original image
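From the CLI, the remove and add operations would look roughly like this; the original emptyDir volume name and the claim name are assumptions (check them with oc set volume dc/postgresql-94-centos7):

[root@cm0 postgresql]# oc set volume dc/postgresql-94-centos7 --remove --name=postgresql-94-centos7-volume-1
[root@cm0 postgresql]# oc set volume dc/postgresql-94-centos7 --add --name=postgresql-data \
  --type=persistentVolumeClaim --claim-name=postgresql --mount-path=/var/lib/pgsql/data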

As above, export the deployment config and adjust the UID and GID so that the RBD device will be mounted with the correct permissions.

Replace the following:

securityContext: {}

with:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

(1000, for example)

Then destroy the deployment and recreate it:

[root@cm0 postgresql]# oc create -f postgresql-94-centos7.yaml 
deploymentconfig "postgresql-94-centos7" created

Then resume the rollouts (Actions button),

and scale the replicas to 1 (it has been at 0 since we added the PV).
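From the CLI, assuming the deployment config name above, that is:

[root@cm0 postgresql]# oc rollout resume dc/postgresql-94-centos7
[root@cm0 postgresql]# oc scale dc/postgresql-94-centos7 --replicas=1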

The pod will start a container, and PostgreSQL is up and running.

Good.