
Build Highly-Available PaaS on Redhat Openshift Container Platform using Ansible

By Bikram Singh / Nov 13,2018


In this blog we will focus purely on building a highly-available PaaS based on Redhat Openshift (Origin, the community edition). We will not cover what Openshift is or its concepts; to learn more, see the Openshift Architecture documentation at Openshift Concepts. We will use Docker as the container runtime engine, etcd as the datastore for the Kubernetes cluster state data, and Calico for networking. All the servers used are virtual machines running on Openstack.

Cluster Components

  • Openshift Origin v3.6.1
  • Kubernetes v1.6.1
  • Docker v1.12.6
  • Openvswitch Multi-tenant Networking
  • etcd v3.2.7


To complete this article, you will need the infrastructure below:

  • CentOS x86_64 7.3.1611 minimal on all the Openshift nodes
  • 3x Openshift masters
  • 3x Openshift worker nodes (minions)
  • 3x Infra nodes (router & registry)
  • Docker installed on all the nodes
  • NetworkManager service running on all the nodes

Openshift Master nodes -- -- --

Openshift Worker nodes -- -- --

Openshift Infra nodes --- --- ---

Openshift Architecture

To build a highly available Openshift cluster, we need to configure multiple components in a distributed fashion to avoid any single point of failure and to make them scalable. The diagram below shows the different Openshift infrastructure components.


Cluster Architecture

Below is the high-level architecture of the cluster we are going to build from scratch. We will use 3 masters and 3 infra nodes; the infra nodes will run the Openshift router and registry. To achieve HA for the master and infra nodes we will put an external load-balancer in front of them. Any load-balancer (virtual or physical) can be used; I will be using an Openstack-based load-balancer. I have created 2 VIPs on the load-balancer, one for the masters and one for the infra nodes. Openshift relies on DNS, so we also need to create corresponding DNS entries for both VIPs. You can use any supported number of worker nodes; I am using 3 in our cluster, which will run the applications (Docker containers).
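Any TCP load-balancer can provide the two VIPs. As a rough sketch only (I used an Openstack load-balancer; HAProxy here is an assumption, and only the hostnames come from this cluster), the equivalent HAProxy frontends might look like:

```haproxy
# Hypothetical HAProxy equivalent of the two VIPs

frontend master-api
    bind *:443
    mode tcp
    default_backend masters

backend masters
    mode tcp
    balance source
    server ros-master-1 ros-master-1:443 check
    server ros-master-2 ros-master-2:443 check
    server ros-master-3 ros-master-3:443 check

frontend infra-router
    bind *:80
    mode tcp
    default_backend infra

backend infra
    mode tcp
    server ros-infra-1 ros-infra-1:80 check
    server ros-infra-2 ros-infra-2:80 check
    server ros-infra-3 ros-infra-3:80 check
```

The DNS records for the master API/console and the application wildcard domain would then point at these two VIPs.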


Configuration and host preparation

We will install all the Openshift packages on the ros-master-1 node and then roll the installation out to the rest of the nodes using the Ansible playbook.

Create an SSH key and copy it to all the other nodes for the password-less access that Ansible needs. I already have a PEM file which I will use, because its public key is already installed on all the other nodes; however, you can create a new key using ssh-keygen and copy it to all the other nodes using the ssh-copy-id command.

[[email protected] ~]$ ssh-agent $SHELL
[[email protected] ~]$ ssh-add cloud.pem
Identity added: cloud.pem (cloud.pem)
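If you are generating a fresh key instead of reusing a PEM, the steps might look like the sketch below. The key path and the `centos` remote user are assumptions, not taken from this cluster:

```shell
# Generate a dedicated, passphrase-less key pair for Ansible (lab use only)
ssh-keygen -t rsa -b 4096 -N "" -f ./ansible_key -q

# Hostnames from this cluster's topology
hosts="ros-master-2 ros-master-3 ros-infra-1 ros-infra-2 ros-infra-3 ros-node-1 ros-node-2 ros-node-3"

# Print the distribution commands; remove the echo to actually run them
for h in $hosts; do
  echo ssh-copy-id -i ./ansible_key.pub "centos@$h"
done
```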

Let's verify that we are able to log in to the other nodes without a password:

[[email protected] ~]$ ssh ros-master-2
Last login: Fri Dec 1 01:40:25 2017 from
[[email protected] ~]$ exit
Connection to ros-master-2 closed.

[[email protected] ~]$ ssh ros-master-3
Last login: Fri Dec 1 01:46:57 2017 from
[[email protected] ~]$ exit
Connection to ros-master-3 closed.
[[email protected] ~]$

Install Openshift Repo, Packages and Dependencies

Run the below commands on ros-master-1:

sudo yum install -y centos-release-openshift-origin 

sudo yum install -y origin

sudo yum install -y atomic-openshift-utils

Install Docker on all the Openshift nodes using the below commands:

sudo yum install -y docker

sudo systemctl enable docker

sudo systemctl start docker

Create Ansible host Inventory file

Ansible uses an inventory file with Openshift-specific variables to build the Openshift cluster and install all the necessary components and software packages on all the nodes. The inventory file can be tweaked with the settings you need for your environment; refer to this Sample Ansible Inventory file, which has all the possible options you can change based on your needs. Below is the inventory file we will be using.


openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
openshift_docker_options='--selinux-enabled --insecure-registry'
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'
openshift_master_api_port=443
openshift_master_console_port=443
openshift_disable_check=memory_availability,docker_storage
openshift_portal_net=
osm_cluster_network_cidr=

[nodes]
ros-master-1 openshift_schedulable=True ansible_connection=local ansible_become=yes
ros-master-2 openshift_schedulable=True
ros-master-3 openshift_schedulable=True
ros-infra-1 openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
ros-infra-2 openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
ros-infra-3 openshift_node_labels="{'region': 'infra'}" openshift_schedulable=True
ros-node-1 openshift_node_labels="{'region': 'node'}" openshift_schedulable=True
ros-node-2 openshift_node_labels="{'region': 'node'}" openshift_schedulable=True
ros-node-3 openshift_node_labels="{'region': 'node'}" openshift_schedulable=True

[masters]
ros-master-1 ansible_connection=local ansible_become=yes
ros-master-2
ros-master-3

[etcd]
ros-master-1 ansible_connection=local ansible_become=yes
ros-master-2
ros-master-3

Note: I have used openshift_disable_check=memory_availability,docker_storage because my VMs have 4GB RAM each while the recommended minimum is 8GB; Ansible will fail the install if the memory check does not pass.

By default docker stores images on loopback devices, which is not recommended for production use, so the docker_storage check will fail. Either configure another docker storage driver supported by Openshift or, as in my case, disable the check because this is a lab environment.
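For reference, a common alternative on CentOS is to point docker-storage-setup at a spare block device so docker uses an LVM thin pool instead of loopback. The device and volume-group names below are assumptions, not from this lab:

```ini
# /etc/sysconfig/docker-storage-setup (hypothetical values)
DEVS=/dev/vdb
VG=docker-vg
```

Then run sudo docker-storage-setup and restart docker before running the playbook.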

Note: I am using htpasswd for web authentication, which requires us to create a user/password in the htpasswd file. Create the user and password with the below command on all 3 Openshift masters after the ansible-playbook run completes.

sudo htpasswd -c /etc/origin/htpasswd admin

Also, if you like, run the below command to make the admin user a cluster administrator:

oc adm policy add-cluster-role-to-user cluster-admin admin

Run the Ansible playbook

Run the ansible playbook from ros-master-1

ansible-playbook -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

The Ansible playbook will take approximately 10 to 15 minutes depending on the servers; when it completes, you should see every play finish with zero failures.

PLAY RECAP ************************************************
localhost : ok=13 changed=0 unreachable=0 failed=0
ros-infra-1 : ok=234 changed=32 unreachable=0 failed=0
ros-infra-2 : ok=234 changed=32 unreachable=0 failed=0
ros-infra-3 : ok=244 changed=56 unreachable=0 failed=0
ros-master-1 : ok=614 changed=81 unreachable=0 failed=0
ros-master-2 : ok=437 changed=51 unreachable=0 failed=0
ros-master-3 : ok=437 changed=51 unreachable=0 failed=0
ros-node-1 : ok=263 changed=33 unreachable=0 failed=0
ros-node-2 : ok=263 changed=33 unreachable=0 failed=0
ros-node-3 : ok=263 changed=33 unreachable=0 failed=0


[[email protected] ~]$ oc version
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
[[email protected] ~]$

[[email protected] ~]$ oc get nodes
NAME                STATUS    AGE       VERSION
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
                    Ready     14m       v1.6.1+5115d708d7
[[email protected] ~]$

[[email protected] ~]$ oc status
In project default on server

 (passthrough) (svc/docker-registry)
 dc/docker-registry deploys
  deployment #1 deployed 23 minutes ago - 1 pod

svc/kubernetes - ports 443, 53->8053, 53->8053

 (passthrough) (svc/registry-console)
 dc/registry-console deploys
  deployment #1 deployed 21 minutes ago - 1 pod

svc/router - ports 80, 443, 1936
 dc/router deploys
  deployment #1 deployed 26 minutes ago - 3 pods

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[[email protected] ~]$

[[email protected] ~]$ oc get all
is/registry-console docker-registry.default.svc:5000/default/registry-console latest 22 minutes ago

dc/docker-registry 1 1 1 config
dc/registry-console 1 1 1 config
dc/router 1 3 3 config

rc/docker-registry-1 1 1 1 24m
rc/registry-console-1 1 1 1 22m
rc/router-1 3 3 3 26m

routes/docker-registry docker-registry <all> passthrough None
routes/registry-console registry-console <all> passthrough None

svc/docker-registry <none> 5000/TCP 24m
svc/kubernetes <none> 443/TCP,53/UDP,53/TCP 15h
svc/registry-console <none> 9000/TCP 22m
svc/router <none> 80/TCP,443/TCP,1936/TCP 26m

po/docker-registry-1-rrpt9 1/1 Running 0 23m
po/registry-console-1-qhswt 1/1 Running 0 21m
po/router-1-1xn06 1/1 Running 0 25m
po/router-1-bjbr1 1/1 Running 0 25m
po/router-1-p4flt 1/1 Running 0 25m
[[email protected] ~]$

Let's verify on the web console:

(Openshift web console screenshots)

Let me know if you run into any issues. I will write another blog on Openshift advanced topics like SDN integration and using persistent storage for stateful applications. Thanks!
