This installation is done using the advanced (Ansible-based) method, which is the type of installation normally used in organizations. It requires Ansible and, typically, multiple machines. In this article we will see how to do an advanced installation and what the main points are, using an Ansible control machine plus a single all-in-one machine that hosts all our OSE components.
Let's take 2 machines with the following IP addresses,
172.31.40.1 ( ansible control machine )
172.31.3.125 ( all-in-one OSE machine )
1. Install Ansible and Docker on both machines. Make sure the Ansible version is higher than 2.4.2 ( OSE requires Ansible newer than 2.4.2 )
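On a CentOS/RHEL 7 host this typically comes down to something like the below (the EPEL repository is assumed for the Ansible package; adjust for your distribution),
yum install -y epel-release
yum install -y ansible docker
ansible --version
systemctl enable --now docker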
2. Configure public and private keys on “172.31.40.1”. Run the ssh-keygen command as below,
[root@ip-172-31-40-1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
The key's randomart image is:
+---[RSA 2048]----+
| ..+. =o .|
| o +o.%.=o.|
| . + .X.O +.|
| o =E.* . .|
| . S+++ |
| o .o.. |
| .o o.. . |
| o *.. o |
| + o.. |
+----[SHA256]-----+
Now id_rsa.pub and id_rsa files will be created in the /root/.ssh location.
3. Configure the all-in-one OSE machine with the public key of the Ansible control machine. Copy the contents of /root/.ssh/id_rsa.pub on “172.31.40.1” into the /root/.ssh/authorized_keys file on the OSE machine.
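A convenient way to do this, assuming password authentication is still enabled on the OSE machine, is ssh-copy-id,
[root@ip-172-31-40-1 ~]# ssh-copy-id root@172.31.3.125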
4. Create the hosts file and test the connection between the Ansible control machine and the OSE all-in-one machine.
5. First create a directory and, inside it, create 2 files as below,
[root@ip-172-31-40-1 ~]# cat ansible.cfg
[defaults]
hostfile=hosts
[root@ip-172-31-40-1 ~]# cat hosts
[dev]
172.31.3.125 ansible_ssh_user=root
6. Now test the connection using “ansible -i hosts dev -m ping”
[root@ip-172-31-40-1 ~]# ansible -i hosts dev -m ping
172.31.3.125 | SUCCESS => {
"changed": false,
"ping": "pong"
}
7. Make sure SELinux is not enforcing on the OSE machine (set it to permissive).
[root@ip-172-31-3-125 .ssh]# getenforce
Enforcing
[root@ip-172-31-3-125 .ssh]# setenforce 0
[root@ip-172-31-3-125 .ssh]# getenforce
Permissive
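Note that setenforce 0 only lasts until the next reboot; to make the change persistent you would also update /etc/selinux/config, for example,
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config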
8. Install git and wget using “yum install -y wget git”
9. Clone the openshift-ansible repository using “git clone https://github.com/openshift/openshift-ansible.git”,
Cloning into 'openshift-ansible'...
remote: Enumerating objects: 44, done.
remote: Counting objects: 100% (44/44), done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 130683 (delta 14), reused 20 (delta 3), pack-reused 130639
Receiving objects: 100% (130683/130683), 34.56 MiB | 20.85 MiB/s, done.
Resolving deltas: 100% (82147/82147), done.
10. Switch the checked-out branch to the OSE 3.9 release,
[root@ip-172-31-40-1 openshift-ansible]# git branch -r
origin/HEAD -> origin/master
origin/devel-40
origin/master
origin/openshift-ansible-3.7-fix
origin/release-1.1
origin/release-1.2
origin/release-1.3
origin/release-1.4
origin/release-1.5
origin/release-3.10
origin/release-3.11
origin/release-3.6
origin/release-3.6-hotfix
origin/release-3.7
origin/release-3.7.0day
origin/release-3.8
origin/release-3.9
origin/stage
origin/stage-130
origin/stage-131
origin/stage-132
origin/stage-132a
[root@ip-172-31-40-1 openshift-ansible]# git checkout release-3.9
Branch release-3.9 set up to track remote branch release-3.9 from origin.
Switched to a new branch 'release-3.9'
[root@ip-172-31-40-1 openshift-ansible]# git branch
master
* release-3.9
We switched specifically to the 3.9 release because the master branch actually tracks the development version.
11. Copy the ansible.cfg that we created previously into the openshift-ansible folder and modify it to contain,
hostfile=hosts
deprecation_warnings=False
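Put together, keeping the [defaults] section header from step 5, the file looks like this,
[defaults]
hostfile=hosts
deprecation_warnings=False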
12. Now create the hosts file in the same folder with the below contents,
[OSEv3:children]
masters
nodes
etcd
[masters]
172.31.3.125
[nodes]
172.31.3.125 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
[etcd]
172.31.3.125
[OSEv3:vars]
openshift_deployment_type=origin
openshift_disable_check=memory_availability,disk_availability
openshift_ip=172.31.3.125
ansible_service_broker_install=false
openshift_master_cluster_hostname=172.31.3.125
openshift_master_cluster_public_hostname=172.31.3.125
openshift_hostname=172.31.3.125
openshift_public_hostname=172.31.3.125
openshift_release=v3.9.0
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root
In the above file, the sections include,
Masters - This section contains the hosts for the installation of master services like the API server. It can be a single node or an odd number of hosts for high availability.
Nodes - This section contains the hosts for the installation of node components like the kubelet. These are the machines where the application containers run. One important point is that at least one of the nodes should be labeled as an infra node, so that the registry and router are installed on that machine. This labeling is mandatory.
Etcd - This section contains the hosts on which the etcd service is installed. These can be the same hosts as the masters, or an odd number of separate hosts.
OSEv3:vars - This section contains global variables for configuring various aspects of OpenShift, such as authentication, registry placement, etc.
OSEv3:children - This section lists all of the groups that are specified in the rest of the file.
Contents of OSEv3:vars - The following are the variables defined under the OSEv3:vars section,
ansible_ssh_user - The user account used by Ansible to connect to hosts via SSH.
openshift_master_identity_providers - Authentication backends. By default, OpenShift uses AllowAllPasswordIdentityProvider, effectively accepting all credentials, which is insecure and unacceptable in enterprise environments (see the example after this list).
openshift_deployment_type - The OpenShift distribution to install. Acceptable values are enterprise for Red Hat OpenShift Container Platform and origin for OpenShift Origin.
openshift_master_default_subdomain - The subdomain under which exposed services (routes) are created; when it is not set, routes fall under router.default.svc.cluster.local, as can be seen in the oc status output later in this article.
openshift_clock_enabled - OpenShift relies on timestamps to keep updates in sync through etcd, which requires NTP; this setting controls whether the chronyd daemon is activated.
openshift_node_labels - Labels can be assigned to nodes using this variable. As already discussed, at least one node should carry the {'region': 'infra'} label.
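For example, to replace the insecure AllowAll default with an HTPasswd-backed provider, a line such as the following could be added to OSEv3:vars (not used in this walkthrough, shown only as an illustration; the referenced htpasswd file must exist on the master),
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]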
13. Make sure NetworkManager is installed and running. Run the below commands,
yum install -y NetworkManager
service NetworkManager restart
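A quick way to confirm it is enabled and active before running the playbooks (systemd is assumed on these hosts),
systemctl enable NetworkManager
systemctl is-active NetworkManager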
14. Run the prerequisites playbook as below,
[root@ip-172-31-40-1 openshift-ansible]# ansible-playbook playbooks/prerequisites.yml
This will take some time and, once finished, will give you details like the below,
INSTALLER STATUS ******************************************************************************
Initialization : Complete (0:00:08)
Wednesday 21 November 2018 08:50:19 +0000 (0:00:00.019) 0:00:40.616 ****
==========================================================
os_firewall : need to pause here, otherwise the iptables service starting can sometimes cause ssh to fail
os_firewall : Install iptables packages --------------------------------------------------------- 7.14s
container_runtime : Start the Docker service ------------------------------------------------ 2.77s
Ensure openshift-ansible installer package deps are installed ----------------------------- 2.43s
openshift_excluder : Install docker excluder - yum ----------------------------------------- 1.90s
Gathering Facts ----------------------------------------------------------------------------------- 1.03s
Gather Cluster facts ------------------------------------------------------------------------------- 0.90s
openshift_repos : Configure origin gpg keys -------------------------------------------------- 0.53s
Initialize openshift.node.sdn_mtu --------------------------------------------------------------- 0.51s
container_runtime : Configure Docker service unit file --------------------------------------- 0.43s
Run variable sanity checks ------------------------------------------------------------------------ 0.41s
openshift_repos : Configure correct origin release repository ------------------------------- 0.41s
os_firewall : Start and enable iptables service ------------------------------------------------- 0.40s
container_runtime : Get current installed Docker version ------------------------------------ 0.39s
openshift_repos : refresh cache ------------------------------------------------------------------ 0.39s
os_firewall : Ensure firewalld service is not enabled ------------------------------------------ 0.38s
openshift_repos : Ensure libselinux-python is installed --------------------------------------- 0.36s
openshift_repos : Remove openshift_additional.repo file ------------------------------------- 0.31s
container_runtime : Set various Docker options ------------------------------------------------ 0.31s
Query DNS for IP address of 172.31.3.125 ------------------------------------------------------ 0.30s
PLAY RECAP *********************************************************************************
172.31.3.125 : ok=64 changed=16 unreachable=0 failed=0
localhost : ok=11 changed=0 unreachable=0 failed=0
Now run the main installer playbook using,
[root@ip-172-31-40-1 openshift-ansible]# ansible-playbook playbooks/deploy_cluster.yml
PLAY [Management Install Checkpoint End] *************************************************************************************
TASK [Set Management install 'Complete'] *************************************************************************************
Wednesday 21 November 2018 09:11:59 +0000 (0:00:00.047) 0:05:58.531 ****
skipping: [172.31.3.125]
PLAY RECAP *****************************************************************************
172.31.3.125 : ok=511 changed=137 unreachable=0 failed=0
localhost : ok=11 changed=0 unreachable=0 failed=0
INSTALLER STATUS *****************************************************************************
Initialization : Complete (0:00:14)
Health Check : Complete (0:00:10)
etcd Install : Complete (0:00:17)
Master Install : Complete (0:01:14)
Master Additional Install : Complete (0:00:17)
Node Install : Complete (0:01:13)
Hosted Install : Complete (0:01:10)
Web Console Install : Complete (0:00:27)
Service Catalog Install : Complete (0:00:51)
Wednesday 21 November 2018 09:11:59 +0000 (0:00:00.022) 0:05:58.553 ****
==========================================================
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ---------- 27.26s
openshift_web_console : Verify that the console is running ---------------------------------- 21.37s
openshift_node : Install Node package, sdn-ovs, conntrack packages ---------------------- 17.21s
template_service_broker : Verify that TSB is running ------------------------------------------ 11.69s
openshift_service_catalog : oc_process ---------------------------------------------------------- 10.36s
Run health checks (install) - EL --------------------------------------------------------------------- 9.59s
restart master api -------------------------------------------------------------------------------------- 8.62s
openshift_master : restart master api -------------------------------------------------------------- 8.33s
openshift_hosted : Create OpenShift router ------------------------------------------------------- 7.63s
openshift_node : Install Ceph storage plugin dependencies ------------------------------------ 7.33s
openshift_hosted : Ensure OpenShift pod correctly rolls out (best-effort today) ------------ 6.41s
openshift_node : Install iSCSI storage plugin dependencies ------------------------------------ 5.80s
openshift_version : Get available RPM version ---------------------------------------------------- 5.71s
openshift_excluder : Install docker excluder - yum ---------------------------------------------- 4.91s
openshift_manageiq : Configure role/user permissions ----------------------------------------- 3.87s
openshift_node : Install GlusterFS storage plugin dependencies ------------------------------ 3.35s
openshift_hosted : Add the secret to the registry's pod service accounts -------------------- 3.08s
openshift_hosted : Create default projects -------------------------------------------------------- 3.04s
openshift_node : Setting sebool container_manage_cgroup ------------------------------------ 2.38s
openshift_service_catalog : oc_process ------------------------------------------------------------ 2.17s
This will take a couple of minutes to download and run the containers. Once done, log into the master node and run the validation checks,
[root@ip-172-31-3-125 ~]# oc status
https://docker-registry-default.router.default.svc.cluster.local (passthrough) (svc/docker-registry)
dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v3.9.0
deployment #1 deployed 4 minutes ago - 1 pod
svc/kubernetes - 172.30.0.1 ports 443->8443, 53->8053, 53->8053
https://registry-console-default.router.default.svc.cluster.local (passthrough) (svc/registry-console)
dc/registry-console deploys docker.io/cockpit/kubernetes:latest
deployment #1 deployed 3 minutes ago - 1 pod
svc/router - 172.30.157.112 ports 80, 443, 1936
dc/router deploys docker.io/openshift/origin-haproxy-router:v3.9.0
deployment #1 deployed 4 minutes ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
[root@ip-172-31-3-125 ~]# oc get node
NAME STATUS ROLES AGE VERSION
172.31.3.125 Ready compute,master 6m v1.9.1+a0ce1bc657
[root@ip-172-31-3-125 ~]# oc get po -n default
NAME READY STATUS RESTARTS AGE
docker-registry-1-hdh7w 1/1 Running 0 5m
registry-console-1-5qp54 1/1 Running 0 4m
router-1-jtbdq 1/1 Running 0 5m
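As an additional sanity check (not part of the original output), the client and server versions can be compared against the release we installed,
[root@ip-172-31-3-125 ~]# oc version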
We can access the web console from https://172.31.3.125:8443 and log in with any user name and password, since the default AllowAllPasswordIdentityProvider accepts all credentials.
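The same applies on the command line; for example, a login like the one below works with any password (the developer user name is just an illustration),
[root@ip-172-31-3-125 ~]# oc login https://172.31.3.125:8443 -u developer -p anypassword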