Projects - Projects are like rooms for applications. If we want to run application containers, we first need a project, and then we create applications inside it. We can create any number of applications in a project. We can also define who can access the project as well as the applications in it.
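For example, access to a project is managed through roles. Below is a minimal sketch of granting another user access (the user name "bob" is hypothetical, and the project demo-testing is created later in this walkthrough):

# Grant "bob" (hypothetical user) read-only access to the project
oc adm policy add-role-to-user view bob -n demo-testing
# Or grant edit rights, so bob can create and modify applications in it
oc adm policy add-role-to-user edit bob -n demo-testing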
Login to the Server using,
[root@testing-machine vagrant]# oc login
Username: system
Password:
Login successful.
I used “system” as the username and “admin” as the password to log in to the server.
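At any point we can confirm which user the session is logged in as:

# Print the current user and the API server the session points at
oc whoami
oc whoami --show-server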
Get a List of existing projects -
[root@manja17-I12931 /]# oc get projects
NAME      DISPLAY NAME   STATUS
default                  Active
If you don't have any projects yet, you can create a new one by running,
oc new-project <projectname>
Create a Project -
[root@manja17-I12931 /]# oc new-project demo-testing
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git
to build a new example application in Ruby.
[root@testing-machine vagrant]# oc get projects
NAME           DISPLAY NAME   STATUS
demo-testing                  Active
Before creating applications in a project, we need to switch into that project first. In this case, switch to “demo-testing” using,
[root@manja17-I12931 /]# oc project demo-testing
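Running oc project with no argument prints the currently selected project, which is a quick way to confirm the switch:

# Show the project the session is currently pointed at
oc project
# Switch back to the default project later, if needed
oc project default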
Applications - Create an Application in the Above Project -
[root@manja17-I12931 /]# oc new-app --docker-image=docker.io/jagadesh1982/testing-service
--> Found Docker image 60c1b93 (7 weeks old) from docker.io for "docker.io/jagadesh1982/testing-service"
* An image stream will be created as "testing-service:latest" that will track this image
* This image will be deployed in deployment config "testing-service"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/testing-service --port=[port]' later
* WARNING: Image "docker.io/jagadesh1982/testing-service" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "testing-service" created
deploymentconfig "testing-service" created
--> Success
Run 'oc status' to view your app.
Check the status using,
[root@testing-machine vagrant]# oc status
dc/testing-service deploys istag/testing-service:latest
deployment #1 deployed 53 seconds ago - 1 pod
We are creating a pod from a docker image on Docker Hub. The image, testing-service (a public image), exposes port 9876. The pod is now up and running. Two other components were created along with it: a deployment config and the pod itself. Let's get those details using,
[root@manja17-I12931 /]# oc get dc
NAME              REVISION   DESIRED   CURRENT   TRIGGERED BY
testing-service   1          1         1         config,image(testing-service:latest)
[root@manja17-I12931 /]# oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
testing-service-1-sp7wr   1/1       Running   0          20s
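If the pod does not come up as expected, oc describe and oc logs are the usual first checks; a sketch using the pod name from the output above (the random suffix will differ on your cluster):

# Show image, ports, events and scheduling details for the pod
oc describe pod testing-service-1-sp7wr
# Print the container's stdout/stderr
oc logs testing-service-1-sp7wr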
If we want to create an application or run any oc command against another project, we can pass the --namespace flag to the oc command, as below,
[root@testing-machine vagrant]# oc get pods --namespace=demo-testing
NAME                      READY     STATUS    RESTARTS   AGE
testing-service-1-xh6cq   1/1       Running   0          2m
In the above snippet, we ran the get pods command against the namespace (project) demo-testing. Similarly, we can run any other command against a different namespace.
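The flag works with any oc verb, and -n is the accepted short form; a couple of sketches (the project other-project is hypothetical):

# -n is shorthand for --namespace
oc get dc -n demo-testing
# Deploy the same image into another project without switching to it
oc new-app --docker-image=docker.io/jagadesh1982/testing-service -n other-project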
Service - Now that we have created the pod with our application, we need a way to access it. In order to do that, we create a service.
A Kubernetes service serves as an internal load balancer. It identifies a set of replicated pods in order to proxy the connections it receives to them. Backing pods can be added to or removed from a service arbitrarily while the service remains consistently available, enabling anything that depends on the service to refer to it at a consistent internal address.
[root@testing-machine vagrant]# oc expose pod testing-service-1-xh6cq --port=8180 --target-port=9876 --name=myservice --generator=service/v1
service "myservice" exposed
I am creating a service named “myservice” that listens on port 8180 and links it to port 9876, the port opened on the container. This means that when we access the service on port 8180, the request is forwarded to the container listening on port 9876.
[root@testing-machine vagrant]# oc get services
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
myservice   ClusterIP   172.30.40.125   <none>        8180/TCP   4s
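A service forwards traffic to the pods selected by its label selector; we can inspect which pod endpoints are currently backing it:

# List the pod IP:port pairs the service proxies to
oc get endpoints myservice
# Show the service's selector, ports and session affinity
oc describe service myservice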
Now that we have our service available, let's access it at the cluster IP shown above,
[root@testing-machine vagrant]# curl 172.30.40.125:8180/info
{"host": "172.30.40.125:8180", "version": "0.5.0", "from": "10.0.2.15"}
We are able to successfully access the application running in the Pod.
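To see the load balancing described earlier in action, we can scale the deployment config up and hit the same service IP again; a sketch using the dc from this walkthrough:

# Add replicas; the service picks up the new pods automatically
oc scale dc/testing-service --replicas=3
# Repeated requests are now spread across the three pods
curl 172.30.40.125:8180/info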
Routes - We know that a service lets us reach an application running as a container in a pod, but only from within the OpenShift cluster. Routes are OpenShift's way of exposing that application outside the cluster: services provide access inside the OpenShift cluster, while routes provide access from outside it.
OpenShift does this by mapping an external hostname to a load balancer that distributes traffic among OpenShift services.
To create a route,
[root@testing-machine vagrant]# oc expose service myservice -l name=myroute --name=frontdomain
route "frontdomain" exposed
In the previous step we created the service “myservice”. Now we are creating a route named “frontdomain” (labelled name=myroute) that points to that service.
Check the existing routes using,
[root@testing-machine vagrant]# oc get routes
NAME          HOST/PORT                                  PATH   SERVICES    PORT
frontdomain   frontdomain-demo-testing.testing.xip.io           myservice   default
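We can inspect the route to see the hostname it was assigned and the service it points at:

# Show hostname, target service and TLS settings for the route
oc describe route frontdomain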
One thing to note here is that we cannot access the route directly, because there is no DNS record for the route hostname “frontdomain-demo-testing.testing.xip.io”. The easiest workaround is to add a static entry to the hosts file. Open /etc/hosts on the machine from which we want to access the route, and add a line as below,
jagadishwork$Fri Nov 23@ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
172.16.202.96 frontdomain-demo-testing.testing.xip.io
We have added the IP address of the machine where our OpenShift cluster is running and pointed the route hostname at it, as above.
Now open the browser and access the “frontdomain-demo-testing.testing.xip.io” URL.
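Alternatively, we can test from the command line on the same machine; a sketch assuming the application still serves the /info endpoint used earlier:

# Routes listen on port 80 by default, so no port is needed
curl http://frontdomain-demo-testing.testing.xip.io/info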
This is how the relationship between pods, services and routes looks: