Integration of Jenkins Dynamic Cluster With Kubernetes

Manish Verma


Hello Everyone,

This time I am back with another interesting article on DevOps automation using tools like Git, GitHub, a Jenkins dynamic cloud cluster, Docker, and Kubernetes. This article walks through a DevOps task in which I deploy a website on the Apache HTTPD web server, running as a deployment on Kubernetes and reachable from the outside world.

Let us understand these different tools.

Git is a distributed version-control system for tracking changes in source code during software development. It is designed for coordinating work among programmers, but it can be used to track changes in any set of files. Its goals include speed, data integrity, and support for distributed, non-linear workflows.

GitHub offers the distributed version control and source code management (SCM) functionality of Git, plus its own features. It provides access control and several collaboration features such as bug tracking, feature requests, task management, and wikis for every project.

Jenkins is a free and open source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat.

Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.

Running a Jenkins job requires resources, and when we have a large number of jobs it becomes impractical to run them all on the master, so we use clustering in Jenkins. This lets us run those jobs on different slaves. For these slaves we can use different cloud platforms, Docker containers, etc., which help in building the jobs.

For using a Docker container as a slave, there are two types of clusters:

  • Static cluster
  • Dynamic cluster

Static cluster: For a static cluster, we can use either a virtual machine or a Docker container as the slave. But we have to manage it manually so that the container or virtual machine works with Jenkins. In this setup, the container or machine keeps running continuously and consumes resources whether it is doing work or not.

Dynamic cluster: A dynamic cluster helps with automation. It can automatically start a Docker container when a job begins building and terminate that container after the job is built successfully. So it does not consume resources unnecessarily.

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

Setting up Dynamic Cluster of Jenkins:

The requirements are as follows:

  • RedHat Enterprise Linux Version 8
  • Docker Container Engine
  • Jenkins
  • Kubernetes
  • Git and GitHub

For setting up our dynamic cluster there are some prerequisites:

  • Two virtual machines (VMs), either CLI or GUI, with RedHat installed; yum must be configured to install locally available software and Docker.
  • Jenkins installed in one VM, which will act as the Master node, and Docker installed in the other VM, which will act as the Slave node.
  • Kubernetes installed; here I am using Minikube to run Kubernetes locally.
  • Some plugins installed in Jenkins, such as GitHub, Build Pipeline, Docker, and Yet Another Docker.
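As a quick sanity check of the Minikube prerequisite, the local cluster can be started and verified like this (a sketch; exact versions and driver may differ on your system):

```shell
# Start a local single-node Kubernetes cluster with Minikube
minikube start

# Confirm kubectl can reach the cluster and the node is Ready
kubectl cluster-info
kubectl get nodes
```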

Note: This setup is done on the local system; at least 8 GB of RAM is recommended for the entire setup.

That covers the requirements and prerequisites. Now it's time to set up.

First of all, launch your Slave node, then start and enable the Docker service using these commands:

systemctl start docker

systemctl enable docker

Now, to expose the Docker service to the master node, we need to edit its configuration file. So open the Docker service unit file (docker.service) using any text editor.

Here I am using the Vim text editor, available in the RedHat GUI; in the CLI we can also use the vi editor. Now add the snippet “-H tcp://0.0.0.0:4243” after “/usr/bin/dockerd -H fd://” on the ExecStart line.
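On a systemd-based system like RHEL 8, the same change can also be made with a drop-in file instead of editing the unit directly — a sketch, using the same port 4243 as in this article:

```shell
# Create a systemd drop-in that adds a TCP listener to dockerd
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243
EOF

# Reload systemd and restart Docker so the new flag takes effect
systemctl daemon-reload
systemctl restart docker
```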

Now come to our master node, where we will provide the IP address and port on which the slave's Docker service is running.

Launch the Jenkins server on the master node and open its web UI using the master node's IP address and port 8080. Now go to the “Manage Jenkins” option, then click on “Manage Nodes and Clouds”.

Click on the “Configure Clouds” option provided on the left side.

We will land on the Configure Clouds page, where we tell Jenkins about our slave node by providing its IP address and port. Hit “Test Connection” to check whether the slave is reachable.
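Independently of the Jenkins “Test Connection” button, the exposed Docker API can be checked from the master node with the commands below (`<slave-ip>` is a placeholder — substitute the slave node's actual IP address):

```shell
# Ping the slave's Docker remote API; a healthy daemon replies "OK"
curl http://<slave-ip>:4243/_ping

# Or point the local docker CLI at the remote daemon
docker -H tcp://<slave-ip>:4243 info
```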

Here I have used my own Docker image, which will be launched on the slave to execute a job whenever one is created. This image has an SSH server running.

At this point, the master node and slave node of the Jenkins dynamic cluster are ready to use.

Now I am going to describe my task, which is as follows:

Steps to proceed as:

1. Create a container image that has Linux and the other basic configuration required to run as a slave for Jenkins (for example, here we require kubectl to be configured).

2. When we launch a job, it should automatically start on a slave based on the label provided, for the dynamic approach.

3. Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.

4. Job1: Pull the GitHub repository automatically when a developer pushes to GitHub, and perform the following operations:

1. Create a new image dynamically for the application and copy the application code into that Docker image.

2. Push that image to Docker Hub (a public repository).
(The GitHub repository contains the application code and the Dockerfile to create the new image.)

5. Job2 (should run on the dynamic slave of Jenkins configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:

1. If launching for the first time, create a deployment of the pod using the image created in the previous job. If the deployment already exists, roll out the existing pod, ensuring zero downtime for the user.

2. If the application is created for the first time, expose it. Otherwise, don't expose it again.

Now it's time to get hands-on with the code and perform this task step by step, following the description.

Step 1. I have created a container image which has CentOS Linux, kubectl, and an SSH server installed and configured.
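A Dockerfile for such a slave image might look roughly like this — a sketch, not my exact file; the base image, package names, kubectl version, and image tag are all assumptions (Java is included because the Jenkins agent needs it):

```shell
# Write an assumed Dockerfile for a CentOS-based Jenkins slave
# with kubectl and an SSH server, then build and tag the image
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y openssh-server openssh-clients java-11-openjdk curl && \
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl && \
    chmod +x kubectl && mv kubectl /usr/local/bin/ && \
    ssh-keygen -A && echo 'root:redhat' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
EOF

docker build -t <your-dockerhub-id>/jenkins-k8s-slave:v1 .
```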

Image creating
Image created and tagged successfully

After creating this image, I uploaded it to Docker Hub.

As a developer, I have created one HTML page and a Dockerfile for testing this setup and pushed them to my GitHub profile. This GitHub repository is automatically pulled by Job1.

Initializing GitHub repository and adding file and remote url
creating post-commit file so that code will automatically push when we commit
All files pushed to the GitHub
GitHub part completed
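The GitHub steps shown above can be sketched as follows (the repository URL and file names are placeholders):

```shell
# Initialize the repository and connect it to GitHub
git init
git add index.html Dockerfile
git remote add origin https://github.com/<user>/<repo>.git

# Post-commit hook: push automatically on every commit
cat > .git/hooks/post-commit <<'EOF'
#!/bin/bash
git push origin master
EOF
chmod +x .git/hooks/post-commit

git commit -m "first webpage version"   # the hook pushes it for us
```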

Here, as per the description, I have created the two jobs.

Jobs created

Job1: Pull the GitHub repository automatically and dynamically create a new image for deploying the webpage on a web server running on Kubernetes via the slave node, then push the image to Docker Hub.

Description of Job1
Providing the GitHub url for cloning the developers code
Setting the Build Trigger
Build Action to be executed

Here we are cloning the GitHub repository to the workspace directory.

The details provided for creating a new Docker image, which also contains the developer's code.

Codes for creating the Docker file for the webserver
GitHub repository pulled successfully
Creating the Docker image

The Docker image is successfully built and pushed to Docker Hub.
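The Job1 build step boils down to a shell script along these lines (the image name and Docker Hub credentials are placeholders, and the exact build commands in my job may differ):

```shell
# Executed by Jenkins in the job workspace after the Git clone:
# build the web-server image from the developer's Dockerfile,
# then push it to Docker Hub
docker build -t <your-dockerhub-id>/httpd-web:latest .
docker login -u <your-dockerhub-id> -p <password>
docker push <your-dockerhub-id>/httpd-web:latest
```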

Job2: Launch the web server on top of the Kubernetes cluster via the dynamic slave of Jenkins. If launching for the first time, create a deployment of the pod using the image created in the previous job and expose it. If the deployment already exists, roll out the existing pod, ensuring zero downtime for the user.

Description of Job2

The label of the slave node is provided so that the job runs on Kubernetes via the slave node.

Job2 is triggered by Job1

Here we are deploying the developer's code on the web server using the Docker image created by the previous job, and I am also testing whether my application is running fine.
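The create-or-rollout logic of Job2 can be sketched as a shell build step like this — the service name matches the `web-service` used later in this article, while the deployment name and image are assumptions:

```shell
# If the deployment does not exist yet, create and expose it;
# otherwise trigger a rolling update for zero downtime
if ! kubectl get deployment web > /dev/null 2>&1
then
    kubectl create deployment web --image=<your-dockerhub-id>/httpd-web:latest
    kubectl expose deployment web --name=web-service --type=NodePort --port=80
else
    kubectl rollout restart deployment/web
    kubectl rollout status deployment/web
fi
```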

Deployment codes for webserver
Job2 is running on slave node
Docker container created for executing Job2
Deployment created by Kubernetes and also exposed
Testing our webserver

At this point all the jobs have been executed successfully, as can be seen in the Build Pipeline view of this task.

Build Pipeline

Now, this is the time to check the status of Kubernetes, the final output of the whole setup created here, and the task performed.

Deployment running well on Kubernetes

To check the output of the webpage, visit the URL with the proper port number in your favourite web browser. To get the URL with the port, run this command:

minikube service web-service --url

you will get:

http://192.168.99.100:31000

With this output, we have reached the end of this task and achieved its aim.

Note: For more on the configuration of the SSH web server and of Kubernetes, see my other articles.

— — — — — — — — — — Task completed — — — — — — — — — —

Some words for my mentor:

I am very grateful to my mentor, Mr. Vimal Daga, for sharing his superb knowledge and great understanding of these topics with me and many other people like me. I just want to say that the name “Mr. Vimal Daga” means “Excellence”.

For the code files and Dockerfiles, visit my GitHub profile:

The Dockerfile for the Kubernetes and SSH server image is available here:

The Dockerfile for the Apache HTTPD web server and the HTML files are available here:

For the Docker images, visit my Docker Hub profile:

For the Kubernetes and SSH server configured image:

For the HTTPD Apache web server configured image:

Queries, Suggestions and Feedback are always welcome!

Find me on LinkedIn as:

Thank you very much for reading this article. Share it with others so that more people can benefit from it.
