Kubernetes has become the de facto standard container orchestration platform to run applications on, and Java applications are no exception. When you use a PaaS provider for hosted Kubernetes, that provider sometimes also provides a CI/CD solution. However, this is not always the case. When hosting Kubernetes yourself, you also need to implement a CI/CD solution.
Jenkins is a popular tool for implementing CI/CD solutions, and it runs quite easily in a Kubernetes environment. Once you have Jenkins installed, you need a Git repository to deploy your code from, a Jenkins pipeline definition, tools to wrap your Java application in a container, a container registry to deploy your container to, and some files that describe how the container should be deployed and run on Kubernetes. In this blog post I'll describe a simple end-to-end solution to deploy a Java service to Kubernetes. This is a minimal example, so there is much room for improvement; it is meant to get you started quickly.
The Java application
I've created a simple Spring Boot service. You can find the code here. I hosted it on GitHub since it was easy to use as a source for Jenkins.
I needed something to wrap my Java application inside a container. There are various plug-ins available, such as the Spotify dockerfile-maven plug-in (here) and the fabric8 docker-maven-plugin (here). Both require access to a Docker daemon, though. This can be complicated, especially when running Jenkins slaves within Kubernetes. There are workarounds, but I did not find any that seemed both easy and secure. I decided to go for Google's Jib to build my containers since it does not have that requirement.
Docker build flow:
Jib build flow:
The benefits of reducing dependencies for the build process are obvious. In addition, Jib does some smart things, such as splitting the Java application into different container layers; see here. This reduces the amount of storage required for building and deploying new versions, since some of the layers, such as the dependencies layer, often don't change and can be cached. This can also reduce build time. As you can see, Jib does not use a Dockerfile, so the logic usually found in the Dockerfile lives in the plugin configuration inside the pom.xml file. Since I did not have a private registry available at the time of writing, I decided to use DockerHub. You can find the configuration for using DockerHub inside the pom.xml. It uses environment variables set by the Jenkins build for the credentials (and only in the Jenkins slave, which is created for the build and destroyed afterwards). This seemed more secure than passing them on the Maven command line.
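To give an idea of what this looks like, below is a minimal sketch of the jib-maven-plugin configuration in the pom.xml. The image name and plugin version are illustrative; the credentials are resolved by Maven from the environment variables set by the Jenkins build.

```xml
<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <!-- illustrative version; use whatever is current -->
    <version>2.5.2</version>
    <configuration>
        <to>
            <!-- illustrative target image on DockerHub -->
            <image>docker.io/${env.DOCKER_USERNAME}/spring-boot-demo</image>
            <auth>
                <!-- resolved from environment variables set by the Jenkins build -->
                <username>${env.DOCKER_USERNAME}</username>
                <password>${env.DOCKER_PASSWORD}</password>
            </auth>
        </to>
    </configuration>
</plugin>
```

With this in place, `mvn jib:build` builds and pushes the image without needing a Docker daemon.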
Note that Spring buildpacks could provide similar functionality. I have not looked into them yet, though.
Installing Jenkins
For my Kubernetes environment I have used the setup described here. You also need a persistent storage solution as a prerequisite for Jenkins. In a SaaS environment this is usually provided, but if it is not, or you are running your own installation, you can consider using OpenEBS. How to install OpenEBS is described here. kubectl (plus the Kubernetes configuration in .kube/config) and helm need to be installed on the machine from which you are going to perform the Jenkins deployment.
Once you have a storage class available, you can continue with the installation of Jenkins.
First create a PersistentVolumeClaim to store the Jenkins master persistent data. Again, this is based on the storage class solution described above.
kubectl create ns jenkins
kubectl create -n jenkins -f - <<END
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: openebs-sc-statefulset
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
END
Next, install Jenkins. Note that the Jenkins repository with the most up-to-date Helm charts has recently moved.
cat << EOF > jenkins-config.yaml
persistence:
  enabled: true
  size: 5Gi
  accessMode: ReadWriteOnce
  existingClaim: jenkins-pv-claim
  storageClass: "openebs-sc-statefulset"
EOF
helm repo add jenkinsci https://charts.jenkins.io
helm install my-jenkins-release -f jenkins-config.yaml jenkinsci/jenkins --namespace jenkins
Now you will get a message like:
Get your 'admin' user password by running:
printf $(kubectl get secret --namespace jenkins my-jenkins-release -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
Get the Jenkins URL to visit by running these commands in the same shell and login!
export POD_NAME=$(kubectl get pods --namespace jenkins -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=my-jenkins-release" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace jenkins port-forward $POD_NAME 8080:8080
Now visit http://localhost:8080 in your browser
And login with admin using the previously obtained password
Configuring Jenkins
The pipeline
Since I'm doing 'configuration as code', I created a declarative Jenkins pipeline and put it in GitHub next to the service I wanted to deploy. You can find it here. As you can see, the pipeline has several dependencies:
- presence of tool configuration in Jenkins
- the Kubernetes CLI plugin (withKubeConfig), which makes the Kubernetes configuration available within the Jenkins slaves during the build process
- the Pipeline Maven Integration plugin (withMaven), which archives the Maven artifacts created, such as test reports and JAR files
Tool configuration
JDK
The default JDK plugin can only download old Java versions from Oracle; JDK 11, for example, is not available this way. I added a new JDK as follows:
I specified a download location for the JDK. There are various ones available, such as AdoptOpenJDK, the one available from Red Hat, or Azul Systems. Inside the archive I checked which subdirectory the JDK was extracted to, and specified this subdirectory in the tool configuration.
Please note that downloading the JDK during each build can be slow and prone to errors (suppose the download URL changes). A better way is to make it available as a mount inside the Jenkins slave container. For this minimal setup I didn't do that though.
You also need to define a JAVA_HOME variable pointing to a location like the one indicated below. Why? Well, you also want Maven to use the same JDK.
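As a sketch, JAVA_HOME can also be set from the pipeline itself by resolving the configured tool; the tool name 'jdk-11' is assumed from the tool configuration above.

```groovy
environment {
    // resolve the installation directory of the configured JDK tool
    // and point JAVA_HOME at it, so Maven uses the same JDK
    JAVA_HOME = "${tool 'jdk-11'}"
}
```

This block goes inside the `pipeline { }` definition, next to the `tools` and `stages` sections.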
Maven
Making the Maven tool available is luckily easy.
The name of the Maven installation is referenced in the Jenkins pipeline like:
tools {
    jdk 'jdk-11'
    maven 'mvn-3.6.3'
}
stages {
    stage('Build') {
        steps {
            withMaven(maven : 'mvn-3.6.3') {
                sh "mvn package"
            }
        }
    }
}
Kubectl
For kubectl there is no tool definition available in the Jenkins configuration, so I did the following:
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl apply -f k8s.yaml'
As you can see, a k8s.yaml file is required.
You can generate it as follows. First install a load balancer. Mind the IPs: they are specific to my environment, so you might need to provide your own. Then create a deployment and a service. I've added the Ingress myself. The complete file can be found here.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f - <<END
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.150-192.168.122.255
END
kubectl create deployment spring-boot-demo --image=docker.io/maartensmeets/spring-boot-demo --dry-run=client -o=yaml > k8s.yaml
echo --- >> k8s.yaml
kubectl create service loadbalancer spring-boot-demo --tcp=8080:8080 --dry-run=client -o=yaml >> k8s.yaml
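The commands above produce a k8s.yaml that looks roughly like the following (trimmed sketch; kubectl also emits some empty status and creationTimestamp fields):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: spring-boot-demo
  name: spring-boot-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-boot-demo
  template:
    metadata:
      labels:
        app: spring-boot-demo
    spec:
      containers:
      - image: docker.io/maartensmeets/spring-boot-demo
        name: spring-boot-demo
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-boot-demo
  name: spring-boot-demo
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: spring-boot-demo
```

The `type: LoadBalancer` service is what MetalLB picks up to assign an external IP from the configured address pool.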
Credential configuration
Kubernetes
In order for Jenkins to deploy to Kubernetes, Jenkins needs credentials. An easy way to achieve this is by storing a config file (named 'config') in Jenkins. This is the file usually used by kubectl and found in .kube/config. It allows Jenkins to apply YAML configuration to a Kubernetes instance. The file can then be referenced from a Jenkins pipeline with the Kubernetes CLI plugin, as in the snippet below.
withKubeConfig([credentialsId: 'kubernetes-config']) {
sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"'
sh 'chmod u+x ./kubectl'
sh './kubectl apply -f k8s.yaml'
}
DockerHub
I used DockerHub as my container registry. The pom.xml file references the environment variables DOCKER_USERNAME and DOCKER_PASSWORD, but how do we set them from the Jenkins configuration? By storing them as credentials, of course! In the pipeline you can access them as follows:
withCredentials([usernamePassword(credentialsId: 'docker-credentials', usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWORD')]) {
sh "mvn jib:build"
}
This sample stores credentials directly in Jenkins. You can also use the Jenkins Kubernetes Credentials Provider to store credentials in Kubernetes as secrets. This provides some benefits in managing the credentials; for example, changes are easy to script with kubectl. A challenge is giving the Jenkins user sufficient, but not excessive, privileges on Kubernetes.
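As a sketch, a username/password credential stored as a Kubernetes secret for this provider could look like the one below. The secret name becomes the credential id in Jenkins; the label and annotation names are what the plugin scans for, and the values here are placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # the secret name becomes the credential id in Jenkins
  name: docker-credentials
  labels:
    # tells the plugin to expose this secret as a username/password credential
    "jenkins.io/credentials-type": usernamePassword
  annotations:
    "jenkins.io/credentials-description": "DockerHub credentials"
type: Opaque
stringData:
  username: myuser
  password: mypassword
```

Apply it with `kubectl apply -f` in the namespace Jenkins watches, and it shows up in Jenkins like any other credential.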
GitHub
In order to access GitHub, some credentials are also required:
Jenkins job configuration
The configuration of the Jenkins job is actually the least exciting part. The pipeline is defined outside of Jenkins; the only thing Jenkins needs to know is where to find the sources and the pipeline.
Create a Multibranch Pipeline job. Multibranch is quite powerful since it allows you to build multiple branches with the same job configuration.
Open the job configuration and specify the Git source.
The build is based on the Jenkinsfile which contains the pipeline definition.
After you have saved the job it will start building immediately.
Building and running
Webhook configuration
What is lacking here is webhook configuration; see for example here. A webhook causes Jenkins builds to be triggered when branches are created, pull requests are merged, commits happen, and so on. Since I'm running Kubernetes locally, I do not have a publicly exposed endpoint as a target for the GitHub webhook. You can use a simple service like smee.io to get a public URL and forward it to your local Jenkins. An added benefit is that it generally does not care about things like firewalls (it is similar to ngrok, but webhook-specific and does not require an account).
After you have installed the smee CLI and have the Jenkins port-forward running (the command that makes Jenkins available on port 8080), you can run the following (your URL will of course differ):
smee -u https://smee.io/z8AyLYwJaUBBDA5V -t http://localhost:8080/github-webhook/
This starts the Webhook proxy and forwards requests to the Jenkins webhook URL. In GitHub you can add a Webhook to call the created proxy and make it trigger Jenkins builds.
Next you can confirm it works from Smee and from GitHub. If I now create a new branch in GitHub, it will appear in Jenkins and start building.
If you prefer a nice web interface, I can recommend Blue Ocean for Jenkins. It can easily be added by installing the Blue Ocean plugin.
Finally
After you've done all of the above, a GitHub commit fires off a webhook, which Smee forwards to Jenkins. This triggers a Multibranch Pipeline build, which builds the Java service, wraps it in a container using Jib, and deploys the container to Kubernetes as a deployment and a service. The service running in your Kubernetes environment can then be accessed via the MetalLB load balancer.