Wednesday, September 23, 2020

Kubernetes: Building and deploying a Java service with Jenkins

Kubernetes has become the de facto container orchestration platform to run applications on, and Java applications are no exception. When you use a PaaS provider for hosted Kubernetes, that provider sometimes also offers a CI/CD solution, but this is not always the case. When you host Kubernetes yourself, you also need to implement a CI/CD solution yourself.

Jenkins is a popular tool for implementing CI/CD solutions, and it runs quite easily in a Kubernetes environment. Once you have Jenkins installed, you need a Git repository to deploy your code from, a Jenkins pipeline definition, tools to wrap your Java application in a container, a container registry to push your container image to, and some files that describe how the container should be deployed and run on Kubernetes. In this blog post I'll describe a simple end-to-end solution to deploy a Java service to Kubernetes. This is a minimal example, so there is much room for improvement; it is meant to get you started quickly.
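To make the moving parts concrete, below is a minimal, hedged sketch of the steps such a pipeline automates. The image name, registry and manifest paths are placeholders, not the setup from the full post.

 # Build the Java service (assuming a Maven project)
 mvn clean package
 # Wrap the application in a container image and push it to a registry (placeholder names)
 docker build -t registry.example.com/myservice:1.0 .
 docker push registry.example.com/myservice:1.0
 # Deploy using Kubernetes manifests kept in the Git repository (placeholder paths)
 kubectl apply -f k8s/deployment.yaml -f k8s/service.yaml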

Friday, July 31, 2020

OpenEBS: cStor storage engine on KVM

OpenEBS provides a Kubernetes-native distributed storage solution which is friendly to developers and administrators. It is completely open source and part of the CNCF. Previously I wrote about installing and using OpenEBS with the Jiva storage engine on Canonical's Charmed Kubernetes distribution. The Jiva storage class uses storage inside managed pods. cStor, however, can use raw disks attached to Kubernetes nodes. Since I was trying out Kubespray (also a CNCF project) on KVM, and it is relatively easy to attach raw storage to KVM nodes, I decided to give cStor a try. cStor (which uses ZFS behind the scenes) is also the more recent and more robust storage engine, suitable for more serious workloads. See here. You can download the scripts I used to set up my Kubernetes environment here.
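As an illustration of how easy attaching raw storage to a KVM node is, a minimal sketch (the VM name, image path and size are placeholders, not the exact commands from the scripts):

 # Create an extra raw disk image and attach it to a KVM node for cStor to use
 qemu-img create -f raw /var/lib/libvirt/images/node1-cstor.img 20G
 virsh --connect qemu:///system attach-disk node1 /var/lib/libvirt/images/node1-cstor.img vdb --persistent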


Thursday, July 30, 2020

Production-ready Kubernetes on your laptop: Kubespray on KVM

There are various options to install a production-like Kubernetes distribution on your laptop. Previously I tried the Canonical stack (Juju, MAAS, Charmed Kubernetes) for this. This worked nicely, but it felt a bit Canonical-specific, and with the recent discussions around Snaps and the Canonical Snap Store, I decided to look at another way to install Kubernetes on my laptop in a way that approximates a production environment. Of course, I first needed to get my virtual infrastructure (KVM hosts) ready before I could use Kubespray to deploy Kubernetes. My main inspirations for this were two blog posts here and here. As with Charmed Kubernetes, the installed distribution is bare: it does not contain things like a private registry, distributed storage (read here) or a load balancer (read here). You can find my scripts here (they are suitable for Ubuntu 20.04).

Provisioning infrastructure

This time I'm not depending on external tools such as Vagrant or MAAS to provide me with machines, but doing it 'manually' with some simple scripts. The idea is relatively simple: use virt-install to create KVM VMs and install them using a Kickstart script which creates an ansible user and registers a public key so you can log in as that user. Kubespray can then log in and use Ansible to install Kubernetes inside the VMs.
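A hedged sketch of what such a virt-install invocation could look like; the create-vm script in the repository differs in detail, and the VM name and sizes below are placeholders:

 # Boot a VM from the remote installer URL and let the injected Kickstart file
 # perform an unattended installation
 virt-install --connect qemu:///system --name node1 --vcpus 2 --memory 4096 \
   --disk size=20 --os-variant ubuntu20.04 \
   --location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/' \
   --initrd-inject ubuntu.ks --extra-args "ks=file:/ubuntu.ks console=ttyS0" \
   --graphics none --noautoconsole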

As indicated before, I mainly used the scripts provided and described here, but created my own versions to fix some challenges I encountered. The scripts are thin wrappers around KVM-related commands, so there is not much to worry about in terms of maintenance. You can execute the virt-install command multiple times to create more hosts.

What did I change in the scripts I used as base?

  • I used Ubuntu 20.04 as the base OS instead of 18.04, on which the scripts and blog post were based. Specifying the ISO file in the virt-install command in the create-vm script did not work for me, so I installed by specifying a remote URL instead, which did the trick: 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/'.
  • The Kickstart file contained a public key for which I did not have the private key and an encrypted password for which I did not have the plaintext, so I inserted my own public key (generated specifically for this purpose) and encrypted my own password.
  • It appeared that virt-manager (the KVM/QEMU GUI) and the virt-install command use LIBVIRT_DEFAULT_URI="qemu:///system" while virsh commands use "qemu:///session". This caused some of the scripts to fail and VMs not to be visible. I added setting this parameter to qemu:///system in the scripts to avoid this (see the snippet after this list).
  • I've added some additional thin wrapper scripts (like start-vm.sh, call_create_vm.sh, call_delete_vm.sh) to start the machines, create multiple machines with a single command and remove them again. Just to make life a little bit easier.
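The LIBVIRT_DEFAULT_URI fix mentioned above boils down to something like this:

 # Make virsh and the scripts talk to the system libvirt instance instead of the per-user session
 export LIBVIRT_DEFAULT_URI="qemu:///system"
 virsh list --all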

Creating KVM machines

First install the required packages on the host. The commands below work on Ubuntu 18.04 and 20.04. Other operating systems require different commands/packages to install KVM/QEMU and some other related tools.

Install some packages
 sudo apt-get update  
 sudo apt-get -y install bridge-utils qemu-kvm qemu virt-manager net-tools openssh-server mlocate libvirt-clients libvirt-daemon libvirt-daemon-driver-storage-zfs python3-libvirt virtinst

Make sure the user you want to use to create the VMs is in the libvirt group, allowing the user to create and manage VMs.
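For example (log out and back in afterwards for the group change to take effect):

 sudo usermod -aG libvirt $USER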

Clone the scripts

 git clone https://github.com/MaartenSmeets/k8s-prov.git  
 cd k8s-prov  

Create a public and private key pair

 ssh-keygen -t rsa -C ansible@host -f id_rsa

Create an encrypted password for the user ansible

 python encrypt-pw.py

Update ubuntu.ks

Update the ubuntu.ks file with the encrypted password and the generated public key. The Kickstart file already contains an example public key (for which the private key is provided) and an encrypted password whose plaintext is Welcome01. As indicated, these are for example purposes only. Do not use them in production!

Start creating VMs!

Evaluate call_create_vm.sh for the number of VMs to create and the resources per VM. By default it creates 4 VMs, each with 2 cores and 4 GB of memory assigned. If you change the number of VMs, it is a good idea to also update the call_delete_vm.sh and start-vm.sh scripts to reflect the change.
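The wrapper itself is tiny. A hypothetical sketch of what it roughly does; the actual script in the repository may pass different arguments to create-vm:

 #!/bin/bash
 # Hypothetical sketch: create node1..node4 with 2 cores and 4 GB of memory each
 for n in $(seq 1 4); do
   ./create-vm.sh node$n 2 4096
 done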

Next execute it.
 call_create_vm.sh

You can monitor progress by opening virt-manager and the consoles of the individual machines. After the script completes, the machines will be shut down.
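You can also check the state of the VMs from the command line:

 virsh --connect qemu:///system list --all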

You can start them with
 start-vm.sh

Installing Kubernetes using Kubespray

Now your infrastructure is ready, but how do you get Kubernetes on it? Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks. Kubespray can be run from various Linux distributions and allows installing Kubernetes on various other distributions. Kubespray comes with Terraform scripts for various cloud environments, should you want to use those instead of providing your own KVM infrastructure. Kubespray has quite a lot of GitHub stars and contributors and has been around for quite a while. It is part of the CNCF (here). I've also seen large customers using it to deploy and maintain their Kubernetes environments.

In order to use Kubespray, you need a couple of things, such as some Python packages, Ansible and a way to access your infrastructure, but (of course by sheer coincidence) you already took care of that in the previous step.

Clone the repository into a subdirectory of the k8s-prov directory you created earlier (so the commands can access the keys and scripts)
 git clone https://github.com/kubernetes-sigs/kubespray.git
 cd kubespray
Install requirements
 pip install -r requirements.txt
Create an inventory
 rm -Rf inventory/mycluster/
 cp -rfp inventory/sample inventory/mycluster
Use a script to obtain the KVM host IP addresses. These will be used to generate a hosts.yml file indicating what should be installed where.
 declare -a IPS=($(for n in $(seq 1 4); do ../get-vm-ip.sh node$n; done))
 echo ${IPS[@]}
 CONFIG_FILE=inventory/mycluster/hosts.yml \
   python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Make life easy by letting it generate an admin.conf which can be used as ~/.kube/config 
 echo '  vars:' >>  inventory/mycluster/hosts.yml
 echo '    kubeconfig_localhost: true' >>  inventory/mycluster/hosts.yml
Execute Ansible to provision the machines using the previously generated key.
 export ANSIBLE_REMOTE_USER=ansible
 ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml --private-key=../id_rsa
Create your config file so kubectl can do its thing
 mkdir -p ~/.kube/
 cp -rip inventory/mycluster/artifacts/admin.conf ~/.kube/config
Install kubectl (for Kubernetes)
 sudo snap install kubectl --classic
The dashboard URL. First run kubectl proxy to be able to access it at localhost:8001
 http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login
Allow the kube-system:clusterrole-aggregation-controller to access the dashboard
 kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=kube-system:clusterrole-aggregation-controller
Get a token to access the dashboard
 kubectl -n kube-system describe secrets `kubectl -n kube-system get secrets | awk '/clusterrole-aggregation-controller/ {print $1}'` | awk '/token:/ {print $2}'
Login and enjoy!


Monday, July 27, 2020

Scanning container images for vulnerabilities using Anchore Engine

Applications nowadays are usually deployed inside containers. A container consists of libraries and tools which allow the application to run inside it. Since these can contain exploitable vulnerabilities, it is important to keep not only your application up to date but also the container it runs in. There are various tools available to scan container images for such vulnerabilities. Having little experience with them, but recognizing the importance of having such a tool, I decided to give Anchore Engine a try. Why? Because it appeared popular when looking for tools, it has an open source variant (which I can appreciate) and it was very easy to get started with. In addition, it provides several integration options which make using it easy, such as a Jenkins plugin and a Kubernetes Admission Controller.
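To give an idea of how it is used, a minimal example with the anchore-cli client against a running Anchore Engine; the URL and credentials below are placeholders:

 export ANCHORE_CLI_URL=http://localhost:8228/v1
 export ANCHORE_CLI_USER=admin
 export ANCHORE_CLI_PASS=foobar
 # Add an image, wait for the analysis to complete and list its vulnerabilities
 anchore-cli image add docker.io/library/debian:latest
 anchore-cli image wait docker.io/library/debian:latest
 anchore-cli image vuln docker.io/library/debian:latest all
 # Evaluate the image against the active policy bundle
 anchore-cli evaluate check docker.io/library/debian:latest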


Thursday, June 25, 2020

OBS Studio + Snap Camera: Putting yourself in your presentation live for free!

When giving online presentations, it helps for personal marketing if people can see you on the screen. Various tools provide features which help you achieve that, for example Microsoft Teams. Sometimes, though, you do not have that available, or you want to do more than the tool you are using provides. Using OBS Studio (free) with Snap Camera (free) or ChromaCam ($29.99 lifetime license), you can easily put yourself in your own presentations in such a way that it will work on almost any medium you would like to present on, without having to invest in a green screen. Want to know how? Read on!


Friday, May 22, 2020

OpenEBS: Create persistent storage in your Charmed Kubernetes cluster

I previously wrote a blog about using StorageOS as a persistent storage solution for Kubernetes here. StorageOS depends on etcd, and I was having difficulties getting etcd up again after a reboot. Since I wanted to get a storage solution working quickly and not focus too much on external dependencies, I decided to give OpenEBS a try. In this blog I'll describe a developer installation on Charmed Kubernetes (the environment described here). I used openebs-jiva-default as the storage class. This is unsuitable for production scenarios. OpenEBS also provides cStor, where most of the development effort goes. cStor, however, requires a mounted block device, which I have not tried yet in my environment.
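As an illustration, requesting a volume from the openebs-jiva-default storage class is a regular PersistentVolumeClaim; a minimal sketch (the claim name and size are placeholders):

 cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-jiva-pvc
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
 kubectl apply -f pvc.yaml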

Thursday, May 21, 2020

StorageOS: Create persistent storage in your Charmed Kubernetes cluster

If you want to experiment with a multi-node Kubernetes cluster locally as a developer, you need a distributed persistent storage solution to approximate real production scenarios. StorageOS is one of those solutions. In this blog I describe a developer installation of StorageOS. For production scenarios, check out the best practices mentioned on the StorageOS site.