If you want to experiment with a multi-node Kubernetes cluster locally as a developer, you need a distributed persistent storage solution to approximate real production scenarios. StorageOS is one such solution. In this blog I describe a developer installation of StorageOS. For production scenarios, check out the best practices mentioned on the StorageOS site.
The environment
I've used the following Kubernetes environment (Charmed Kubernetes deployed locally using Juju and MaaS on KVM), which is described here. This is a production-like environment consisting of several hosts. I had also tried getting StorageOS to work on the LXC/LXD install of Charmed Kubernetes, but that failed.
There are various storage solutions available for Kubernetes. For Charmed Kubernetes, CephFS is the one mentioned in the documentation. However, using those charms creates new hosts, and I was out of cores on my laptop, so I decided to go for a solution that was charm-independent. In addition, StorageOS has a nice GUI and is free for developers, with some limits.
StorageOS requirements
Kernel modules
StorageOS requires some kernel modules to be loaded on the hosts running the containers. I was running Ubuntu 18.04 and used Juju for provisioning, so I did the following to get my kubernetes-worker nodes ready.
Install the required kernel modules (note the single quotes, so that $(uname -r) is evaluated on each worker rather than on the machine you run juju from)
juju run 'sudo apt -y update && sudo apt -y install linux-modules-extra-$(uname -r) && sudo apt-get clean' --application kubernetes-worker
Allow containers to run privileged
juju config kubernetes-master allow-privileged=true
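You can read the setting back to verify it took effect; this should print true:
juju config kubernetes-master allow-privileged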
Make sure the kernel modules load on startup
juju run "echo target_core_mod >> /etc/modules" --application kubernetes-worker
juju run "echo tcm_loop >> /etc/modules" --application kubernetes-worker
juju run "echo target_core_file >> /etc/modules" --application kubernetes-worker
juju run "echo configfs >> /etc/modules" --application kubernetes-worker
juju run "echo target_core_user >> /etc/modules" --application kubernetes-worker
juju run "echo uio >> /etc/modules" --application kubernetes-worker
Reboot the worker nodes (one after the other, since we're running in a highly available environment)
juju run "reboot" --unit kubernetes-worker/0
juju run "reboot" --unit kubernetes-worker/1
juju run "reboot" --unit kubernetes-worker/1
Install etcd
StorageOS requires etcd to run. The StorageOS documentation recommends not using the Kubernetes etcd for this, so we install a separate one using the etcd-operator.
git clone https://github.com/coreos/etcd-operator.git
export ROLE_NAME=etcd-operator
export ROLE_BINDING_NAME=etcd-operator
export NAMESPACE=etcd
kubectl create namespace $NAMESPACE
./etcd-operator/example/rbac/create_role.sh
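The script uses the exported variables to create RBAC resources for the operator; you can confirm they exist (assuming it creates a ClusterRole and ClusterRoleBinding under those names):
kubectl get clusterrole $ROLE_NAME
kubectl get clusterrolebinding $ROLE_BINDING_NAME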
kubectl -n $NAMESPACE create -f - <<END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: etcd-operator
spec:
  selector:
    matchLabels:
      app: etcd-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd-operator
    spec:
      containers:
      - name: etcd-operator
        image: quay.io/coreos/etcd-operator:v0.9.4
        command:
        - etcd-operator
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
END
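Before continuing, check that the operator pod reaches the Running state:
kubectl -n etcd get pods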
Now the etcd operator is deployed. Next, we label the nodes on which the operator is allowed to create etcd pods. Determine the node names with kubectl get nodes; in the example below they are valued-calf and valid-thrush.
kubectl label nodes valued-calf etcd-cluster=storageos-etcd
kubectl label nodes valid-thrush etcd-cluster=storageos-etcd
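You can verify the labels with the -L flag, which shows the label value as an extra column:
kubectl get nodes -L etcd-cluster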
Create the etcd cluster
kubectl -n etcd create -f - <<END
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "storageos-etcd"
spec:
  size: 1
  version: "3.4.7"
  pod:
    etcdEnv:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "2147483648" # 2 GB
    - name: ETCD_AUTO_COMPACTION_RETENTION
      value: "100" # Keep 100 revisions
    - name: ETCD_AUTO_COMPACTION_MODE
      value: "revision" # Set the revision mode
    resources:
      requests:
        cpu: 200m
        memory: 300Mi
    securityContext:
      runAsNonRoot: true
      runAsUser: 9000
      fsGroup: 9000
    tolerations:
    - operator: "Exists"
END
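After a short while the operator should have created an etcd pod plus a client service; the client service is what we'll point StorageOS at later:
kubectl -n etcd get pods,svc
Expect to see a storageos-etcd pod and a storageos-etcd-client service.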
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
name: "storageos-etcd"
spec:
size: 1
version: "3.4.7"
pod:
etcdEnv:
- name: ETCD_QUOTA_BACKEND_BYTES
value: "2147483648" # 2 GB
- name: ETCD_AUTO_COMPACTION_RETENTION
value: "100" # Keep 100 revisions
- name: ETCD_AUTO_COMPACTION_MODE
value: "revision" # Set the revision mode
resources:
requests:
cpu: 200m
memory: 300Mi
securityContext:
runAsNonRoot: true
runAsUser: 9000
fsGroup: 9000
tolerations:
- operator: "Exists"
END
Install the StorageOS cluster operator
kubectl create -f https://github.com/storageos/cluster-operator/releases/download/v2.0.0/storageos-operator.yaml
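Check that the operator pod comes up in the storageos-operator namespace:
kubectl -n storageos-operator get pods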
Create a secret
cat << EOF > storageos_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "storageos-api"
  namespace: "storageos-operator"
  labels:
    app: "storageos"
type: "kubernetes.io/storageos"
data:
  # echo -n '<secret>' | base64
  apiUsername: c3RvcmFnZW9z
  apiPassword: c3RvcmFnZW9z
  # CSI Credentials
  csiProvisionUsername: c3RvcmFnZW9z
  csiProvisionPassword: c3RvcmFnZW9z
  csiControllerPublishUsername: c3RvcmFnZW9z
  csiControllerPublishPassword: c3RvcmFnZW9z
  csiNodePublishUsername: c3RvcmFnZW9z
  csiNodePublishPassword: c3RvcmFnZW9z
EOF
kubectl apply -f storageos_secret.yaml
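All the base64 values above decode to storageos (echo -n 'storageos' | base64 yields c3RvcmFnZW9z), so the default username and password are both storageos. Generate your own values the same way if you want different credentials.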
Create a storage cluster
kubectl create -f - <<END
apiVersion: storageos.com/v1
kind: StorageOSCluster
metadata:
  name: example-storageoscluster
  namespace: "storageos-operator"
spec:
  secretRefName: "storageos-api"
  secretRefNamespace: "storageos-operator"
  k8sDistro: "upstream" # Set the Kubernetes distribution for your cluster (upstream, eks, aks, gke, rancher, dockeree)
  # storageClassName: fast # The storage class created by the StorageOS operator is configurable
  csi:
    enable: true
    deploymentStrategy: "deployment"
    enableProvisionCreds: true
    enableControllerPublishCreds: true
    enableNodePublishCreds: true
  kvBackend:
    address: "storageos-etcd-client.etcd.svc.cluster.local:2379"
END
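It takes a little while for the cluster to come up. You can follow progress with (using the resource names from above):
kubectl -n storageos-operator get storageoscluster
kubectl -n kube-system get pods
The StorageOS pods are created in the kube-system namespace; wait until they are Running.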
In order to use StorageOS for a longer period, you need to register. You can do this by opening the dashboard and creating an account. First determine where the GUI is running: look at the storageos service in the kube-system namespace.
Next determine the endpoints of that service; the GUI is running on those. In my case on 10.20.81.46 and 10.20.81.47.
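You can look the service and its endpoints up from the command line as well:
kubectl -n kube-system get svc storageos
kubectl -n kube-system get endpoints storageos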
Open the GUI in your browser at http://endpointIP. You can log in with username and password storageos.
In the License screen (click License in the left menu) you can create an account. After you've confirmed your e-mail address, your developer license becomes active.
Testing it out
Create a persistent volume claim
kubectl create -f - <<END
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
spec:
  storageClassName: "fast" # StorageOS StorageClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
END
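Check that the claim gets bound by the StorageOS provisioner before installing Jenkins:
kubectl get pvc jenkins-pv-claim
The STATUS column should read Bound once StorageOS has provisioned the volume.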
Install Jenkins
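If you haven't added the stable chart repository yet, add it first (this was the stable repository URL at the time of writing):
helm repo add stable https://kubernetes-charts.storage.googleapis.com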
helm repo update
cat << EOF > jenkins-config.yaml
persistence:
  enabled: true
  size: 5Gi
  accessMode: ReadWriteOnce
  existingClaim: jenkins-pv-claim
  storageClass: "fast"
EOF
helm install my-jenkins-release -f jenkins-config.yaml stable/jenkins
Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default my-jenkins-release -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
Get the Jenkins URL to visit by running these commands in the same shell, then log in!
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=my-jenkins-release" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 8080:8080
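To confirm Jenkins really ended up on StorageOS, inspect the volume behind the claim; the same volume should also show up in the StorageOS GUI:
kubectl get pv
kubectl describe pvc jenkins-pv-claim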