Kubernetes Without Scale: Setting up a Personal Cluster, Part 2
In Part 1, we discussed some good reasons to run your own Kubernetes cluster, even for software that doesn’t need high scalability or reliability. We showed that installing and managing third-party applications such as Ghost becomes as simple as helm install, and that strong support for infrastructure as code makes Kubernetes a great way to maintain your “production” environment.
Typically, Kubernetes runs across several different machines, which allows applications to scale up and down with a high degree of resiliency. But here, we’re just looking to self-host some third-party software and personal projects. So we’ll set up Kubernetes on a single t2.small EC2 instance running Ubuntu, which, with 2GB of memory and a single CPU, runs about $15/month.
Unfortunately, there’s no helm install for Kubernetes itself. To get up and running, we’ll use KIND, a project that runs Kubernetes inside of Docker. Once that’s done, we’ll install a few crucial add-ons that will help us with DNS and provisioning SSL certificates. Finally, to show off our setup, we’ll install HackMD, an open source collaborative markdown editor, which you can use as a replacement for Google Docs.
Set Up a Machine
We’re going to set up our machine using AWS EC2, but you should be able to adapt these instructions to your cloud provider of choice.
On the EC2 page, click “Launch Instance.” When prompted for an operating system, choose Ubuntu 18.04. Then, when prompted to choose an instance type, choose t2.small.
This will run at about $15/month. We wouldn’t recommend choosing a smaller instance (running Kubernetes comes with a good amount of resource overhead), but if you plan to run any resource-intensive applications, you may want to choose a larger instance type.
Configure the rest of your machine as you see fit. We recommend increasing the default amount of storage if you plan on running data-intensive applications. Be sure to save your SSH private key somewhere, and be sure to open up port 22 so you can log in (ports 80 and 443 will also need to be open so your cluster can serve web traffic later on).
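If you’d rather script this step, here’s a rough sketch using the AWS CLI; the security group ID below is a placeholder for the group attached to your instance:

# Placeholder group ID; substitute the security group attached to your instance.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0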
Next, while your instance is starting up, we’ll associate an elastic IP with it. This will give your EC2 instance a permanent IP address, which we’ll use to create DNS records.
From the EC2 page, click “Elastic IPs,” then “Allocate a new address,” and choose your new EC2 instance.
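If you prefer the command line, a rough equivalent with the AWS CLI looks like this (the instance ID and allocation ID are placeholders; allocate-address prints the real allocation ID to use):

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0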
Finally, you can head to Route 53 to register a new domain name (or manage an existing one). You should create a new A Record pointing *.example.com to the elastic IP address you created above. You can also create a similar A Record for example.com if you want the apex domain to point to your cluster as well.
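In zone-file notation, the records look something like this, with 203.0.113.10 standing in for your Elastic IP:

*.example.com.   300   IN   A   203.0.113.10
example.com.     300   IN   A   203.0.113.10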
Once that’s done, we’re ready to build the cluster! Connect to your machine via SSH and follow the instructions below to start serving traffic.
Install Dependencies
First, let’s install Docker. Docker lets us run applications in isolated containers on our machine. Kubernetes itself will run inside a Docker container, as will all the software we install in the cluster.
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt install -y docker-ce
sudo usermod -aG docker $USER
newgrp docker
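To confirm Docker works without sudo, you can run the standard hello-world image:

docker run --rm hello-world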
Next, we’ll install KIND, a project that runs Kubernetes clusters inside Docker on a single machine.
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.5.1/kind-$(uname)-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin
Then let’s install kubectl, a CLI for interacting with Kubernetes clusters. This will be our main way of talking to the cluster we create with KIND.
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Next up is Helm, a package manager for Kubernetes. Helm will give us one-command installs for third-party software such as WordPress, Ghost, MySQL and GitLab.
sudo snap install helm --classic
Finally, to help us manage our Helm packages, we’ll use Reckoner, which lets us declare multiple charts in a single YAML file.
sudo apt-get install -y python3 python3-pip
export PATH=$PATH:~/.local/bin/
pip3 install reckoner
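This is a good moment to sanity-check the tools. The exact flags can vary slightly between releases, but something along these lines should print a version for each:

docker --version
kind version
kubectl version --client
helm version --client
reckoner --version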
That’s all the prerequisite software we’ll need. To recap:
- Docker runs applications in isolated environments known as containers.
- KIND runs Kubernetes—a container orchestration platform—inside of Docker.
- kubectl lets us interact with our Kubernetes cluster.
- Helm helps us install software onto our cluster.
- Reckoner helps us manage our Helm charts.
Building the Cluster
Now we’ll use KIND to build our Kubernetes cluster. To do this, we’ll create a YAML file, cluster.yaml, which will store the cluster configuration.
In the Kubernetes world (and in all modern DevOps environments), we have a strong bias toward storing all our infrastructure in version control. So now might be a good time to start a git repository where you can keep your code.
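A minimal sketch, with a placeholder repository name:

mkdir cluster-config && cd cluster-config
git init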
Here’s the configuration for our KIND cluster:
cluster.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraMounts:
  - containerPath: /opt/local-path-provisioner
    hostPath: /home/ubuntu/kind-disk
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 443
    listenAddress: "0.0.0.0"
There are a few things to note here:
- We’re creating two nodes: one that runs the control plane (the Kubernetes API) and one worker, where our apps will run.
- We’re mounting the directory /home/ubuntu/kind-disk as a place for persistent storage. Any databases that run in our cluster will put their data here; you can change this path to whatever you like.
- We’re exposing ports 80 and 443, which will allow us to connect to our cluster from the outside world.
To create the cluster, run:
kind create cluster --config cluster.yaml
export KUBECONFIG="$(kind get kubeconfig-path --name='kind')"
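Note that the KUBECONFIG variable only lasts for your current shell session. If you’d like it set on every login, you can append the export to your shell profile:

echo 'export KUBECONFIG="$(kind get kubeconfig-path --name=kind)"' >> ~/.bashrc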
Now you should be able to run:
kubectl get nodes
And see two nodes:
NAME                 STATUS   ROLES    AGE    VERSION
kind-control-plane   Ready    master   109s   v1.15.3
kind-worker          Ready    <none>   73s    v1.15.3
You may have to wait a minute or two for the nodes to show up as Ready.
Setting up the Cluster
Before we can start installing applications, we need to add some core tooling to the cluster itself.
First, let’s set up Tiller, the part of Helm that runs inside the Kubernetes cluster. We’ll start by creating a namespace and service account for Tiller, and binding it to the cluster-admin role so it can manage the entire cluster.
tiller.rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tiller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: tiller
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
Then we’ll initialize Tiller on the cluster:
kubectl apply -f tiller.rbac.yaml
export TILLER_NAMESPACE=tiller
helm init --service-account tiller
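After a minute or so, you should see a tiller-deploy pod running in the tiller namespace:

kubectl get pods -n tiller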
We also need to set up local storage so that any saved data ends up in /home/ubuntu/kind-disk:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false", "storageclass.beta.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true", "storageclass.beta.kubernetes.io/is-default-class":"true"}}}'
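To confirm the change, list your storage classes; local-path should now be marked (default):

kubectl get storageclass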
Finally, we’ll set up cert-manager and nginx-ingress. cert-manager will help us provision SSL certificates from Let’s Encrypt, and nginx-ingress will help us route traffic from particular domain names to the correct application.
First, we’ll create issuer.yaml, which will tell cert-manager to use Let’s Encrypt to provision certificates (you could also use another issuer). Be sure to replace user@example.com with your own email address.
issuer.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: user@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Note that you won’t be able to kubectl apply this manifest yet; first, we’ll need to install cert-manager.
To install the Helm charts for nginx-ingress and cert-manager, we’ll create core.course.yaml. Down the line, if there’s any other infrastructure we want to add to our core stack, we can add it here.
core.course.yaml
repositories:
  jetstack:
    url: https://charts.jetstack.io
namespace: default
charts:
  cert-manager:
    hooks:
      pre_install:
      - kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
      post_install:
      - sleep 30 && kubectl apply -f issuer.yaml
    namespace: cert-manager
    repository: jetstack
  nginx-ingress:
    namespace: nginx-ingress
    values:
      controller:
        hostNetwork: true
        service:
          type: LoadBalancer
Now run reckoner plot core.course.yaml to install both cert-manager and nginx-ingress on your cluster. If you look at the post_install hook for cert-manager, you’ll notice this also adds issuer.yaml to your cluster.
To check that everything is working as expected, run:
$ kubectl get pods -n nginx-ingress
NAME                                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-664b77978-gbd59         1/1     Running   0          14s
nginx-ingress-default-backend-576b86996d-fjb4k   1/1     Running   0          14s

$ kubectl get pods -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6b78b7c997-mblt4              1/1     Running   0          18s
cert-manager-cainjector-54c4796c5d-psdgt   1/1     Running   0          18s
cert-manager-webhook-77ccf5c8b4-8tkwt      1/1     Running   0          17s
You may have to wait a minute to see every pod in the Running state.
Install an Application
To test out our new cluster, we’ll install HackMD, an open source collaborative markdown editor. It’s a good, lightweight replacement for Google Docs, especially if you want to host all your own data.
First, we’ll create an Ingress for the app, which will tell nginx-ingress to route traffic from hackmd.example.com to the HackMD application (be sure to replace both instances of example.com with your own domain):
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: "true"
  name: hackmd
  namespace: hackmd
spec:
  rules:
  - host: hackmd.example.com
    http:
      paths:
      - backend:
          serviceName: hackmd
          servicePort: 3000
        path: /
  tls:
  - secretName: tls-prod-cert
    hosts:
    - hackmd.example.com
Then we’ll create apps.course.yaml. Later on, if we want to add other applications to our cluster, we can add them here.
apps.course.yaml
namespace: default
charts:
  hackmd:
    namespace: hackmd
    hooks:
      post_install:
      - kubectl apply -f ./ingress.yaml
Again, note the post_install hook, which takes care of adding ingress.yaml for us.
Now we just need to run reckoner plot apps.course.yaml, and we’re off and running! You should be able to see HackMD running at https://hackmd.example.com. You’ll also see its data being stored inside the directory at /home/ubuntu/kind-disk.
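If the page doesn’t load right away, you can watch the pieces come up: the HackMD pod, and the certificate that cert-manager issues for our Ingress (issuance can take a minute or two while the Let’s Encrypt challenge completes):

kubectl get pods -n hackmd
kubectl get certificate -n hackmd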
In the future, if you want to install another app, it should be as easy as adding a few lines to apps.course.yaml, creating another ingress.yaml, and running reckoner plot apps.course.yaml again. And if you’ve been saving all these files to a Git repository, you can easily migrate your cluster to a new machine or rebuild it from scratch.
Hopefully we’ve convinced you that Kubernetes is a great way to install and manage third-party software. But there are still a lot of moving parts here, so if you run into any trouble, let us know in the comments!