Kubernetes Cluster on Oracle Cloud

Steps:

  1. Create an OCI account

  2. Create instances

  3. Get a free domain

  4. Clean system firewalls

  5. Hosts file

  6. Install kubeadm on all instances

  7. Set up the control plane with kubeadm

  8. Add the pod-network

  9. Connect all the nodes together

  10. Lens

  11. MetalLB

  12. Nginx Ingress

  13. Cert Manager with LetsEncrypt

  14. Test Page

  15. Conclusion

1. Create an OCI account

Go to https://www.oracle.com/cloud/ and create an account; the process is fairly straightforward.

2. Create instances

Creating instances is also straightforward. I used Ubuntu for all 4 nodes. The 2 free AMD instances have to use the VM.Standard.E2.1.Micro shape. For the ARM (Ampere A1) instances, I made 2, each with 2 OCPUs and 12 GB of RAM.

Reference: Oracle Cloud Infrastructure (OCI): Create a Compute VM (oracle-base.com)

Image: Ubuntu Full (Not Minimal)

Image: Adjustable Ampere ARM Instances

Note on assigning IP addresses: By default, a new instance gets an ephemeral public address. To switch between an ephemeral and a reserved public IP, edit the instance's VNIC, first select and save NO PUBLIC IP, and the public IP choices then become available. I used reserved addresses for the ARM machines and ephemeral ones for the AMD machines.

Image: The somewhat unintuitive IP address management

3. Get a free domain

While the cluster can be reached through its nodes' IP addresses, it is much better to assign a domain, so the underlying IP addresses can change later. Freenom provides free domains that we can use.

Reference: Freenom - A Name for Everyone (www.freenom.com)

Image: Simple DNS configuration

In my DNS settings, I made 2 A records with identical names, each pointing to one of my 2 reserved IP addresses.
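
Once the records propagate, they can be checked from any machine (using the placeholder domain from the examples below):

## should print both reserved public IPs
dig +short your.freenomdomain.tk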

4. Clean system firewalls

The Oracle instances come with pre-installed firewall rules that block some of the networking Kubernetes needs. To fix this, apply these commands on each machine.

## save existing rules
sudo iptables-save > ~/iptables-rules

## modify rules, remove DROP and REJECT lines
grep -v "DROP" ~/iptables-rules > tmpfile && mv tmpfile ~/iptables-rules-mod
grep -v "REJECT" ~/iptables-rules-mod > tmpfile && mv tmpfile ~/iptables-rules-mod

## apply the modifications
sudo iptables-restore < ~/iptables-rules-mod

## check
sudo iptables -L

## save the changes
sudo netfilter-persistent save
sudo systemctl restart iptables
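
As a quick sanity check, the listing should no longer contain any DROP or REJECT rules:

## should print nothing once the cleanup has been applied
sudo iptables -L | grep -E "DROP|REJECT"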

5. Hosts file

The hosts file on each machine should be modified so the nodes can reach each other by name.

## hosts file
sudo nano /etc/hosts

## add these lines (change the values to match your cluster)
private.ip.arm.machine1 your.freenomdomain.tk
private.ip.arm.machine2 your.freenomdomain.tk
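
To confirm an entry resolves to the intended private address (using the placeholder names above):

## should print the private IP taken from /etc/hosts
getent hosts your.freenomdomain.tk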

6. Install kubeadm on all instances

SSH into each machine and carefully follow all the steps in the official kubeadm guide. The procedure is identical for all the nodes.

Reference: Installing kubeadm (kubernetes.io)

For the container runtime, use Docker: scroll all the way down in the guide below and follow only the instructions under “Docker”. Make sure to also follow the steps for setting up the systemd cgroup driver.

Reference: Container runtimes (kubernetes.io)
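
For reference, configuring Docker to use the systemd cgroup driver follows the snippet in that guide; it looks roughly like this (adjust to your setup):

## configure the Docker daemon (systemd cgroup driver, per the k8s docs)
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF

## restart Docker to pick up the changes
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker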

7. Set up the control plane with kubeadm

A detailed official guide is available, again with many options to choose from.

Reference: Creating a cluster with kubeadm (kubernetes.io)

kubeadm init

The kubeadm init command sets the machine up as a control plane. A bit of research was needed to figure out the proper invocation, as running it without any arguments may not work, or may build a cluster with unwanted defaults.

For this cluster, kubeadm with the arguments below worked best.

## start k8s control plane
CERTKEY=$(kubeadm certs certificate-key)
echo $CERTKEY

## save your CERTKEY for future use
## replace the addresses with your own
sudo kubeadm init \
  --apiserver-cert-extra-sans=your.freenomdomain.tk,your.reserved.public.ip1,your.reserved.public.ip2 \
  --pod-network-cidr=10.32.0.0/12 \
  --control-plane-endpoint=your.freenomdomain.tk \
  --upload-certs \
  --certificate-key=$CERTKEY

When it completes, kubeadm prints instructions like the ones below.

Your Kubernetes control-plane has initialized successfully!
...
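
The same output includes the standard steps for pointing kubectl at the new cluster:

## set up kubectl for the current user (copied from the kubeadm init output)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config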

8. Add the pod-network

There are a lot of pod-network add-ons to choose from. Weave worked the best for this cluster.

Reference: Integrating Kubernetes via the Addon (www.weave.works)

The guide covers installing Weave Net in detail, but this single command should suffice.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
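
Once applied, the add-on runs as a DaemonSet in kube-system; a quick check (name=weave-net is the label the manifest uses):

## one weave-net pod per node should reach Running
kubectl get pods -n kube-system -l name=weave-net -o wide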

It is best to stick with one pod-network. Installing and uninstalling different pod-networks can break a cluster's networking: not every component is removed on uninstall, and the leftovers can conflict.

9. Connect all the nodes together

The instructions from the kubeadm init output can be used to join the other nodes. They should look like the ones below.

## Connect ARM machines as control plane
kubeadm join your.freenomdomain.tk:6443 --token xxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:yyyyyyyyyyyy \
    --control-plane --certificate-key zzzzzzzzzzz

## Connect AMD machines
kubeadm join your.freenomdomain.tk:6443 --token xxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:yyyyyyyyyyyy
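
If the original token has expired (they last 24 hours by default), a fresh worker join command can be generated on the control plane; for control-plane joins, the --control-plane and --certificate-key flags above are still needed:

## print a new join command with a fresh token
kubeadm token create --print-join-command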

I had to remake my cluster several times to get my desired configuration (e.g., DNS). I used these commands to tear the cluster down (they need to be run on each affected machine).

## remove cluster
sudo kubeadm reset
sudo rm -rf /etc/kubernetes
sudo rm -rf /etc/cni/net.d
sudo rm -rf /var/lib/kubelet
sudo rm -rf /var/lib/etcd
sudo rm -rf $HOME/.kube
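
After joining (or rebuilding), the cluster membership can be verified from a control plane node:

## all 4 nodes should eventually report Ready
kubectl get nodes -o wide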

10. Lens

Lens is a GUI for managing Kubernetes clusters and a great complement to kubectl. I installed it on my local machine.

Reference: Lens | The Kubernetes IDE (k8slens.dev)

Image: Lens UI

At this point, the Kubernetes cluster should be in good shape. The rest of the steps below are to enable hosting web apps.

11. MetalLB

Reference: MetalLB (metallb.universe.tf)

MetalLB is a free load-balancer implementation for bare-metal Kubernetes clusters, independent of the cloud provider's load balancers. To set up MetalLB, a custom config yaml file should be created.

# layer2metallb.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - arm.public.ip1/32 ## replace with your instance's IP
      - arm.public.ip2/32 ## replace with your instance's IP

MetalLB can then be installed with the following commands.

## metallb
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
kubectl apply -f layer2metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
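
A quick check that the components came up (one controller plus a speaker per node):

## all pods in metallb-system should reach Running
kubectl get pods -n metallb-system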

The video below provides more detail on MetalLB.

Video: MetalLB and Nginx Ingress

12. Nginx Ingress

NGINX Ingress is used together with MetalLB to enable public access to web apps on the cluster.

Helm can be used to install nginx-ingress. The guide below shows how to install Helm.

Reference: Quickstart Guide (helm.sh)

A customized values file is needed to make NGINX work with MetalLB. The default values should first be downloaded (note that this requires the ingress-nginx Helm repo, which is added in the commands further below).

helm show values ingress-nginx/ingress-nginx > ngingress-metal-custom.yaml

These specific lines in the values file should be modified:

hostNetwork: true ## change to false
#...
hostPort:
  enabled: false ## change to true
#...
kind: Deployment ## change to DaemonSet
#...
externalIPs: [] ## change to below
externalIPs:
  - arm.public.ip1 ## replace with your instance's IP (bare IP, not a CIDR)
  - arm.public.ip2 ## replace with your instance's IP (bare IP, not a CIDR)
#...
loadBalancerSourceRanges: [] ## change to below
loadBalancerSourceRanges:
  - arm.public.ip1/32 ## replace with your instance's IP
  - arm.public.ip2/32 ## replace with your instance's IP

NGINX Ingress can then be installed with the commands below.

kubectl create ns ingress-nginx
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install helm-ngingress ingress-nginx/ingress-nginx -n ingress-nginx --values ngingress-metal-custom.yaml
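
If MetalLB and the custom values are working, the controller's LoadBalancer Service should pick up an address from the MetalLB pool:

## EXTERNAL-IP should show one of the pool addresses
kubectl get svc -n ingress-nginx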

More detailed information can be found in the official NGINX ingress guide.

Reference: Welcome - NGINX Ingress Controller (kubernetes.github.io)

13. Cert Manager with LetsEncrypt

Cert-manager can automate the process of upgrading HTTP sites to HTTPS by issuing and renewing TLS certificates. The setup is fairly generic; these commands worked for this cluster.

## install w manifests
kubectl create ns cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
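
A quick check before moving on:

## the cert-manager, cainjector, and webhook pods should reach Running
kubectl get pods -n cert-manager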

It is recommended to also complete the “Verifying the Installation” portion of the official guide below.

Reference: Kubernetes (cert-manager.io)

After the verification, the official guide offers many options for what to do next. For this particular cluster, only the following yaml file is needed to create the LetsEncrypt issuer and complete the setup.

#prod-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your@email.address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector: {}
      http01:
        ingress:
          class: nginx

The file should be applied with kubectl:

kubectl create -f prod-issuer.yaml
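
The issuer should report Ready once it has registered with the ACME server:

## the READY column should show True
kubectl get clusterissuer letsencrypt-prod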

14. Test Page

A second domain should be created from freenom.com to host the web apps. This domain should have A records pointing to the public IP addresses of the control plane nodes.

A simple deployment can be created to test the cluster. In the test-tls-deploy.yml below, the host addresses should be replaced to match the newly created domain.

#test-tls-deploy.yml
apiVersion: v1
kind: Namespace
metadata:
  name: test ## must match the namespace used by the resources below
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: test-tls-deploy
  labels:
    app: test
spec:
  selector:
    matchLabels:
      app: test
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: nginx
          ports:
            - containerPort: 80
              name: test
          resources:
            requests:
              memory: "100Mi"
              cpu: "100m"
            limits:
              memory: "500Mi"
              cpu: "500m"
      affinity:
        podAntiAffinity: ## spread pods across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - test
              topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  name: test-tls-service
  labels:
    app: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
    - port: 80 ## match with ingress
      targetPort: 80 ## match with deployment
      protocol: TCP
      name: test
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: test
  name: test-tls-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - www.yourwebappdomain.tk
      secretName: test-tls-secret
  rules:
    - host: www.yourwebappdomain.tk
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: test-tls-service
                port:
                  number: 80

Use kubectl to apply the deployment:

kubectl apply -f test-tls-deploy.yml

If all went well, the simple nginx welcome page should be accessible at the newly created web address.
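
The progress of the deployment and of the certificate issuance can be followed with kubectl:

## everything in the test namespace
kubectl get pods,svc,ingress -n test

## READY turns True once LetsEncrypt issues the certificate
kubectl get certificate -n test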

Done!

15. Conclusion

At this point, the cluster should work much like the pre-packaged cloud provider clusters. The cluster is predominantly ARM-based, so containers should be built for the ARM64 architecture: base docker images should be ARM builds, and any installed binaries should be ARM as well. Deployment yaml files should also be configured to select the ARM machines. The 2 AMD machines in the cluster can run AMD64 containers, but they are fairly limited in compute and memory (1 GB of RAM each, compared to 12 GB on the ARM nodes), so only very light workloads should be scheduled on them.
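
For example, a deployment can be pinned to the ARM nodes with a nodeSelector on the standard architecture label (a minimal pod-template excerpt; the kubernetes.io/arch label is set automatically by the kubelet):

# pod template excerpt: schedule only on ARM64 nodes
spec:
  nodeSelector:
    kubernetes.io/arch: arm64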