Kubernetes Cluster Installation

Execute the following script on all Kubernetes nodes (the control plane node and the worker nodes).

Log in to every node and run the following (I am using Ubuntu 20.04).

Create the script file (paste the script body below, then press Ctrl-D to save):

cat > k8s.sh

# Update the OS and reboot if a reboot is required
sudo apt update
sudo apt -y full-upgrade
[ -f /var/run/reboot-required ] && sudo reboot -f

# Disable swap; the kubelet requires swap to be off
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Set the hostname from the script's first argument
echo $1 > /etc/hostname
hostnamectl set-hostname $1
echo -e "overlay\nbr_netfilter" > /etc/modules-load.d/containerd.conf
modprobe overlay
modprobe br_netfilter
echo -e "net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/99-kubernetes-cri.conf
sysctl --system

# Install containerd as the container runtime and generate its default configuration
apt-get update && apt-get install -y containerd
mkdir -p /etc/containerd && containerd config default | tee /etc/containerd/config.toml
systemctl restart containerd.service
systemctl enable containerd
systemctl status containerd --no-pager

# Add the Kubernetes apt repository
apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
## Put these packages on hold so they are not upgraded or changed unintentionally
apt-mark hold kubelet kubeadm kubectl
systemctl enable kubelet



if [ "$1" == "control" ]
then

    ## Setup cluster on Control Plane

    kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.23.0
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    ## Install the Calico CNI plugin (Tigera operator and default custom resources)
    kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
    kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
fi


This script requires one argument (no validation is done in the script), which is used to set the hostname on each server, for example: control, worker1, worker2.
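If you want the script to fail fast when the argument is missing, a minimal guard like the one below could be added at the top of k8s.sh. This is only a sketch and is not part of the original script.

# Optional guard (not in the original script): exit if no hostname argument was given
if [ -z "$1" ]; then
    echo "Usage: bash k8s.sh <hostname>   (e.g. bash k8s.sh control)" >&2
    exit 1
fi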


NOTE: I have used Calico as the CNI network plugin.
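The Calico custom-resources.yaml manifest defaults to the 192.168.0.0/16 pod CIDR, which is why the same range is passed to kubeadm init above; if you change one, change the other. Once the control plane is up, you can optionally confirm that Calico is running (a convenience check, not part of the script):

kubectl get pods -n calico-system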


For the control plane node:

bash k8s.sh control

For the worker nodes (run on each worker, passing that worker's hostname):

bash k8s.sh worker1


Now log in to the control node and run the following commands to verify that the cluster was set up correctly.

kubectl get nodes

NAME      STATUS     ROLES                  AGE   VERSION
control   NotReady   control-plane,master   54s   v1.23.0


kubectl get pods --all-namespaces

NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
calico-system     calico-kube-controllers-69cfd64db4-5q6w8   1/1     Running   0          44s
calico-system     calico-node-s82tv                          1/1     Running   0          44s
calico-system     calico-typha-7785554894-6grh4              1/1     Running   0          44s
kube-system       coredns-64897985d-wzd4s                    1/1     Running   0          55s
kube-system       coredns-64897985d-xq59q                    1/1     Running   0          55s
kube-system       etcd-control                               1/1     Running   0          64s
kube-system       kube-apiserver-control                     1/1     Running   0          64s
kube-system       kube-controller-manager-control            1/1     Running   0          65s
kube-system       kube-proxy-gkdv7                           1/1     Running   0          55s
kube-system       kube-scheduler-control                     1/1     Running   0          65s
tigera-operator   tigera-operator-7d8c9d4f67-7dfnb           1/1     Running   0          55s


Run the following command on the control node to generate the cluster join command (it creates a token and prints the full join command):

kubeadm token create --print-join-command

kubeadm join 172.31.20.221:6443 --token 743g1x.3shw8jgfxb70l7xd --discovery-token-ca-cert-hash sha256:8d16a1aa8afbd56bb4eaa8dfbf25fcb5147207550e39268efa4a04f1a2483f77

Copy the output of the above command.

Log in to each worker node and execute the copied command (the cluster join command).

Once that is done, go back to the control node and check the node list to confirm that the worker nodes have joined the cluster.
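Optionally, instead of re-running kubectl get nodes, you can watch the nodes change state in real time (just a convenience, not part of the original steps):

kubectl get nodes --watch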

root@control:~# kubectl get nodes

NAME      STATUS     ROLES                  AGE   VERSION
control   Ready      control-plane,master   13m   v1.23.0
node1     NotReady   <none>                 17s   v1.23.0
node2     NotReady   <none>                 1s    v1.23.0

After a minute or so the new nodes move to Ready:

root@control:~# kubectl get nodes

NAME      STATUS   ROLES                  AGE   VERSION
control   Ready    control-plane,master   14m   v1.23.0
node1     Ready    <none>                 83s   v1.23.0
node2     Ready    <none>                 67s   v1.23.0


Your cluster is Ready!!
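As an optional final check (not part of the original walkthrough), you can run a throwaway deployment to confirm that pods get scheduled onto the worker nodes; the name nginx-test below is just an example:

kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl get pods -o wide
kubectl delete deployment nginx-test

The -o wide output shows which node each pod landed on.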
