
Thursday, 1 August 2024

Kubernetes easy installation guide

 Install Kubernetes guide:



After a lot of research, I've found an easy-to-follow tutorial for installing a Kubernetes cluster. Here are the links to the video and the accompanying written guide:

https://www.youtube.com/watch?v=I9goyp8mWfs

https://www.itsgeekhead.com/tuts/kubernetes-129-ubuntu-22-04-3/



UBUNTU SERVER LTS 24.04.0 - https://ubuntu.com/download/server

KUBERNETES 1.30.1         - https://kubernetes.io/releases/

CONTAINERD 1.7.18         - https://containerd.io/releases/

RUNC 1.2.0-rc.1           - https://github.com/opencontainers/runc/releases

CNI PLUGINS 1.5.0         - https://github.com/containernetworking/plugins/releases

CALICO CNI 3.28.0         - https://docs.tigera.io/calico/3.28/getting-started/kubernetes/quickstart


3 NODES, 2 vCPU, 8 GB RAM, 50GB Disk EACH

k8s-control   10.10.10.2

k8s-01         10.10.10.3

k8s-02         10.10.10.4




### ALL NODES:


sudo su


printf "\n10.10.10.2 k8s-control\n10.10.10.3 k8s-1\n10.10.10.4 k8s-1\n\n" >> /etc/hosts


printf "overlay\nbr_netfilter\n" >> /etc/modules-load.d/containerd.conf


modprobe overlay

modprobe br_netfilter


printf "net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\n" >> /etc/sysctl.d/99-kubernetes-cri.conf


sysctl --system
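
Before moving on, it's worth an optional sanity check that the modules loaded and the sysctl values actually took effect; something along these lines works:

lsmod | grep -E 'overlay|br_netfilter'

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables   # each should report 1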


wget https://github.com/containerd/containerd/releases/download/v1.7.18/containerd-1.7.18-linux-amd64.tar.gz -P /tmp/

tar Cxzvf /usr/local /tmp/containerd-1.7.18-linux-amd64.tar.gz

wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -P /etc/systemd/system/

systemctl daemon-reload

systemctl enable --now containerd
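
A quick optional check that containerd is actually up before installing the rest:

systemctl is-active containerd   # should print "active"

containerd --version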


wget https://github.com/opencontainers/runc/releases/download/v1.2.0-rc.1/runc.amd64 -P /tmp/

install -m 755 /tmp/runc.amd64 /usr/local/sbin/runc


wget https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz -P /tmp/

mkdir -p /opt/cni/bin

tar Cxzvf /opt/cni/bin /tmp/cni-plugins-linux-amd64-v1.5.0.tgz
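
Optionally, confirm runc and the CNI plugins landed where the later steps expect them:

runc --version

ls /opt/cni/bin   # should list bridge, host-local, loopback, portmap, etc.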


mkdir -p /etc/containerd

containerd config default | tee /etc/containerd/config.toml   # manually edit and change SystemdCgroup to true (not systemd_cgroup)

vi /etc/containerd/config.toml
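
If you'd rather not edit the file by hand, a sed one-liner can flip the flag; this is just a sketch and assumes the generated default config still contains SystemdCgroup = false:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

grep SystemdCgroup /etc/containerd/config.toml   # verify it now reads true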

systemctl restart containerd



swapoff -a   # just disable it in /etc/fstab instead (see below)
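
To make the change survive reboots, comment out the swap entry in /etc/fstab. A sketch using sed, assuming the default Ubuntu fstab layout (a backup is written to /etc/fstab.bak):

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab

cat /etc/fstab   # confirm the swap line is now commented out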


apt-get update

apt-get install -y apt-transport-https ca-certificates curl gpg


mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list



apt-get update


reboot


sudo su


  apt-get update

  apt-get install -y kubelet kubeadm kubectl

  apt-mark hold kubelet kubeadm kubectl
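
  A quick optional version check confirms the pinned packages installed as expected:

  kubeadm version

  kubectl version --client

  apt-mark showhold   # should list kubelet, kubeadm and kubectl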


# check swap config, ensure swap is 0

free -m
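
swapon --show is another quick check; it prints nothing when swap is fully off:

swapon --show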



### ONLY ON CONTROL NODE (control plane install):

                kubeadm init --pod-network-cidr 10.10.0.0/16 --kubernetes-version 1.30.1 --node-name k8s-control


                export KUBECONFIG=/etc/kubernetes/admin.conf
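
                If you'd rather run kubectl as a regular user later, the kubeadm init output also suggests copying the admin kubeconfig instead of exporting KUBECONFIG as root:

                mkdir -p $HOME/.kube
                sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
                sudo chown $(id -u):$(id -g) $HOME/.kube/config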


                # add Calico 3.28.0 CNI

                kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml

                wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml

                vi custom-resources.yaml   # edit the pod CIDR to match the --pod-network-cidr passed to kubeadm init (10.10.0.0/16 here)
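
                Assuming the manifest's default pod CIDR is still 192.168.0.0/16, a sed one-liner does the same edit:

                sed -i 's|192.168.0.0/16|10.10.0.0/16|' custom-resources.yaml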

                kubectl apply -f custom-resources.yaml
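
                Watch the Calico pods come up before joining workers (the operator-based install typically puts them in the calico-system namespace):

                kubectl get pods -n calico-system --watch

                kubectl get nodes   # the control node should go Ready once Calico is running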


                # get worker node commands to run to join additional nodes into cluster

                kubeadm token create --print-join-command
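
                The printed command looks roughly like the sketch below; the token and hash are placeholders, so always use the values from your own output:

                # kubeadm join 10.10.10.2:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>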

                ###



### ONLY ON WORKER NODES:

Run the kubeadm join command printed by the token create step above.
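
Back on the control node, you can confirm the workers registered and eventually go Ready once Calico is running on them:

kubectl get nodes -o wide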
