
Creating a single control-plane cluster with kubeadm and the Calico Pod Network


Setup Kubernetes Cluster using Vagrant

Install Kubernetes on Ubuntu 22.04

The kubeadm tool helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices.

The kubeadm tool is good if you need:

  • A simple way for you to try out Kubernetes, possibly for the first time.
  • A way for existing users to automate setting up a cluster and test their application.
  • A building block in other ecosystem and/or installer tools with a larger scope.

Before you begin

To follow this guide, you need:

  • One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
  • 2 GiB or more of RAM per machine; any less leaves little room for your apps.
  • At least 2 CPUs on the machine that you use as a control-plane node.
  • Full network connectivity among all machines in the cluster. You can use either a public or a private network.
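
You can quickly verify these requirements on each machine. This is a minimal sketch; <other-node-ip> is a placeholder for the address of another machine in your cluster:

nproc                      # should print 2 or more on the control-plane node
free -h                    # total memory should be 2 GiB or more
ping -c 3 <other-node-ip>  # placeholder address; confirms connectivity between machines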

Check required ports

Control-plane node(s)

Protocol | Direction | Port Range | Purpose                  | Used By
---------|-----------|------------|--------------------------|---------------------
TCP      | Inbound   | 6443*      | Kubernetes API server    | All
TCP      | Inbound   | 2379-2380  | etcd server client API   | kube-apiserver, etcd
TCP      | Inbound   | 10250      | Kubelet API              | Self, Control plane
TCP      | Inbound   | 10251      | kube-scheduler           | Self
TCP      | Inbound   | 10252      | kube-controller-manager  | Self

Worker node(s)

Protocol | Direction | Port Range  | Purpose            | Used By
---------|-----------|-------------|--------------------|---------------------
TCP      | Inbound   | 10250       | Kubelet API        | Self, Control plane
TCP      | Inbound   | 30000-32767 | NodePort Services† | All

* Any port numbers marked with * are overridable, so you will need to ensure any custom ports you provide are also open.
† Default port range for NodePort Services.
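
If a firewall sits between your machines, you can verify that a required port is reachable with netcat. For example, a minimal check from a worker node against the control-plane node, where <control-plane-ip> is a placeholder for your master's address:

nc -zv <control-plane-ip> 6443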

 

Installing runtime

 

By default, Kubernetes uses the Container Runtime Interface (CRI) to interface with your chosen container runtime.

If you don't specify a runtime, kubeadm automatically tries to detect an installed container runtime by scanning a list of well-known Unix domain sockets.

Runtime    | Path to Unix domain socket
-----------|---------------------------------
Docker     | /var/run/docker.sock
containerd | /run/containerd/containerd.sock
CRI-O      | /var/run/crio/crio.sock


If both Docker and containerd are detected, Docker takes precedence. This is necessary because Docker 18.09 ships with containerd, so both are detectable even if you only installed Docker. If two or more other runtimes are detected, kubeadm exits with an error.
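
To see which of these sockets exist on a machine, and therefore which runtime kubeadm would detect, a small shell check like the following can help; the paths are the ones from the table above:

for s in /var/run/docker.sock /run/containerd/containerd.sock /var/run/crio/crio.sock; do
  [ -S "$s" ] && echo "found runtime socket: $s"
done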

Installing kubeadm, kubelet and kubectl

  • kubeadm: the command to bootstrap the cluster.

  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.

  • kubectl: the command-line utility to talk to your cluster.

Infrastructure

Let's create 3 virtual machines (VMs): 1 master node and 2 worker nodes. There must be network connectivity among these VMs.
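
Note that kubeadm's preflight checks require swap to be disabled on every node. Assuming a standard Ubuntu image, you can turn it off as follows; the sed line comments out swap entries in /etc/fstab so the change survives reboots:

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab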

Installation on Ubuntu (Both on Master and Worker Nodes)

sudo apt-get install -y apt-transport-https ca-certificates curl gpg


sudo mkdir -p /etc/apt/keyrings/


curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list


sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl
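
To confirm the held packages installed correctly, you can check the versions on each node:

kubeadm version
kubectl version --client
kubelet --version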


Create Master Server

On the master machine, run the below commands:

1.  sudo kubeadm init --apiserver-advertise-address=<Master-Server-IP> --pod-network-cidr=192.168.0.0/16

 

2.  mkdir -p $HOME/.kube

3.  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

4.  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Run the join command (printed at the end of the kubeadm init output) on the worker nodes to connect them to the Kubernetes cluster; an example of its shape is shown below.
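
For reference, the join command printed by kubeadm init has the following shape; the address, token, and hash here are placeholders, not real values:

sudo kubeadm join <Master-Server-IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>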

Install Calico (run it only on master node)


# Older Calico releases used a single manifest: kubectl create -f https://docs.projectcalico.org/v3.18/manifests/calico.yaml


kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml


kubectl get nodes

Wait and run the above command again; it may take a minute or so for all the nodes to reach the Ready state.
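
You can also watch the Calico pods start up; the nodes move to Ready once the CNI is running. With the operator-based install above, the Calico pods are created in the calico-system namespace:

kubectl get pods -n calico-system --watch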




Installation on RHEL/CentOS (Both on Master and Worker Nodes)

If you are using CentOS/RHEL:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
 
# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
 
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
 
sudo systemctl enable --now kubelet
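
At this point the kubelet restarts every few seconds in a crash loop: it is waiting for kubeadm init (or kubeadm join) to tell it what to do, so this is expected. You can still confirm the service is enabled:

systemctl status kubelet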

 


 

Next: Create Pod


COMMENTS

  1. Hi Raman,
    I did the setup as explained above. I could see the nodes created
    ==============
    root@myCPMaster:/home/ubuntu# kubectl get nodes
    NAME         STATUS   ROLES                  AGE   VERSION
    mycpmaster   Ready    control-plane,master   14m   v1.20.0
    worker01     Ready    <none>                 10m   v1.20.0
    worker02     Ready    <none>                 12m   v1.20.0
    root@myCPMaster:/home/ubuntu#

    The next day I stopped and restarted the AWS instances (master and worker nodes). Now when I go to the master and run kubectl get pods:
    root@myCPMaster:/home/ubuntu# kubectl get pods
    No resources found in default namespace.----------------> it is showing this
    root@myCPMaster:/home/ubuntu#

    I thought I'd try to join the worker nodes again.
    So when I ran the join command on the worker nodes it gave me:
    ==========
    root@worker01:/home/ubuntu# kubeadm join 172.31.17.197:6443 --token 3j7va6.h325yawyg1mrcfva --discovery-token-ca-cert-hash sha256:74fcd00d34a89f340a9a8fb5d6de2562e7d33a83d6c92305f07e956aaa3b149a
    [preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
    [ERROR Port-10250]: Port 10250 is in use
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher
    root@worker01:/home/ubuntu#
    ===============================
    Can you help?

    Replies
    1. After removing /etc/kubernetes/kubelet.conf and /etc/kubernetes/pki/ca.crt, you can restart the kubelet and this issue will be resolved.
      rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt
      sudo systemctl restart kubelet
      You also need to enable the docker service.
      sudo systemctl enable docker

    2. Good, thanks for posting this.

  2. Hi Raman,

    I am getting the below issue when I go into the pod and execute the apt update command.


    root@pod2:/# apt update
    Err:1 http://security.debian.org/debian-security buster/updates InRelease
    Temporary failure resolving 'security.debian.org'
    Err:2 http://deb.debian.org/debian buster InRelease
    Temporary failure resolving 'deb.debian.org'
    Err:3 http://deb.debian.org/debian buster-updates InRelease
    Temporary failure resolving 'deb.debian.org'
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    All packages are up to date.
    W: Failed to fetch http://deb.debian.org/debian/dists/buster/InRelease Temporary failure resolving 'deb.debian.org'
    W: Failed to fetch http://security.debian.org/debian-security/dists/buster/updates/InRelease Temporary failure resolving 'security.debian.org'
    W: Failed to fetch http://deb.debian.org/debian/dists/buster-updates/InRelease Temporary failure resolving 'deb.debian.org'
    W: Some index files failed to download. They have been ignored, or old ones used instead.
    root@pod2:/#

    Replies
    1. This could be related to a network issue. Please create the pod again, and try disconnecting if you are connected to a VPN.

  3. root@ip-172-31-34-32:/home/ubuntu# systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
    Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
    └─10-kubeadm.conf
    Active: activating (auto-restart) (Result: exit-code) since Wed 2021-08-11 10:25:14 UTC; 4s ago
    Docs: https://kubernetes.io/docs/home/
    Process: 5421 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
    Main PID: 5421 (code=exited, status=1/FAILURE)

    Replies
    1. Please run the kubeadm reset command first, then try the kubeadm init command again.

  4. Raman,

    I'm getting this error continuously when I run the "vagrant up" command to create the master and worker nodes.

    schannel: next InitializeSecurityContext failed (0x80090326)

    Tried reinstalling Vagrant, VirtualBox and Git. Nothing is helping. Tried almost all the solutions from Google. Still no way around it.

    Could you please help? I need to have the nodes to try and prepare for the Internal Certification exam.

  5. Hi Raman, I get this error message:
    * minikube v1.30.1 on Ubuntu 22.04
    * Automatically selected the docker driver. Other choices: virtualbox, none, ssh
    * The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
    * If you are running minikube within a VM, consider using --driver=none:
    * https://minikube.sigs.k8s.io/docs/reference/drivers/none/

    X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.

  6. What is the command you have used? Is it minikube start...?

