1. Install Docker on all nodes
2. Install kubeadm on all nodes : bootstrap
3. Initialize on the Control Plane & verify the Pod network
4. Join the Nodes
👌 Master : CentOS7, 2 CPU, 3GB RAM, 20GB HDD
👌 Node 1, 2 : CentOS7, 2 CPU, 2GB RAM, 20GB HDD
👌 Reliable network connectivity between all nodes
👌 A unique hostname, MAC address, and UUID for each node
👌 Swap disabled (required! otherwise the kubelet will throw errors)
👌 Firewall ports opened
kubernetes.io/ko/docs/tasks/tools/install-kubectl/
Install Docker using the automated install script.
# curl -s https://get.docker.com | sudo sh
# sudo systemctl enable --now docker
Create a dedicated account that will install Kubernetes and run kubectl commands, give it sudo privileges, and let it manage Docker as well.
(root)# useradd kube
(root)# passwd --stdin kube
(root)# echo "kube ALL=(ALL) ALL" >> /etc/sudoers.d/kube
(root)# usermod -aG docker kube
(root)# su - kube
(kube)$ sudo systemctl enable --now docker
Upgrade the already-installed packages to their latest versions.
$ sudo yum -y update
$ sudo yum -y upgrade
Make sure the servers keep their clocks synchronized via NTP.
$ sudo timedatectl set-timezone Asia/Seoul
$ sudo yum -y install ntp
$ sudo systemctl enable --now ntpd
Set SELinux to permissive mode.
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Disable memory swap on all nodes to keep container performance consistent.
$ sudo swapoff -a
$ sudo vi /etc/fstab    # comment out the swap line (disables it permanently)
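If you prefer not to edit the file by hand, a sed one-liner along these lines can comment out the swap entry (a sketch — check your fstab layout first):
$ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab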
Make iptables see bridged traffic.
1. Check that the br_netfilter module is loaded.
2. Set net.bridge.bridge-nf-call-iptables to 1 in the sysctl config (see the commands below).
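The two steps above can be done roughly as follows (based on the kubeadm install guide; the file names under /etc/modules-load.d/ and /etc/sysctl.d/ are just a convention):
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
$ sudo modprobe br_netfilter

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system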
With the initial setup done up to this point, briefly shut the server down and use the full clone feature (VMware Workstation is being used here) to create two additional servers, node1 and node2.
Change only the RAM to 2GB, and give each server a hostname and network configuration different from the original. Update /etc/hosts on every server according to each host's name and IP address, then run ping tests between the servers.
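For reference, a minimal /etc/hosts sketch might look like the following — only 192.168.192.200 (control) is confirmed by the kubeadm output later in this post; the node1/node2 addresses are placeholders to replace with your own:
192.168.192.200  control
192.168.192.201  node1    # placeholder IP
192.168.192.202  node2    # placeholder IP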
Open the ports required by the Control Plane and by each Node.
@ Master Node
$ sudo firewall-cmd --add-port=6443/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=2379-2380/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=10250/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=10251/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=10252/tcp --zone=public --permanent
$ sudo firewall-cmd --reload
@ Worker Node
10250/tcp : kubelet API
30000-32767/tcp : NodePort
179/tcp : Calico
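Following the same pattern as the Master Node commands above, the worker-side rules for the ports just listed would be roughly:
$ sudo firewall-cmd --add-port=10250/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=30000-32767/tcp --zone=public --permanent
$ sudo firewall-cmd --add-port=179/tcp --zone=public --permanent
$ sudo firewall-cmd --reload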
Add the package repository, then install the packages.
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
$ sudo systemctl enable --now kubelet
💡 failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
The kubelet service restarts every few seconds while it waits for kubeadm init or join, so until kubeadm is actually run it only shows as loaded and keeps logging the error above.
In other words, the kubelet configuration files can only be populated once kubeadm init has been executed, so this error is expected at this stage.
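This behaviour can be observed with the usual systemd tooling, for example:
$ systemctl status kubelet     # stays in an activating (auto-restart) loop until kubeadm runs
$ journalctl -u kubelet -f     # the config.yaml "no such file or directory" error keeps repeating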
kubeadm must be run with root privileges.
$ sudo kubeadm init --pod-network-cidr=10.224.0.0/16
--pod-network-cidr : specifies the Pod network, i.e. the subnet used by containers (the default subnet assigned when Docker is installed is 172.17.0.0/16)
[init] Using Kubernetes version: v1.20.4
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [control kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.192.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [control localhost] and IPs [192.168.192.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [control localhost] and IPs [192.168.192.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.004842 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node control as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node control as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: pxh9uj.gr3wlmzqswixc04m
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
💡 Check the follow-up instructions
To run kubectl commands as a regular user, the admin conf file must be copied into that user's home directory and its ownership adjusted.
If this step is skipped, kubectl requests will be refused.
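The follow-up commands kubeadm prints at the end of init are along these lines (run them as the regular account, here kube):
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config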
A Pod network plugin must be added to the cluster; there are several open-source options.
This walkthrough installs Calico.
$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
$ sudo yum -y install wget
$ wget https://docs.projectcalico.org/manifests/custom-resources.yaml
$ vi custom-resources.yaml
Edit this yaml so that its CIDR matches the pod-network-cidr used above.
# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.18/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16    # <-- change this line (10.224.0.0/16 in this setup)
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
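After editing the CIDR, the manifest still has to be applied; per the Calico quickstart this looks roughly like:
$ kubectl create -f custom-resources.yaml
$ watch kubectl get pods -n calico-system    # wait until all Calico pods are Running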
Remove the taint on the Master node so that Pods can also be scheduled onto it (recommended when running a small cluster).
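With this Kubernetes version the control-plane taint shown in the init output can be removed with (as in the Calico quickstart):
$ kubectl taint nodes --all node-role.kubernetes.io/master-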
[kube@node1 ~]$ sudo kubeadm join 192.168.192.200:6443 --token pxh9uj.gr3wlmzqswixc04m --discovery-token-ca-cert-hash sha256:50c4ec1a3297c25a34c7d19765fc57c199e3b6406b5e6fde0a331c9e24c43a7f
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
After a short while, running get nodes will show the nodes in the READY state.
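For example, on the control plane:
[kube@control ~]$ kubectl get nodes    # node1 and node2 should eventually report STATUS "Ready"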
References:
- kubernetes.io — kubeadm init reference, kubectl cheat sheet
- docs.projectcalico.org/getting-started/kubernetes/quickstart — Quickstart for Calico on Kubernetes
Add the "--cgroup-driver=cgroupfs" option to the kubelet service config file:
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs"
These days, however, the --cgroup-driver flag is discouraged; rather than setting it in the environment or config files (/var/lib/kubelet/kubeadm-flags.env, or /etc/default/kubelet (/etc/sysconfig/kubelet on rpm-based systems)), remove it from there and use a KubeletConfiguration YAML file instead.
Why is this cgroup setting needed at all?
When Docker is used, kubeadm automatically detects the cgroup driver for the kubelet and sets it in /var/lib/kubelet/config.yaml at runtime.
If you use a different CRI, the cgroupDriver value has to be passed to kubeadm init.
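A minimal sketch of doing this with a kubeadm config file (the file name kubeadm-config.yaml and the systemd value are illustrative; the API versions correspond to the v1.20 line used in this walkthrough):

# kubeadm-config.yaml (illustrative)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.224.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

$ sudo kubeadm init --config kubeadm-config.yaml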