Problem
Following the official Kubernetes installation instructions for containerd, kubeadm init will fail with "unknown service runtime.v1alpha2.RuntimeService".
# Commands from https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io
# Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2020-09-24T11:49:16Z" level=fatal msg="getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
Solution:
rm /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
Versions:
Ubuntu 20.04 (focal)
containerd.io 1.3.7
kubectl 1.19.2
kubeadm 1.19.2
kubelet 1.19.2
Apparently it is my fault this time. My Ansible playbook did not override the config.toml file as I expected. Sorry for taking up your time; the default installation instructions work great.
In the config.toml file installed by the containerd.io package there is the line disabled_plugins = ["cri"], which I am guessing causes the issue. That may be a bad default setting for the containerd.io package to ship, but that is for another issue/bug.
Closing.
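For anyone who wants to keep the rest of the packaged config, the offending line can also be edited in place rather than deleting the whole file. A minimal sketch (the sed pattern assumes the exact disabled_plugins = ["cri"] line shipped by the containerd.io package; demonstrated on a temp copy, since on a real node the file is /etc/containerd/config.toml and containerd must be restarted afterwards):

```shell
# Sketch: re-enable the CRI plugin by editing the packaged config in place.
# On a real node: edit /etc/containerd/config.toml, then `systemctl restart containerd`.
cfg=$(mktemp)
printf 'disabled_plugins = ["cri"]\n' > "$cfg"
sed -i 's/^disabled_plugins = \["cri"\]/disabled_plugins = []/' "$cfg"
cat "$cfg"   # now: disabled_plugins = []
```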
Docker (by default) uses that config for the containerd they install via the containerd.io packages. Grr. This has been causing similar issues for k8s users for years. :-)
I followed the official instructions here https://kubernetes.io/docs/setup/production-environment/container-runtimes/ and I was getting a similar error:
root@green-1:~# kubeadm init --config=config.yaml
W1125 12:58:32.733485 26426 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2020-11-25T12:58:32Z" level=fatal msg="getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
I checked /etc/containerd/config.toml and saw disabled_plugins = [].
Note: the only thing I changed in config.toml was to set the systemd cgroup option to true - it was different from the way the docs mention it (maybe this was the problem?)
[plugins."io.containerd.grpc.v1.cri"]
systemd_cgroup = false --> to true
from docs
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Deleting this config.toml as given in the first post and restarting the containerd service solved it, and kubeadm could proceed.
This is definitely a bug. For the worker node I followed the docs exactly and I was getting this error:
Nov 25 14:01:37 green-3 kubelet[24158]: E1125 14:01:37.781735 24158 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:Ne
Nov 25 14:01:40 green-3 kubelet[24158]: W1125 14:01:40.121655 24158 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstab
Nov 25 14:01:40 green-3 kubelet[24158]: I1125 14:01:40.161343 24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 858e1a9
Nov 25 14:01:41 green-3 kubelet[24158]: I1125 14:01:41.162982 24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 858e1a9
Nov 25 14:01:41 green-3 kubelet[24158]: I1125 14:01:41.163235 24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a46854a
Nov 25 14:01:41 green-3 kubelet[24158]: E1125 14:01:41.163527 24158 pod_workers.go:191] Error syncing pod 4dabce76-ceb5-43fb-bef1-1992a3aa124d ("kube-
Nov 25 14:01:41 green-3 kubelet[24158]: W1125 14:01:41.624869 24158 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstab
Nov 25 14:01:42 green-3 kubelet[24158]: I1125 14:01:42.164906 24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a46854a
Nov 25 14:01:42 green-3 kubelet[24158]: E1125 14:01:42.165198 24158 pod_workers.go:191] Error syncing pod 4dabce76-ceb5-43fb-bef1-1992a3aa124d ("kube-
Nov 25 14:01:43 green-3 kubelet[24158]: W1125 14:01:43.127317 24158 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstab
Nov 25 14:01:55 green-3 kubelet[24158]: I1125 14:01:55.011131 24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a46854a
Nov 25 14:02:17 green-3 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Nov 25 14:02:17 green-3 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Nov 25 14:02:17 green-3 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 25 14:02:17 green-3 kubelet[25982]: I1125 14:02:17.474559 25982 server.go:411] Version: v1.19.4
until I deleted the config.toml and restarted containerd and kubelet, after which the worker finally joined.
sealo init kubernetes 1.20+ error when use yum docker-ce/containerd and is running(default) or Use systemd Cgroup driver
labring/sealos#582
Why not use a real containerd default config with yum to aviod ERROR: runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
#4956
This comment saved my day 👍 . The default docker configuration removes CRI.
Thanks! You helped me with this solution!
Heads up, this just happened to me on a clean install of Kubernetes v1.24.0 on Ubuntu 20.04.4 LTS. The original fix helped me as well.
Exception:
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2022-05-16T23:41:59Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Corrected:
user@k8s-master:~/$ sudo rm /etc/containerd/config.toml
user@k8s-master:~/$ sudo systemctl restart containerd
user@k8s-master:~/$ sudo kubeadm init
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
W0524 16:59:01.427276 22679 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2022-05-24T16:59:03+08:00" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
I also encountered this problem, but the steps above did not solve it for me. How should I fix it?
There is no dockershim in Kubernetes v1.24; you'll need to install and configure containerd or CRI-O.
I have the same issue with kubeadm v1.24.0 on CentOS 7.9.
Unfortunately the config in the containerd.io package has, since forever, been a bad configuration for the Kubernetes tools: the package installs a version of the containerd config that is only good for Docker. This config needs to be replaced, at least with the default containerd config; you can modify it from there if you like.
"containerd config default > /etc/containerd/config.toml" will overwrite Docker's version of the config and replace it with containerd's version, which also works just fine for Docker. Then restart containerd.
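That replacement can be sketched as a small script. Hedged: the real fix is the two root-only commands shown in the comments; the runnable part below only demonstrates the idea against a temp file, so it is safe even on a machine where containerd is not installed (the stand-in line is an assumption for illustration):

```shell
# Real commands (as root) to swap the Docker-oriented packaged config for
# containerd's own defaults:
#   containerd config default > /etc/containerd/config.toml
#   systemctl restart containerd
# Demonstration against a temp file:
cfg=$(mktemp)
if command -v containerd >/dev/null 2>&1; then
  containerd config default > "$cfg"          # real default config
else
  # containerd absent here: stand in with the one line that matters for this issue
  printf 'disabled_plugins = []\n' > "$cfg"
fi
grep disabled_plugins "$cfg"   # the default config leaves the CRI plugin enabled
```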
Thank you very much for your reply. I will try it.
I had this same error: "failed to pull images... rpc error ...unknown service...". I had raised a ticket and was redirected to this page.
Following the steps given at the top solved my problem for K8s 1.24 on RHEL 8.2.
However, the same steps did not help me with K8s 1.24 on CentOS 7.9; there I continue to receive the same error message! :(
[root@controlplane1 ~]$ kubeadm config images pull --v=5
I0527 14:59:20.357743 923 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0527 14:59:20.364564 923 interface.go:432] Looking for default routes with IPv4 addresses
I0527 14:59:20.364594 923 interface.go:437] Default route transits interface "enp5s0"
I0527 14:59:20.367026 923 interface.go:209] Interface enp5s0 is up
I0527 14:59:20.367113 923 interface.go:257] Interface "enp5s0" has 2 addresses :[10.10.32.16/21 fe80::96c6:91ff:fe3c:ad5c/64].
I0527 14:59:20.367340 923 interface.go:224] Checking addr 10.10.32.16/21.
I0527 14:59:20.367358 923 interface.go:231] IP found 10.10.32.16
I0527 14:59:20.367368 923 interface.go:263] Found valid IPv4 address 10.10.32.16 for interface "enp5s0".
I0527 14:59:20.367377 923 interface.go:443] Found active IP 10.10.32.16
I0527 14:59:20.367404 923 kubelet.go:214] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0527 14:59:20.370303 923 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
exit status 1
output: time="2022-05-27T14:59:21+05:30" level=fatal msg="pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
, error
k8s.io/kubernetes/cmd/kubeadm/app/util/runtime.(*CRIRuntime).PullImage
cmd/kubeadm/app/util/runtime/runtime.go:121
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
cmd/kubeadm/app/cmd/config.go:340
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
cmd/kubeadm/app/cmd/config.go:312
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1571
failed to pull image "k8s.gcr.io/kube-apiserver:v1.24.1"
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
cmd/kubeadm/app/cmd/config.go:341
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
cmd/kubeadm/app/cmd/config.go:312
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
cmd/kubeadm/app/kubeadm.go:50
main.main
cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:250
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1571
Can someone please help.
Thou Saved The Day
CentOS 7
Linux 5.19.1-1.el7.elrepo.x86_64
Case 1: Kubernetes 1.2x binary installation reports this error. Solution:
containerd: v1.6.4
kubelet: 1.24.6
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml
sed -i 's/snapshotter = "overlayfs"/snapshotter = "native"/' /etc/containerd/config.toml
Case 2: Kubespray installing Kubernetes 1.25.3 reports this error. Solution:
containerd: v1.6.9
kubelet: 1.25.3
sed -i 's@# containerd_snapshotter: "native"@containerd_snapshotter: "native"@g' inventory/mycluster/group_vars/all/containerd.yml
Then rerun kubespray
In my case the Docker installation path was wrong, which caused the same issue; the kubelet exited with status 1.
OS: Ubuntu 20.04
1. Install Docker following the official guide (the command apt install docker.io does not install the latest version of Docker, and there can be issues with the containerd installation): https://docs.docker.com/engine/install/ubuntu/ - it installs the latest version of Docker as well as containerd.
2. After installation there will be an /etc/containerd/config.toml by default; just delete it.
3. systemctl restart containerd
4. systemctl restart kubelet, then kubeadm init
Got this error too
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: E0103 00:14:31.026921 5282 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2023-01-03T00:14:31Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
The following steps worked for me ->
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
# update SystemdCgroup to true in `/etc/containerd/config.toml`
sudo systemctl restart containerd
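After restarting containerd, one way to confirm the fix before rerunning kubeadm is to query the CRI endpoint directly. A sketch (assumes crictl is installed, which kubeadm setups usually have; guarded so it degrades gracefully elsewhere):

```shell
# Sanity check: ask the CRI endpoint for its status before running kubeadm.
# Guarded so the script still succeeds on machines without crictl/containerd.
if command -v crictl >/dev/null 2>&1; then
  msg=$(sudo -n crictl --runtime-endpoint unix:///run/containerd/containerd.sock info \
        >/dev/null 2>&1 && echo "CRI endpoint is up" || echo "CRI endpoint not responding")
else
  msg="crictl not installed; skipping check"
fi
echo "$msg"
```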
I just ran apt-get upgrade and now my control plane and all workers are failing to run containerd, and thus also kubelet. The logs for sudo service containerd status show:
Jan 24 23:15:55 kube-master containerd[431]: time="2023-01-24T23:15:55.608380181-06:00" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 23:15:55 kube-master containerd[431]: time="2023-01-24T23:15:55.609315014-06:00" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config
It seems the apt-get upgrade reverted my changes to /etc/containerd/config.toml and set SystemdCgroup back to false, as well as systemd_cgroup. Why does this keep reverting? Additionally, why are these defaulted to false?
Seems like perhaps there should be some enhanced logic in the default config generation that detects whether systemd is in use and sets those values to true?
Update + solution: in my case, I had to set SystemdCgroup = true and systemd_cgroup = false. Leaving systemd_cgroup = true resulted in an error on containerd startup.
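The two similarly named keys are easy to mix up: SystemdCgroup (the capitalized runc option) should be true, while the legacy systemd_cgroup key should stay false. A sketch of the flip, demonstrated on a temp copy (the section names match containerd 1.6-era configs; on a real node the file is /etc/containerd/config.toml):

```shell
# Demonstrate flipping only the capitalized runc option, leaving the legacy
# systemd_cgroup key untouched, on a temp copy of the relevant sections.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  systemd_cgroup = false
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF
# sed is case-sensitive, so only SystemdCgroup flips, not systemd_cgroup:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
grep -iE 'systemd_?cgroup' "$cfg"
```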
Thank you, I solved the problem using your solution!
I have the same problem when I try to join the master node, and my containerd is active; nothing works.
Version of kubelet: 1.25.5-00
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-24T20:01:08Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
Got the same issue and fixed it by installing the containerd.io package from the Docker repository instead of the one from Ubuntu's repository.
See: https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
I have Ubuntu 22.04.2 on VMs and Raspberry Pis.
Also, it seems there is presently an issue retrieving the GPG key from https://packages.cloud.google.com/apt/doc/apt-key.gpg
I struggled with this too. The solution for me was to comment this line out in /etc/containerd/config.toml
disabled_plugins = ["cri"]
Refer to Three pages of Kubernetes Doc:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Steps:
1. Complete the prerequisites (pages 1 & 2)
2. Install the CRI from page 2 (for containerd & Docker: https://docs.docker.com/engine/install/ubuntu/)
3. Make changes in /etc/containerd/config.toml (page 2), then restart containerd
4. Install kubectl, kubeadm, kubelet (page 1), then systemctl daemon-reload
5. kubeadm init (page 3)
6. Install a CNI (for Weave Net: https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)
7. systemctl daemon-reload && kubectl get nodes
OR use a single shell script for both master-node and worker-node configuration:
script link: https://github.com/nivaran/Install_k8s_using_shell_script
When trying to install the kube master on a Debian image you get "unknown service runtime.v1alpha2.RuntimeService" because the installation of 'containerd' from the default Debian repository is not up to date (for reference: containerd/containerd#4581). To fix it I added the repository of containerd and installed from there; documentation on how to do it can be found here: https://docs.docker.com/engine/install/debian/#uninstall-old-versions.
My bash history from the new control plane is below; it works.
18 apt update
19 apt upgrade -y
20 reboot
21 cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
22 sudo modprobe overlay
23 sudo modprobe br_netfilter
24 cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
25 sudo sysctl --system
26 sudo apt-get update
27 wget https://github.com/containerd/containerd/releases/download/v1.7.1/containerd-1.7.1-linux-amd64.tar.gz
28 tar Cxzvf /usr/local containerd-1.7.1-linux-amd64.tar.gz
29 systemctl daemon-reload
30 systemctl enable --now containerd
31 nano /usr/local/lib/systemd/system/containerd.service
32 systemctl status sshd
33 ls -lh /lib/systemd/system/
34 nano /lib/systemd/system/containerd.service
35 systemctl daemon-reload
36 systemctl status containerd.service
37 nano /lib/systemd/system/containerd.service
38 ls /usr/local/bin/containerd
39 systemctl enable --now containerd
40 wget https://github.com/opencontainers/runc/releases/download/v1.1.7/runc.amd64
41 install -m 755 runc.amd64 /usr/local/sbin/runc
42 wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
43 tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
44 systemctl status containerd.service
45 systemctl restart containerd.service
46 systemctl status containerd.service
47 containerd -v
48 sudo mkdir -p /etc/containerd
49 sudo containerd config default | sudo tee /etc/containerd/config.tom
50 sudo containerd config default | sudo tee /etc/containerd/config.toml
51 ls /etc/containerd/
52 rm /etc/containerd/config.tom
53 sudo systemctl restart containerd
54 sudo systemctl status containerd
55 sudo swapoff -a
56 nano /etc/fstab
57 sudo apt-get update && sudo apt-get install -y apt-transport-https curl
58 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
59 cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
60 sudo apt-get update
61 sudo apt-get install -y kubelet=1.26.0-00 kubeadm=1.26.0-00 kubectl=1.26.0-00
62 sudo apt-mark hold kubelet kubeadm kubectl
63 sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.26.0
It got fixed for me finally:
Update containerd to the latest version and fix the toml file with the change below:
disabled_plugins = ["cri"] -->> disabled_plugins = [""]
Done.
Thank me later ;)