kube-apiserver logs
E0427 01:08:34.346036 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": the server could not find the requested resource, Header: map[Content-Length:[86] Content-Type:[text/plain] Date:[Wed, 27 Apr 2022 01:08:34 GMT] X-Content-Type-Options:[nosniff]]
I0427 01:08:34.346050 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0427 01:10:34.343692 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 500, Body: Internal Server Error: "/openapi/v2": the server could not find the requested resource, Header: map[Content-Length:[86] Content-Type:[text/plain] Date:[Wed, 27 Apr 2022 01:10:34 GMT] X-Content-Type-Options:[nosniff]]
kube-controller-manager logs
W0426 09:25:35.005422 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1\": the server could not find the requested resource") has prevented the request from succeeding]
What you expected to happen:
Working HPA functionality: the HPA should be able to read CPU metrics from the Metrics API and compute the desired replica count.
Anything else we need to know?:
Metrics-server is fetching metrics correctly for pods and nodes via the top command:
kubectl top pods -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default hpa-demo-deployment-75f866567d-rwkpx 1m 7Mi
kube-system coredns-64897985d-5vkk5 2m 17Mi
kube-system etcd-vlab033302 9m 278Mi
kube-system kube-apiserver-vlab033302 26m 264Mi
kube-system kube-controller-manager-vlab033302 8m 55Mi
kube-system kube-flannel-ds-26cdw 1m 22Mi
kube-system kube-flannel-ds-lblbx 2m 35Mi
kube-system kube-proxy-bgb7q 1m 18Mi
kube-system kube-proxy-trgrt 1m 26Mi
kube-system kube-scheduler-vlab033302 2m 20Mi
kube-system metrics-server-77ffdc8fbc-l55wn 1m 7Mi
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods
{"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"},"items":[{"metadata":{"name":"hpa-demo-deployment-75f866567d-lvl2s","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/hpa-demo-deployment-75f866567d-lvl2s","creationTimestamp":"2022-04-27T11:37:39Z"},"timestamp":"2022-04-27T11:37:05Z","window":"30s","containers":[{"name":"hpa-demo-deployment","usage":{"cpu":"24707n","memory":"5884Ki"}}]}]}
kubectl get --raw /apis/metrics.k8s.io/v1beta1/
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}
However, when I run kubectl describe hpa, the current CPU usage comes back as <unknown> / 20% with the following warnings (the failing label-selector query is reproduced as a standalone command after the events):
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 20%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods?labelSelector=name%!D(MISSING)hpa-demo-deployment\": the server could not find the requested resource") has prevented the request from succeeding (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 58s (x261 over 66m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: an error on the server ("Internal Server Error: \"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods?labelSelector=name%3Dhpa-demo-deployment\": the server could not find the requested resource") has prevented the request from succeeding (get pods.metrics.k8s.io)
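For reference, the failing request from the event above can be reproduced directly against the aggregated API; the selector value below is taken verbatim from the HPA error and assumes the deployment's pods carry the label name=hpa-demo-deployment:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods?labelSelector=name%3Dhpa-demo-deployment"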
kube-apiserver manifest
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=10.204.108.18
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379/
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local/
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --runtime-config=api/all=true
Environment:
OS: CentOS 7.8
Kubernetes distribution (GKE, EKS, Kubeadm, the hard way, etc.): kubeadm
Container Network Setup (flannel, calico, etc.): Tried both flannel and calico; the same issue occurs with either.
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:52:18Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Metrics Server manifest
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.6
imagePullPolicy: IfNotPresent
command:
- /metrics-server
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP, ExternalIP, Hostname
ports:
- containerPort: 4443
name: https
protocol: TCP
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
hostNetwork: true
Kubelet config:
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 3600s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 10m10s
cacheUnauthorizedTTL: 10s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
flushFrequency: 0
options:
json:
infoBufferSize: "0"
verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
Metrics server logs:
I0427 11:03:13.715206 1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0427 11:03:14.423638 1 secure_serving.go:116] Serving securely on [::]:4443
E0427 11:03:15.579508 1 webhook.go:196] Failed to make webhook authorizer request: the server could not find the requested resource
E0427 11:03:15.579593 1 errors.go:77] the server could not find the requested resource
E0427 11:03:15.621591 1 webhook.go:196] Failed to make webhook authorizer request: the server could not find the requested resource
E0427 11:03:15.621668 1 errors.go:77] the server could not find the requested resource
Status of Metrics API:
kubectl describe apiservice v1beta1.metrics.k8s.io
Name: v1beta1.metrics.k8s.io
Namespace:
Labels: k8s-app=metrics-server
Annotations: <none>
API Version: apiregistration.k8s.io/v1
Kind: APIService
Metadata:
Creation Timestamp: 2022-04-27T10:58:18Z
Resource Version: 1431305
UID: 46388641-1f14-47f6-8cac-d67f3e7e3b78
Spec:
Group: metrics.k8s.io
Group Priority Minimum: 100
Insecure Skip TLS Verify: true
Service:
Name: metrics-server
Namespace: kube-system
Port: 443
Version: v1beta1
Version Priority: 100
Status:
Conditions:
Last Transition Time: 2022-04-27T11:03:13Z
Message: all checks passed
Reason: Passed
Status: True
Type: Available
Events: <none>
Any help is highly appreciated.
/kind bug
In the manifest you can see you run k8s.gcr.io/metrics-server-amd64:v0.3.6,
which is a very old version that doesn't work with newer K8s versions.
Please read https://github.com/kubernetes-sigs/metrics-server#compatibility-matrix
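If you upgrade, the simplest path is usually to re-apply the latest upstream manifest (a sketch assuming the stock upstream install; re-add flags such as --kubelet-insecure-tls if your environment needs them):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml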
Are you sure the spec.containers.resources.requests.cpu
value is specified inside the deployment?
In order for the HPA to calculate CPU usage as a percentage of the request, it needs to know how much CPU was requested for that pod. The same goes for memory.
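For example, the deployment's pod template should contain something along these lines (container name, image and values are illustrative):
spec:
  containers:
  - name: hpa-demo-deployment
    image: k8s.gcr.io/hpa-example   # illustrative image
    resources:
      requests:
        cpu: 100m
        memory: 64Mi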
I would try deploying this HPA demo by DigitalOcean to see if the metrics show up:
kubectl apply -f https://docs.digitalocean.com/products/kubernetes/resources/hpa.yaml
Kubernetes metrics don't work with K8s v1.30.x and cgroup-driver=cgroupfs (the default); see cri-o/cri-o#8034.
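A quick way to check which cgroup driver the kubelet is actually using (assuming the default kubeadm config path):
grep cgroupDriver /var/lib/kubelet/config.yaml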