kind: Pod
apiVersion: v1
metadata:
  name: example
  namespace: default
  selfLink: /api/v1/namespaces/default/pods/example
  uid: 5cc30063-0265780783bc
  resourceVersion: '165032'
  creationTimestamp: '2019-02-13T20:31:37Z'
  labels: 1
    app: hello-openshift
  annotations:
    openshift.io/scc: anyuid
spec:
  restartPolicy: Always 2
  serviceAccountName: default
  imagePullSecrets:
    - name: default-dockercfg-5zrhb
  priority: 0
  schedulerName: default-scheduler
  terminationGracePeriodSeconds: 30
  nodeName: ip-10-0-140-16.us-east-2.compute.internal
  securityContext: 3
    seLinuxOptions:
      level: 's0:c11,c10'
  containers: 4
    - resources: {}
      terminationMessagePath: /dev/termination-log
      name: hello-openshift
      securityContext:
        capabilities:
          drop:
            - MKNOD
        procMount: Default
      ports:
        - containerPort: 8080
          protocol: TCP
      imagePullPolicy: Always
      volumeMounts: 5
        - name: default-token-wbqsl
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePolicy: File
      image: registry.redhat.io/openshift4/ose-logging-eventrouter:v4.3 6
  serviceAccount: default 7
  volumes: 8
    - name: default-token-wbqsl
      secret:
        secretName: default-token-wbqsl
        defaultMode: 420
  dnsPolicy: ClusterFirst
status:
  phase: Pending
  conditions:
    - type: Initialized
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2019-02-13T20:31:37Z'
    - type: Ready
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2019-02-13T20:31:37Z'
      reason: ContainersNotReady
      message: 'containers with unready status: [hello-openshift]'
    - type: ContainersReady
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2019-02-13T20:31:37Z'
      reason: ContainersNotReady
      message: 'containers with unready status: [hello-openshift]'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2019-02-13T20:31:37Z'
  hostIP: 10.0.140.16
  startTime: '2019-02-13T20:31:37Z'
  containerStatuses:
    - name: hello-openshift
      state:
        waiting:
          reason: ContainerCreating
      lastState: {}
      ready: false
      restartCount: 0
      image: openshift/hello-openshift
      imageID: ''
  qosClass: BestEffort

Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. One label in this example is app: hello-openshift.
The pod restart policy with possible values Always, OnFailure, and Never. The default value is Always.
OpenShift Container Platform defines a security context for containers which specifies whether they are allowed to run as privileged containers, run as a user of their choice, and more. The default context is very restrictive but administrators can modify this as needed.
containers specifies an array of one or more container definitions.
The container specifies where external storage volumes are mounted within the container. In this case, there is a volume for storing access to credentials the registry needs for making requests against the OpenShift Container Platform API.
Each container in the pod is instantiated from its own container image.
Pods making requests against the OpenShift Container Platform API is a common enough pattern that there is a serviceAccount field for specifying which service account user the pod should authenticate as when making the requests. This enables fine-grained access control for custom infrastructure components.
The pod defines storage volumes that are available to its container(s) to use. In this case, it provides an ephemeral volume for the registry storage and a secret volume containing the service account credentials.
This pod definition does not include attributes that are filled by OpenShift Container Platform automatically after the pod is created and its lifecycle begins. The Kubernetes pod documentation has details about the functionality and purpose of pods.
As an administrator, you can view the pods in your cluster and determine the health of those pods and the cluster as a whole.
OpenShift Container Platform leverages the Kubernetes concept of a pod , which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance (physical or virtual) to a container. You can view a list of pods associated with a specific project or view usage statistics about pods.
You can view a list of pods associated with the current project, including the number of replicas, the current status, the number of restarts, and the age of the pod.
Procedure
To view the pods in a project: Change to the project:
$ oc project <project-name>
Run the following command:
$ oc get pods
For example:
$ oc get pods -n openshift-console
NAME                       READY   STATUS    RESTARTS   AGE
console-698d866b78-bnshf   1/1     Running   2          165m
console-698d866b78-m87pm   1/1     Running   2          165m
Add the -o wide flag to view the pod IP address and the node where the pod is located.
$ oc get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP            NODE                           NOMINATED NODE
console-698d866b78-bnshf   1/1     Running   2          166m   10.128.0.24   ip-10-0-152-71.ec2.internal    <none>
console-698d866b78-m87pm   1/1     Running   2          166m   10.129.0.23   ip-10-0-173-237.ec2.internal   <none>
You can display usage statistics about pods, which provide the runtime environments for Containers. These usage statistics include CPU, memory, and storage consumption.
Prerequisites
You must have the cluster-reader permission to view the usage statistics.
Metrics must be installed to view the usage statistics.
Procedure
To view the usage statistics: Run the following command:
$ oc adm top pods
For example:
$ oc adm top pods -n openshift-console
NAME                         CPU(cores)   MEMORY(bytes)
console-7f58c69899-q8c8k     0m           22Mi
console-7f58c69899-xhbgg     0m           25Mi
downloads-594fcccf94-bcxk8   3m           18Mi
downloads-594fcccf94-kv4p6   2m           15Mi
Run the following command to view the usage statistics for pods with labels:
$ oc adm top pod --selector=''
You must choose the selector (label query) to filter on. Supports =, ==, and !=.
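For example, to restrict the output to pods that carry the app=hello-openshift label used in the pod definition earlier in this document (any label on your pods works the same way):

$ oc adm top pod --selector='app=hello-openshift'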
As an administrator, you can create and maintain an efficient cluster for pods. By keeping your cluster efficient, you can provide a better environment for your developers by using tools that control what a pod does when it exits, ensure that the required number of pods is always running, restart pods that are designed to run only once, limit the bandwidth available to pods, and keep pods running during disruptions.
A pod restart policy determines how OpenShift Container Platform responds when Containers in that pod exit. The policy applies to all Containers in that pod.
The possible values are:
Always - Tries restarting a successfully exited Container on the pod continuously, with an exponential back-off delay (10s, 20s, 40s) until the pod is restarted. The default is Always.
OnFailure - Tries restarting a failed Container on the pod with an exponential back-off delay (10s, 20s, 40s) capped at 5 minutes.
Never - Does not try to restart exited or failed Containers on the pod. Pods immediately fail and exit.
After the pod is bound to a node, the pod will never be bound to another node. This means that a controller is necessary in order for a pod to survive node failure:
Condition | Controller Type | Restart Policy
---|---|---
Pods that are expected to terminate (such as batch computations) | Job | OnFailure or Never
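As an illustration of the Job row above, a one-off batch pod would set the policy explicitly. The following is a minimal sketch; the image and command are placeholders, not taken from this document:

apiVersion: v1
kind: Pod
metadata:
  name: batch-task-example   # illustrative name
spec:
  restartPolicy: OnFailure   # restart only if the task fails
  containers:
  - name: task
    image: busybox           # placeholder image
    command: ["/bin/sh", "-c", "echo done"]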
1.3.2. Limiting the bandwidth available to pods

You can apply quality-of-service traffic shaping to a pod and effectively limit its available bandwidth. Egress traffic (from the pod) is handled by policing, which simply drops packets in excess of the configured rate. Ingress traffic (to the pod) is handled by shaping queued packets to effectively handle data. The limits you place on a pod do not affect the bandwidth of other pods.

Procedure
To limit the bandwidth on a pod:
Write an object definition JSON file, and specify the data traffic speed using the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations. For example, to limit both pod egress and ingress bandwidth to 10M/s:
Limited Pod Object Definition

{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "iperf-slow",
        "annotations": {
            "kubernetes.io/ingress-bandwidth": "10M",
            "kubernetes.io/egress-bandwidth": "10M"
        }
    },
    "spec": {
        "containers": [
            {
                "image": "openshift/hello-openshift",
                "name": "hello-openshift"
            }
        ]
    }
}

Create the pod using the object definition:

$ oc create -f <file_or_dir_path>

1.3.3. Understanding how to use pod disruption budgets to specify the number of pods that must be up
A pod disruption budget is part of the Kubernetes API, which can be managed with oc commands like other object types. For example:

$ oc get poddisruptionbudget --all-namespaces
NAMESPACE         NAME          MIN-AVAILABLE   SELECTOR
another-project   another-pdb   4               bar=foo
test-project      my-pdb        2               foo=bar

The PodDisruptionBudget is considered healthy when there are at least minAvailable pods running in the system.
1.3.3.1. Specifying the number of pods that must be up with pod disruption budgets
You can use a PodDisruptionBudget object to specify the minimum number or percentage of replicas that must be up at a time.
Procedure

To configure a pod disruption budget:

Create a YAML file with an object definition similar to the following:

apiVersion: policy/v1beta1 1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
spec:
  minAvailable: 2 2
  selector: 3
    matchLabels:
      foo: bar
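After saving the definition, create the object in the cluster and verify it; the file name below is illustrative:

$ oc create -f pod-disruption-budget.yaml
$ oc get poddisruptionbudget my-pdb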
1.3.4. Preventing pod removal using critical pods

There are a number of core components that are critical to a fully functional cluster, but run on a regular cluster node rather than the master. A cluster might stop working properly if a critical add-on is evicted. Pods marked as critical are not allowed to be evicted.

Procedure
To make a pod critical:
Create a pod specification or edit existing pods to include the system-cluster-critical priority class:
spec:
  template:
    metadata:
      name: critical-pod
    priorityClassName: system-cluster-critical 1
1.4. Automatically scaling pods

As a developer, you can use a horizontal pod autoscaler (HPA) to specify how OpenShift Container Platform should automatically increase or decrease the scale of a replication controller or deployment configuration, based on metrics collected from the pods that belong to that replication controller or deployment configuration.

1.4.1. Understanding horizontal pod autoscalers

You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, as well as the CPU utilization or memory utilization your pods should target.
Important
Autoscaling for Memory Utilization is a Technology Preview feature only.
After you create a horizontal pod autoscaler, OpenShift Container Platform begins to query the CPU and/or memory resource metrics on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric utilization to the desired metric utilization, and scales up or down accordingly. The query and scaling occur at a regular interval, but it can take one to two minutes before metrics become available.
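As a rough worked example of that ratio (the exact algorithm belongs to the upstream Kubernetes autoscaler, not to this document): if 4 replicas average 200 millicores of CPU against a 100 millicore target, the autoscaler computes ceil(4 × 200 / 100) = 8 and scales the workload to 8 replicas; if per-pod usage later falls to 40 millicores, ceil(8 × 40 / 100) = 4 scales it back down.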
For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployment configurations, scaling corresponds directly to the replica count of the deployment configuration. Note that autoscaling applies only to the latest deployment in the Complete phase.
1.4.1.1. Supported metrics

The following metrics are supported by horizontal pod autoscalers:

Table 1.1. Metrics
Important
For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average: An increase in replica count must lead to an overall decrease in memory (working set) usage per-pod. A decrease in replica count must lead to an overall increase in per-pod memory usage. Use the OpenShift Container Platform web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling.

1.4.2. Creating a horizontal pod autoscaler for CPU utilization
You can create a horizontal pod autoscaler (HPA) for an existing DeploymentConfig or ReplicationController object that automatically scales the Pods associated with that object in order to maintain the CPU usage you specify.
The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all Pods.
When autoscaling for CPU utilization, you can use the oc autoscale command and specify the minimum and maximum number of pods you want to run at any given time and the average CPU utilization your pods should target.
Prerequisites
In order to use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage.
$ oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal

Name:         openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
Namespace:    openshift-kube-scheduler
Labels:       <none>
Annotations:  <none>
API Version:  metrics.k8s.io/v1beta1
Containers:
  Name:  wait-for-host-port
  Usage:
    Memory:  0
  Name:  scheduler
  Usage:
    Cpu:     8m
    Memory:  45440Ki
Kind:         PodMetrics
Metadata:
  Creation Timestamp:  2019-05-23T18:47:56Z
  Self Link:           /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
Timestamp:  2019-05-23T18:47:56Z
Window:     1m0s
Events:     <none>

Procedure
To create a horizontal pod autoscaler for CPU utilization:
Perform one of the following:
To scale based on the percent of CPU utilization, create a horizontal pod autoscaler for an existing DeploymentConfig:
$ oc autoscale dc/<dc-name> \ 1
    --min <number> \ 2
    --max <number> \ 3
    --cpu-percent=<percent> 4
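For example, assuming a deployment configuration named frontend (the name is hypothetical), the following keeps between 3 and 7 replicas and targets 75% CPU utilization:

$ oc autoscale dc/frontend --min 3 --max 7 --cpu-percent=75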
1.4.3. Creating a horizontal pod autoscaler object for memory utilization

You can create a horizontal pod autoscaler (HPA) for an existing DeploymentConfig or ReplicationController object that automatically scales the Pods associated with that object in order to maintain the average memory utilization you specify, either a direct value or a percentage of requested memory. The HPA increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all Pods. For memory utilization, you can specify the minimum and maximum number of Pods and the average memory utilization your Pods should target. If you do not specify a minimum, the Pods are given default values from the OpenShift Container Platform server.
Important
Autoscaling for memory utilization is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Prerequisites
In order to use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage.
$ oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler

Name:         openshift-kube-scheduler-ip-10-0-129-223.compute.internal
Namespace:    openshift-kube-scheduler
Labels:       <none>
Annotations:  <none>
API Version:  metrics.k8s.io/v1beta1
Containers:
  Name:  scheduler
  Usage:
    Cpu:     2m
    Memory:  41056Ki
  Name:  wait-for-host-port
  Usage:
    Memory:  0
Kind:         PodMetrics
Metadata:
  Creation Timestamp:  2020-02-14T22:21:14Z
  Self Link:           /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal
Timestamp:  2020-02-14T22:21:14Z
Window:     5m0s
Events:     <none>

Procedure
To create a horizontal pod autoscaler for memory utilization:
Create a YAML file for one of the following:
To scale for a specific memory value, create a HorizontalPodAutoscaler object with a definition similar to the following:
apiVersion: autoscaling/v2beta2 1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-resource-metrics-memory 2
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: v1 3
    kind: ReplicationController 4
    name: example 5
  minReplicas: 1 6
  maxReplicas: 10 7
  metrics: 8
  - type: Resource
    resource:
      name: memory 9
      target:
        type: AverageValue 10
        averageValue: 500Mi 11
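After saving the file (the file name below is illustrative), create the autoscaler and confirm that it exists:

$ oc create -f hpa-resource-metrics-memory.yaml
$ oc get hpa hpa-resource-metrics-memory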
1.4.4. Understanding horizontal pod autoscaler status conditions
You can use the status conditions set to determine whether or not the horizontal pod autoscaler (HPA) is able to scale and whether or not it is currently restricted in any way.
The HPA status conditions are available with the v2beta2 version of the autoscaling API. For example:
$ oc describe hpa cm-test
Name: cm-test
Namespace: prom
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000
Reference: ReplicationController/cm-test
Metrics: ( current / target )
"http_requests" on pods: 66m / 500m
Min replicas: 1
Max replicas: 4
ReplicationController pods: 1 current / 1 desired
Conditions: 1
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request
ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range
Events:
Prerequisites
In order to use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with Cpu and Memory displayed under Usage.
$ oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal

Name:         openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
Namespace:    openshift-kube-scheduler
Labels:       <none>
Annotations:  <none>
API Version:  metrics.k8s.io/v1beta1
Containers:
  Name:  wait-for-host-port
  Usage:
    Memory:  0
  Name:  scheduler
  Usage:
    Cpu:     8m
    Memory:  45440Ki
Kind:         PodMetrics
Metadata:
  Creation Timestamp:  2019-05-23T18:47:56Z
  Self Link:           /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
Timestamp:  2019-05-23T18:47:56Z
Window:     1m0s
Events:     <none>

Procedure

To view the status conditions on a pod, use the following command with the name of the horizontal pod autoscaler:

$ oc describe hpa <hpa-name>

For example:

$ oc describe hpa cm-test
The conditions appear in the Conditions field in the output.
Name: cm-test
Namespace: prom
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000
Reference: ReplicationController/cm-test
Metrics: ( current / target )
"http_requests" on pods: 66m / 500m
Min replicas: 1
Max replicas: 4
ReplicationController pods: 1 current / 1 desired
Conditions: 1
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request
ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range
1.4.5. Additional resources

For more information on replication controllers and deployment controllers, see Understanding Deployments and DeploymentConfigs.

1.5. Providing sensitive data to pods

Some applications need sensitive information, such as passwords and user names, that you do not want developers to have. As an administrator, you can use Secret objects to provide this information without exposing that information in clear text.

1.5.1. Understanding secrets
The Secret object type provides a mechanism to hold sensitive information such as passwords, client configuration files, and private source repository credentials. Secrets decouple sensitive content from the pods that use it.
YAML Secret Object Definition
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
  namespace: my-namespace
type: Opaque 1
data: 2
  username: dmFsdWUtMQ0K 3
  password: dmFsdWUtMg0KDQo=
stringData: 4
  hostname: myapp.mydomain.com 5

Indicates the structure of the secret's key names and values.
The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary.
The value associated with keys in the data map must be base64 encoded.
Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field.
The value associated with keys in the stringData map is made up of plain text strings.
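For example, the username value in the sample secret above decodes back to plain text with any standard base64 tool; the trailing bytes are a carriage return and newline that were part of the encoded value:

$ echo 'dmFsdWUtMQ0K' | base64 --decode
value-1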
You must create a secret before creating the pods that depend on that secret.
When creating secrets:
Create a secret object with secret data.
Update the pod’s service account to allow the reference to the secret.
Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume).
1.5.1.1. Types of secrets
The value in the type field indicates the structure of the secret's key names and values.
1.5.1.2. Example secret configurations

The following are sample secret configuration files.

YAML Secret That Will Create Four Files
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: dmFsdWUtMQ0K 1
  password: dmFsdWUtMQ0KDQo= 2
stringData:
  hostname: myapp.mydomain.com 3
  secret.properties: |- 4
    property1=valueA
    property2=valueB

File contains decoded values.
File contains decoded values.
File contains the provided string.
File contains the provided data.

YAML of a Pod Populating Files in a Volume with Secret Data
apiVersion: v1
kind: Pod
metadata:
  name: secret-example-pod
spec:
  containers:
    - name: secret-test-container
      image: busybox
      command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ]
      volumeMounts:
        # name must match the volume name below
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
  restartPolicy: Never

YAML of a Pod Populating Environment Variables with Secret Data
apiVersion: v1
kind: Pod
metadata:
  name: secret-example-pod
spec:
  containers:
    - name: secret-test-container
      image: busybox
      command: [ "/bin/sh", "-c", "export" ]
      env:
        - name: TEST_SECRET_USERNAME_ENV_VAR
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: username
  restartPolicy: Never

YAML of a Build Config Populating Environment Variables with Secret Data
apiVersion: v1
kind: BuildConfig
metadata:
  name: secret-example-bc
spec:
  strategy:
    sourceStrategy:
      env:
        - name: TEST_SECRET_USERNAME_ENV_VAR
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: username

1.5.1.3. Secret data keys

Secret keys must be in a DNS subdomain.

1.5.2. Understanding how to create secrets
As an administrator, you must create a secret before developers can create the pods that depend on that secret.
When creating secrets:
Create a secret object with secret data.
Update the pod’s service account to allow the reference to the secret.
Create a pod, which consumes the secret as an environment variable or as a file (using a secret volume).
1.5.2.1. Secret creation restrictions
To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways:
To populate environment variables for Containers.
As files in a volume mounted on one or more of its Containers.
By kubelet when pulling images for the pod.
Volume type secrets write data into the Container as a file using the volume mechanism. Image pull secrets use service accounts for the automatic injection of the secret into all pods in a namespace.
When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret.
1.5.2.2. Creating an opaque secret
As an administrator, you can create an opaque secret, which allows you to store unstructured key:value pairs that can contain arbitrary values.
Procedure
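A minimal opaque secret definition and the command to create it might look like the following; the object name, key names, and values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: example-secret   # placeholder name
type: Opaque
stringData:
  username: myuser       # placeholder value
  password: mypassword   # placeholder value

$ oc create -f example-secret.yaml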
Then:
Update the service account for the pod where you want to use the secret to allow the reference to the secret.
Create the pod, which consumes the secret as an environment variable or as a file (using a secret volume).
1.5.3. Understanding how to update secrets
When you modify the value of a secret, the value (used by an already running pod) will not dynamically change. To change a secret, you must delete the original pod and create a new pod (perhaps with an identical PodSpec).
Updating a secret follows the same workflow as deploying a new Container image. You can use the
1.5.4. About using signed certificates with secrets

To secure communication to your service, you can configure OpenShift Container Platform to generate a signed serving certificate/key pair that you can add into a secret in a project. A service serving certificate secret is intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters.

Service pod specification configured for a service serving certificates secret
apiVersion: v1
kind: Service
metadata:
  name: registry
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: registry-cert 1

Specify the name for the certificate.

Other pods can trust cluster-created certificates (which are only signed for internal DNS names), by using the CA bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA.
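Once the annotation is in place and the certificate has been generated, a pod in the same project could mount the resulting secret. The following is a minimal sketch that assumes the registry-cert secret name from the example above; the pod name, image, and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: registry-client          # placeholder name
spec:
  containers:
  - name: client
    image: registry.redhat.io/ubi8/ubi-minimal   # placeholder image
    volumeMounts:
    - name: serving-cert
      mountPath: /etc/pki/serving-cert           # placeholder path
      readOnly: true
  volumes:
  - name: serving-cert
    secret:
      secretName: registry-cert  # assumes the secret name from the annotation above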
1.5.5. Troubleshooting secrets

If a service certificate generation fails with (the service's service.alpha.openshift.io/serving-cert-generation-error annotation contains):

secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60

The service that generated the certificate no longer exists, or has a different serviceUID. You must force certificates regeneration by removing the old secret, and clearing the following annotations on the service:
$ oc delete secret <secret_name>
$ oc annotate service <service_name> service.alpha.openshift.io/serving-cert-generation-error-
$ oc annotate service <service_name> service.alpha.openshift.io/serving-cert-generation-error-num-
Note
The command removing the annotation has a - after the annotation name to be removed.
1.6. Using device plug-ins to access external resources with pods

Device plug-ins allow you to use a particular device type (GPU, InfiniBand, or other similar computing resources that require vendor-specific initialization and setup) in your OpenShift Container Platform pod without needing to write custom code.

1.6.1. Understanding device plug-ins

The device plug-in provides a consistent and portable solution to consume hardware devices across clusters. The device plug-in provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them.
Important
OpenShift Container Platform supports the device plug-in API, but the device plug-in Containers are supported by individual vendors.
A device plug-in is a gRPC service running on the nodes (external to the kubelet) that is responsible for managing specific hardware resources. Any device plug-in must support the following remote procedure calls (RPCs):
service DevicePlugin {
      // GetDevicePluginOptions returns options to be communicated with Device
      // Manager
      rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {}

      // ListAndWatch returns a stream of List of Devices
      // Whenever a Device state change or a Device disappears, ListAndWatch
      // returns the new list
      rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {}

      // Allocate is called during container creation so that the Device
      // Plug-in can run device specific operations and instruct Kubelet
      // of the steps to make the Device available in the container
      rpc Allocate(AllocateRequest) returns (AllocateResponse) {}

      // PreStartcontainer is called, if indicated by Device Plug-in during
      // registration phase, before each container start. Device plug-in
      // can run device specific operations such as reseting the device
      // before making devices available to the container
      rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {}
}

Example device plug-ins
1.6.1.1. Methods for deploying a device plug-in
1.6.2. Understanding the Device Manager

Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plug-ins known as device plug-ins. You can advertise specialized hardware without requiring any upstream code changes.
Important
OpenShift Container Platform supports the device plug-in API, but the device plug-in Containers are supported by individual vendors.
Device Manager advertises devices as Extended Resources. User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource.
Upon start, the device plug-in registers itself with Device Manager by invoking Register on /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests.
1.6.3. Enabling Device Manager

Enable Device Manager to implement a device plug-in to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plug-ins known as device plug-ins.

Obtain the label associated with the static Machine Config Pool CRD for the type of node you want to configure. Perform one of the following steps:

View the Machine Config:

# oc describe machineconfig <name>

For example:

# oc describe machineconfig 00-worker
Name: 00-worker
Namespace:
Labels: machineconfiguration.openshift.io/role=worker 1
Procedure
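The step that follows in the product documentation creates a custom resource that enables the device plug-in feature for the selected pool. The following KubeletConfig sketch is an assumption to verify against your release: the pool selector label and the feature gate shown are illustrative, not taken from this document:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: devicemgr
spec:
  machineConfigPoolSelector:
    matchLabels:
      machineconfiguration.openshift.io: devicemgr   # assumption: the Machine Config Pool carries this label
  kubeletConfig:
    feature-gates:
      - DevicePlugins=true   # assumption: gate name to confirm for your release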
1.7. Including pod priority in pod scheduling decisions
You can enable pod priority and preemption in your cluster. Pod priority indicates the importance of a pod relative to other pods and queues the pods based on that priority. Pod preemption allows the cluster to evict, or preempt, lower-priority pods so that higher-priority pods can be scheduled if there is no available space on a suitable node. Pod priority also affects the scheduling order of pods and out-of-resource eviction ordering on the node.
To use priority and preemption, you create priority classes that define the relative weight of your pods. Then, reference a priority class in the pod specification to apply that weight for scheduling.
Preemption is controlled by the disablePreemption parameter in the Scheduler Operator custom resource, which is set to false by default.
1.7.1. Understanding pod priority

When you use the Pod Priority and Preemption feature, the scheduler orders pending pods by their priority, and a pending pod is placed ahead of other pending pods with lower priority in the scheduling queue. As a result, the higher priority pod might be scheduled sooner than pods with lower priority if its scheduling requirements are met. If a pod cannot be scheduled, the scheduler continues to schedule other lower priority pods.

1.7.1.1. Pod priority classes

You can assign pods a priority class, which is a non-namespaced object that defines a mapping from a name to the integer value of the priority. The higher the value, the higher the priority. A priority class object can take any 32-bit integer value smaller than or equal to 1000000000 (one billion). Reserve numbers larger than one billion for critical pods that should not be preempted or evicted. By default, OpenShift Container Platform has two reserved priority classes for critical system pods to have guaranteed scheduling.

$ oc get priorityclasses
NAME                      CREATED AT
cluster-logging           2019-03-13T14:45:12Z
system-cluster-critical   2019-03-13T14:01:10Z
system-node-critical      2019-03-13T14:01:10Z
1.7.1.2. Pod priority names

After you have one or more priority classes, you can create pods that specify a priority class name in a pod specification. The priority admission controller uses the priority class name field to populate the integer value of the priority. If the named priority class is not found, the pod is rejected.

1.7.2. Understanding pod preemption
When a developer creates a pod, the pod goes into a queue. If the developer configured the pod for pod priority or preemption, the scheduler picks a pod from the queue and tries to schedule the pod on a node. If the scheduler cannot find space on an appropriate node that satisfies all the specified requirements of the pod, preemption logic is triggered for the pending pod.
When the scheduler preempts one or more pods on a node, the nominatedNodeName field of the higher-priority pod's spec is set to the name of the node, along with the nodeName field.
1.7.2.1. Pod preemption and other scheduler settings

If you enable pod priority and preemption, consider your other scheduler settings:
1.7.2.2. Graceful termination of preempted pods

When preempting a pod, the scheduler waits for the pod graceful termination period to expire, allowing the pod to finish working and exit. If the pod does not exit after the period, the scheduler kills the pod. This graceful termination period creates a time gap between the point that the scheduler preempts the pod and the time when the pending pod can be scheduled on the node. To minimize this gap, configure a small graceful termination period for lower-priority pods.

1.7.3. Configuring priority and preemption
You apply pod priority and preemption by creating a priority class object and associating pods to the priority by using the priorityClassName in your pod specifications.
Sample priority class object
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority 1
value: 1000000 2
globalDefault: false 3
description: "This priority class should be used for XYZ service pods only." 4

The name of the priority class object.
The priority value of the object.
Optional field that indicates whether this priority class should be used for pods without a priority class name specified. This field is false by default. Only one priority class with globalDefault set to true can exist in the cluster. If there is no priority class with globalDefault:true, the priority of pods with no priority class name is zero. Adding a priority class with globalDefault:true affects only pods created after the priority class is added and does not change the priorities of existing pods.
Optional arbitrary text string that describes which pods developers should use with this priority class.
Procedure
To configure your cluster to use priority and preemption:
Create one or more priority classes:
Specify a name and value for the priority.
Optionally specify the globalDefault field and a description.
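With the priority class file saved (the file name is illustrative), create the object and confirm that it is listed:

$ oc create -f high-priority.yaml
$ oc get priorityclasses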
Sample pod specification with priority class name
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  priorityClassName: high-priority 1
Specify the priority class to use with this pod.
Create the pod:
$ oc create -f <file-name>.yaml

You can add the priority name directly to the pod configuration or to a pod template.

1.7.4. Disabling priority and preemption

You can disable the pod priority and preemption feature. After the feature is disabled, the existing pods keep their priority fields, but preemption is disabled, and priority fields are ignored. If the feature is disabled, you cannot set a priority class name in new pods.
Important
Critical pods rely on scheduler preemption to be scheduled when a cluster is under resource pressure. For this reason, Red Hat recommends not disabling preemption. DaemonSet pods are scheduled by the DaemonSet controller and are not affected by disabling preemption.

Procedure
To disable the preemption for the cluster:
Edit the Scheduler Operator custom resource to add the disablePreemption: true parameter:
$ oc edit scheduler cluster

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: '2019-03-12T01:45:02Z'
  generation: 1
  name: example
  resourceVersion: '1882034'
  selfLink: /apis/config.openshift.io/v1/schedulers/example
  uid: 743701e9-4468-11e9-bd34-02a7fe1bf828
spec:
  disablePreemption: true

1.8. Placing pods on specific nodes using node selectors

A node selector specifies a map of key-value pairs. The rules are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the indicated key-value pairs as the label on the node. If you are using node affinity and node selectors in the same pod configuration, see the important considerations below.

1.8.1. Using node selectors to control pod placement

You can use node selector labels on pods to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet, Daemonset, or StatefulSet. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. You can add labels to a node or MachineConfig, but the labels will not persist if the node or machine goes down. Adding the label to the MachineSet ensures that new nodes or machines will have the label. You cannot add a node selector directly to an existing scheduled pod.

Prerequisite
To add a node selector to existing pods, determine the controlling object for that pod. For example, the router-default-66d5cf9464-7pwkc pod is controlled by the router-default-66d5cf9464 ReplicaSet:
$ oc describe pod router-default-66d5cf9464-7pwkc

Name:           router-default-66d5cf9464-7pwkc
Namespace:      openshift-ingress
Controlled By:  ReplicaSet/router-default-66d5cf9464
The web console lists the controlling object under ownerReferences in the pod YAML:
ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: router-default-66d5cf9464
    uid: d81dd094-da26-11e9-a48a-128e7edf0312
    controller: true
    blockOwnerDeletion: true

Procedure
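As an illustrative sketch of the end state (the label key and value are assumptions; the node name is taken from the earlier oc get pods -o wide output), a labeled node and a pod spec that selects it would look like this:

$ oc label node ip-10-0-152-71.ec2.internal region=east

apiVersion: v1
kind: Pod
metadata:
  name: example-pod         # placeholder name
spec:
  nodeSelector:
    region: east            # assumption: matches the label applied to the node above
  containers:
  - name: hello-openshift
    image: openshift/hello-openshift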