PROJECT_NAME
The name of the project. Required.
PROJECT_DISPLAYNAME
The display name of the project. May be empty.
PROJECT_DESCRIPTION
The description of the project. May be empty.
PROJECT_ADMIN_USER
The user name of the administrating user.
PROJECT_REQUESTING_USER
The user name of the requesting user.
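In the generated template (see the oc adm create-bootstrap-project-template command in the next section), these appear as a parameters list, roughly as follows:
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER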
Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default.
2.3.2. Modifying the template for new projects
As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements.
To create your own custom project template:
Procedure
- Log in as a user with cluster-admin privileges.
- Generate the default project template:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
- Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects.
- The project template must be created in the openshift-config namespace. Load your modified template:
$ oc create -f template.yaml -n openshift-config
- Edit the project configuration resource using the web console or CLI.
Using the web console:
Navigate to the Administration → Cluster Settings page.
Click Configuration to view all configuration resources.
Find the entry for Project and click Edit YAML.
Using the CLI:
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
- Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request.
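For example, assuming the uploaded template keeps the default name, the resulting configuration looks similar to the following:
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request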
2.3.3. Disabling project self-provisioning
You can prevent an authenticated user group from self-provisioning new projects.
Procedure
- Log in as a user with cluster-admin privileges.
- View the self-provisioners cluster role binding usage by running the following command:
$ oc describe clusterrolebinding.rbac self-provisioners
- If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command:
$ oc adm policy \
remove-cluster-role-from-group self-provisioner \
system:authenticated:oauth
Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state.
To update the role binding using the CLI:
Run the following command:
$ oc edit clusterrolebinding.rbac self-provisioners
In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false, as shown in the following example:
apiVersion: authorization.openshift.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "false"
# ...
To update the role binding by using a single command:
$ oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }'
Log in as an authenticated user and verify that the user can no longer self-provision a project:
$ oc new-project test
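If self-provisioning has been disabled for that user, the request is rejected with output similar to the following, matching the default message described in the next section:
Error from server (Forbidden): You may not request a new project via this API.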
2.3.4. Customizing the project request message
When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default:
You may not request a new project via this API.
Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example:
To request a project, contact your system administrator at projectname@example.com.
To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request.
To customize the project request message:
Procedure
- Edit the project configuration resource using the web console or CLI.
Using the web console:
Navigate to the Administration → Cluster Settings page.
Click Configuration to view all configuration resources.
Find the entry for Project and click Edit YAML.
Using the CLI:
Log in as a user with cluster-admin privileges.
Edit the project.config.openshift.io/cluster resource:
$ oc edit project.config.openshift.io/cluster
- Update the spec section to include the projectRequestMessage parameter and set the value to your custom message:
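For example, a minimal configuration using one of the sample messages shown above:
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestMessage: To request a project, contact your system administrator at projectname@example.com.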
After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
Chapter 3. Creating applications
3.1. Creating applications by using the Developer perspective
The Developer perspective in the web console provides you with the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform:
Getting started resources: Use these resources to help you get started with the Developer Console. You can choose to hide the header using the Options menu.
Creating applications using samples: Use existing code samples to get started with creating applications on OpenShift Container Platform.
Build with guided documentation: Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies.
Explore new developer features: Explore the new features and resources within the Developer perspective.
Developer catalog: Explore the Developer Catalog to select the required applications, services, or source-to-image builders, and then add them to your project.
All Services: Browse the catalog to discover services across OpenShift Container Platform.
Database: Select the required database service and add it to your application.
Operator Backed: Select and deploy the required Operator-managed service.
Helm chart: Select the required Helm chart to simplify deployment of applications and services.
Devfile: Select a devfile from the Devfile registry to declaratively define a development environment.
Event Source: Select an event source to register interest in a class of events from a particular system.
The Managed services option is also available if the RHOAS Operator is installed.
Git repository: Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git, From Devfile, or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform.
Container images: Use existing images from an image stream or registry to deploy them on OpenShift Container Platform.
Pipelines: Use Tekton pipelines to create CI/CD pipelines for your software delivery process on OpenShift Container Platform.
Serverless: Explore the Serverless options to create, build, and deploy stateless and serverless applications on OpenShift Container Platform.
Channel: Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations.
Samples: Explore the available sample applications to create, build, and deploy an application quickly.
Quick Starts: Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks.
From Local Machine: Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily.
Import YAML: Upload a YAML file to create and define resources for building and deploying applications.
Upload JAR file: Upload a JAR file to build and deploy Java applications.
Share my Project: Use this option to add or remove users to a project and provide accessibility options to them.
Helm Chart repositories: Use this option to add Helm Chart repositories in a namespace.
Re-ordering of resources: Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides.
Note that certain options, such as Pipelines, Event Source, and Import Virtual Machines, are displayed only when the OpenShift Pipelines Operator, OpenShift Serverless Operator, and OpenShift Virtualization Operator are installed, respectively.
3.1.2. Creating sample applications
You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly.
Prerequisites
- You have logged in to the OpenShift Container Platform web console and are in the Developer perspective.
Procedure
- In the +Add view, click the Samples tile to see the Samples page.
- On the Samples page, select one of the available sample applications to see the Create Sample Application form.
- In the Create Sample Application form:
In the Name field, the deployment name is displayed by default. You can modify this name as required.
In the Builder Image Version field, a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list.
A sample Git repository URL is added by default.
- Click Create to create the sample application. The build status of the sample application is displayed in the Topology view. After the sample application is created, you can see the deployment added to the application.
3.1.3. Creating applications by using Quick Starts
The
Quick Starts
page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks.
Prerequisites
-
You have logged in to the OpenShift Container Platform web console and are in the
Developer
perspective.
Procedure
- In the +Add view, click the Getting Started resources → Build with guided documentation → View all quick starts link to view the Quick Starts page.
- On the Quick Starts page, click the tile for the quick start that you want to use.
- Click Start to begin the quick start.
- Perform the steps that are displayed.
3.1.4. Importing a codebase from Git to create an application
You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub.
The following procedure walks you through the From Git option in the Developer perspective to create an application.
Procedure
- In the +Add view, click From Git in the Git Repository tile to see the Import from Git form.
- In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application: https://github.com/sclorg/nodejs-ex. The URL is then validated.
Optional: You can click Show Advanced Git Options to add details such as:
Git Reference to point to code in a specific branch, tag, or commit to be used to build the application.
Context Dir to specify the subdirectory for the application source code you want to use to build the application.
Source Secret to create a Secret Name with credentials for pulling your source code from a private repository.
Optional: You can import a Devfile, a Dockerfile, a Builder Image, or a Serverless Function through your Git repository to further customize your deployment.
If your Git repository contains a Devfile, a Dockerfile, a Builder Image, or a func.yaml, it is automatically detected and populated on the respective path fields.
If a Devfile, a Dockerfile, or a Builder Image is detected in the same repository, the Devfile is selected by default.
If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function.
Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL.
To edit the file import type and select a different strategy, click the Edit import strategy option.
If multiple Devfiles, Dockerfiles, or Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory.
After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, the Node.js builder image is selected by default.
Optional: Use the Builder Image Version drop-down list to specify a version.
Optional: Use the Edit import strategy option to select a different strategy.
Optional: For the Node.js builder image, use the Run command field to override the command to run the application.
In the General section:
In the Application field, enter a unique name for the application grouping, for example, myapp. Ensure that the application name is unique in a namespace.
The Name field, which identifies the resources created for this application, is automatically populated based on the Git repository URL if there are no existing applications. If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned.
The resource name must be unique in a namespace. Modify the resource name if you get an error.
In the Resources section, select:
Deployment, to create an application in plain Kubernetes style.
Deployment Config, to create an OpenShift Container Platform style application.
Serverless Deployment, to create a Knative service.
To set the default resource preference for importing an application, go to User Preferences → Applications → Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation.
In the Pipelines section, select Add Pipeline, and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application.
The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criteria are fulfilled:
The OpenShift Pipelines Operator is installed.
pipelines-as-code is enabled.
A .tekton directory is detected in the Git repository.
Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If the GitHub App is not set up, you can only see the Setup a webhook option:
Go to Settings → Webhooks and click Add webhook.
Set the Payload URL to the Pipelines as Code controller public URL.
Select the content type as application/json.
Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret.
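For example, one common way to generate such a secret (any sufficiently random string works):
$ openssl rand -hex 20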
Click Let me select individual events and select these events: Commit comments, Issue comments, Pull request, and Pushes.
Click Add webhook.
Optional: In the Advanced Options section, the Target port and the Create a route to the application check box are selected by default so that you can access your application using a publicly available URL.
If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number that you want to expose.
Optional: You can use the following advanced options to further customize your application:
Routing
By clicking the Routing link, you can perform the following actions:
Customize the hostname for the route.
Specify the path the router watches.
Select the target port for the traffic from the drop-down list.
Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists.
For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used.
Domain mapping
If you are creating a Serverless Deployment, you can add a custom domain mapping to the Knative service during creation.
In the Advanced options section, click Show advanced Routing options.
If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu.
If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com, the Create option is Create "example.com".
Health Checks
Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required.
To customize the health probes:
Click Add Readiness Probe, if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe.
Click Add Liveness Probe, if required, modify the parameters to check if a container is still running, and select the check mark to add the probe.
Click Add Startup Probe, if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe.
For each of the probes, you can specify the request type (HTTP GET, Container Command, or TCP Socket) from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, the number of seconds before performing the first probe after the container starts, the frequency of the probe, and the timeout value.
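As a reference, these form fields map to a standard Kubernetes probe definition on the container. The following is a minimal sketch of a readiness probe that assumes an HTTP GET check against /healthz on port 8080; the values shown are illustrative, not the exact defaults prepopulated by the console:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3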
Build Configuration and Deployment
Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables.
For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource.
Scaling
Click the Scaling link to define the number of pods or instances of the application you want to deploy initially.
If you are creating a serverless deployment, you can also configure the following settings:
Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting.
Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting.
Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time.
Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time.
Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic.
Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled to zero if no requests are received during this window. The default duration for the autoscale window is 60s. This is also known as the stable window.
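These settings map onto Knative autoscaling annotations and the containerConcurrency field of the Knative service. The following sketch shows where each one typically lands in the resource; the values are examples only:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "1"                        # Min Pods (minScale)
        autoscaling.knative.dev/max-scale: "10"                       # Max Pods (maxScale)
        autoscaling.knative.dev/target: "100"                         # Concurrency target
        autoscaling.knative.dev/target-utilization-percentage: "70"   # Concurrency utilization
        autoscaling.knative.dev/window: "60s"                         # Autoscale window (stable window)
    spec:
      containerConcurrency: 200                                       # Concurrency limit (hard limit)
      containers:
      - image: registry.example.com/example-app:latest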
Resource Limit
Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running.
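In the underlying container spec, these settings map to standard Kubernetes resource requests (guaranteed) and limits (allowed). A minimal sketch with example values:
resources:
  requests:
    cpu: 250m
    memory: 64Mi
  limits:
    cpu: 500m
    memory: 128Mi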
Labels
Click the Labels link to add custom labels to your application.
Click Create to create the application; a success notification is displayed. You can see the build status of the application in the Topology view.
3.1.5. Deploying a Java application by uploading a JAR file
You can use the web console Developer perspective to upload a JAR file by using the following options:
Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application.
Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application.
Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application.
Prerequisites
- The Cluster Samples Operator must be installed by a cluster administrator.
- You have access to the OpenShift Container Platform web console and are in the Developer perspective.
Procedure
- In the Topology view, right-click anywhere to view the Add to Project menu.
- Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view.
- In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file onto the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form.
- The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version by using the Builder Image Version drop-down list.
- Optional: In the Application Name field, enter a unique name for your application to use for resource labelling.
- In the Name field, enter a unique component name for the associated resources.
- Optional: Using the Advanced options → Resource type drop-down list, select a different resource type from the list of default resource types.
- In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application.
- Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs.
If you attempt to close the browser tab while the build is running, a web alert is displayed.
After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view.
3.1.6. Using the Devfile registry to access devfiles
You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry. A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry, you can use a preconfigured devfile to create an application.
Procedure
- Navigate to Developer Perspective → +Add → Developer Catalog → All Services. A list of all the available services in the Developer Catalog is displayed.
- Under Type, click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile by its name, tag, or description.
- Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and documentation of the devfile.
- Click Create to create an application and view the application in the Topology view.
3.1.7. Using the Developer Catalog to add services or components to your application
You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog.
Procedure
- In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog.
- Under All Services, select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service.
- Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view.
3.1.8. Additional resources
3.2. Creating applications from installed Operators
Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator.
This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console.
Additional resources
- See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform.
3.2.1. Creating an etcd cluster using an Operator
This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM).
Prerequisites
- Access to an OpenShift Container Platform 4.13 cluster.
- The etcd Operator already installed cluster-wide by an administrator.
Procedure
- Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd.
- Navigate to the Operators → Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator.
You can get this list from the CLI using:
$ oc get csv
- On the Installed Operators page, click the etcd Operator to view more details and available actions.
As shown under Provided APIs, this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet, but contain logic specific to managing etcd.
- Create a new etcd cluster:
In the etcd Cluster API box, click Create instance.
The next page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster.
- Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator.
- Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project.
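For example, a quick check from the CLI, assuming the my-etcd project used in this example:
$ oc get services -n my-etcd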
All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command:
$ oc policy add-role-to-user edit <user> -n <target_project>
You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications.
3.3. Creating applications by using the CLI
You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI.
The set of objects created by
new-app
depends on the artifacts passed as input: source repositories, images, or templates.
3.3.1. Creating an application from source code
With the new-app command you can create applications from source code in a local or remote Git repository.
The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image.
OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source builds, detects an appropriate language builder image.
To create an application from a Git repository in a local directory:
$ oc new-app /<path to source code>
If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command creates a binary build.
To create an application from a remote Git repository:
$ oc new-app https://github.com/sclorg/cakephp-ex
To create an application from a private remote Git repository:
$ oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret
If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that is injected into your build configuration to access the repository.
You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory:
$ oc new-app https://github.com/sclorg/s2i-ruby-container.git \
--context-dir=2.0/test/puma-test-app
Also, when specifying a remote URL, you can specify a Git branch to use by appending
#<branch_name>
to the end of the URL:
$ oc new-app https://github.com/openshift/ruby-hello-world.git#beta4
3.3.1.3. Build strategy detection
OpenShift Container Platform automatically determines which build strategy to use by detecting certain files:
If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy.
The
pipeline
build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead.
If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy.
If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy.
Override the automatically detected build strategy by setting the --strategy flag to docker, pipeline, or source.
$ oc new-app /home/user/code/myapp --strategy=docker
The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v.
3.3.1.4. Language detection
If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository:
Table 3.1. Languages detected by new-app
Language | Files
dotnet | project.json, *.csproj
jee | pom.xml
nodejs | app.json, package.json
perl | cpanfile, index.pl
php | composer.json, index.php
python | requirements.txt, setup.py
ruby | Gemfile, Rakefile, config.ru
scala | build.sbt
golang | Godeps, main.go
After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name.
You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out.
For example, to use the myproject/my-ruby image stream with the source in a remote repository:
$ oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git
To use the
openshift/ruby-20-centos7:latest
container image stream with the source in a local repository:
$ oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app
Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax.
The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available.
The -i <image> --code <repository> invocation requires new-app to clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image.
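For illustration, the two invocation forms described above might look like the following, using a hypothetical mysql builder image stream and the sample repository from the earlier examples:
$ oc new-app -i mysql https://github.com/openshift/ruby-hello-world
$ oc new-app -i mysql --code https://github.com/openshift/ruby-hello-world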
3.3.2. Creating an application from an image
You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server.
The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument.
If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes.
3.3.2.1. Docker Hub MySQL image
For example, to create an application from the Docker Hub MySQL image:
$ oc new-app mysql
3.3.2.2. Image in a private registry
To create an application using an image in a private registry, specify the full container image specification:
$ oc new-app myregistry:5000/example/myimage
3.3.2.3. Existing image stream and optional image stream tag
To create an application from an existing image stream and optional image stream tag:
$ oc new-app my-stream:v1
3.3.3. Creating an application from a template
You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application.
Upload an application template to your current project’s template library. The following example uploads an application template from a file called
examples/sample-app/application-template-stibuild.json
:
$ oc create -f examples/sample-app/application-template-stibuild.json
Then create a new application by referencing the application template. In this example, the template name is
ruby-helloworld-sample
:
$ oc new-app ruby-helloworld-sample
To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the
-f|--file
argument. For example:
$ oc new-app -f examples/sample-app/application-template-stibuild.json
3.3.3.1. Template parameters
When creating an application based on a template, use the
-p|--param
argument to set parameter values that are defined by the template:
$ oc new-app ruby-helloworld-sample \
-p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword
You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=-. The following is an example file called helloworld.params:
ADMIN_USERNAME=admin
ADMIN_PASSWORD=mypassword
Reference the parameters in the file when instantiating a template:
$ oc new-app ruby-helloworld-sample --param-file=helloworld.params
3.3.4. Modifying application creation
The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior.
Table 3.2. new-app output objects
Object | Description
BuildConfig | A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location.
ImageStreams | For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app, then an image stream is created for that image as well.
DeploymentConfig | A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object.
Service | The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services.
Other | Other objects can be generated when instantiating templates, according to the template.
3.3.4.1. Specifying environment variables
When generating applications from a template, source, or an image, you can use the
-e|--env
argument to pass environment variables to the application container at run time:
$ oc new-app openshift/postgresql-92-centos7 \
-e POSTGRESQL_USER=user \
-e POSTGRESQL_DATABASE=db \
-e POSTGRESQL_PASSWORD=password
The variables can also be read from a file using the --env-file argument. The following is an example file called postgresql.env:
:
POSTGRESQL_USER=user
POSTGRESQL_DATABASE=db
POSTGRESQL_PASSWORD=password
Read the variables from the file:
$ oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env
Additionally, environment variables can be given on standard input by using
--env-file=-
:
$ cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-
Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument.
3.3.4.2. Specifying build environment variables
When generating applications from a template, source, or an image, you can use the
--build-env
argument to pass environment variables to the build container at run time:
$ oc new-app openshift/ruby-23-centos7 \
--build-env HTTP_PROXY=http://myproxy.net:1337/ \
--build-env GEM_HOME=~/.gem
The variables can also be read from a file using the
--build-env-file
argument. The following is an example file called
ruby.env
:
HTTP_PROXY=http://myproxy.net:1337/
GEM_HOME=~/.gem
Read the variables from the file:
$ oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env
Additionally, environment variables can be given on standard input by using
--build-env-file=-
:
$ cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-
3.3.4.3. Specifying labels
When generating applications from source, images, or templates, you can use the
-l|--label
argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application.
$ oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world
3.3.4.4. Viewing the output without creation
To see a dry run of the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects.
To output
new-app
artifacts to a file, run the following:
$ oc new-app https://github.com/openshift/ruby-hello-world \
-o yaml > myapp.yaml
Edit the file:
$ vi myapp.yaml
Create a new application by referencing the file:
$ oc create -f myapp.yaml
3.3.4.5. Creating objects with different names
Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command:
$ oc new-app https://github.com/openshift/ruby-hello-world --name=myapp
3.3.4.6. Creating objects in a different project
Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument:
$ oc new-app https://github.com/openshift/ruby-hello-world -n myproject
3.3.4.7. Creating multiple objects
The new-app command allows you to create multiple applications by specifying multiple parameters to new-app. Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images.
To create an application from a source repository and a Docker Hub image:
$ oc new-app https://github.com/openshift/ruby-hello-world mysql
If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator.
3.3.4.8. Grouping images and source in a single pod
The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. To group the image built from a source repository with other images, specify its builder image in the group:
$ oc new-app ruby+mysql
To deploy an image built from source and an external image together:
$ oc new-app \
ruby~https://github.com/openshift/ruby-hello-world \
mysql \
--group=ruby+mysql
Chapter 4. Viewing application composition by using the Topology view
The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them.
4.2. Viewing the topology of your application
You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you deploy an application, you are directed automatically to the Graph view, where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application.
The Topology view provides you the option to monitor your applications using the List view. Use the List view icon to see a list of all your applications and use the Graph view icon to switch back to the graph view.
You can customize the views as required using the following:
Use the Find by name field to find the required components. Search results may appear outside of the visible area; click Fit to Screen from the lower-left toolbar to resize the Topology view to show all components.
Use the Display Options drop-down list to configure the Topology view of the various application groupings. The options are available depending on the types of components deployed in the project:
Expand group:
Virtual Machines: Toggle to show or hide the virtual machines.
Application Groupings: Clear to condense the application groups into cards with an overview of an application group and alerts associated with it.
Helm Releases: Clear to condense the components deployed as a Helm Release into cards with an overview of a given release.
Knative Services: Clear to condense the Knative Service components into cards with an overview of a given component.
Operator Groupings: Clear to condense the components deployed with an Operator into cards with an overview of the given group.
Show elements based on Pod Count or Labels:
Pod Count: Select to show the number of pods of a component in the component icon.
Labels: Toggle to show or hide the component labels.
The Topology view also provides you the Export application option to download your application in the ZIP file format. You can then import the downloaded application to another project or cluster. For more details, see Exporting an application to another project or cluster in the Additional resources section.
4.3. Interacting with applications and components
In the Topology view in the Developer perspective of the web console, the Graph view provides the following options to interact with applications and components:
Click Open URL to see your application exposed by the route on a public URL.
Click Edit Source code to access your source code and modify it.
This feature is available only when you create applications using the From Git, From Catalog, and From Dockerfile options.
Hover your cursor over the lower left icon on the pod to see the name of the latest build and its status. The status of the application build is indicated as New, Pending, Running, Completed, Failed, and Canceled.
The status or phase of the pod is indicated by different colors and tooltips as:
Running: The pod is bound to a node and all of the containers are created. At least one container is still running or is in the process of starting or restarting.
Not Ready: The pod is running multiple containers, but not all of the containers are ready.
Warning: Containers in the pod are being terminated; however, termination did not succeed. Some containers may be in other states.
Failed: All containers in the pod terminated, but at least one container has terminated in failure. That is, the container either exited with a non-zero status or was terminated by the system.
Pending: The pod is accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network.
Succeeded: All containers in the pod terminated successfully and will not be restarted.
Terminating: When a pod is being deleted, it is shown as Terminating by some kubectl commands. The Terminating status is not one of the pod phases. A pod is granted a graceful termination period, which defaults to 30 seconds.
Unknown: The state of the pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the pod should be running.
After you create an application and an image is deployed, the status is shown as Pending. After the application is built, it is displayed as Running.
The application resource name is appended with indicators for the different types of resource objects as follows:
CJ: CronJob
D: Deployment
DC: DeploymentConfig
DS: DaemonSet
J: Job
P: Pod
SS: StatefulSet
(Knative): A serverless application
Serverless applications take some time to load and display on the Graph view. When you deploy a serverless application, it first creates a service resource and then a revision. After that, it is deployed and displayed on the Graph view. If it is the only workload, you might be redirected to the Add page. After the revision is deployed, the serverless application is displayed on the Graph view.
4.4. Scaling application pods and checking builds and routes
The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Details tabs to scale the application pods, and to check the build status, services, and routes as follows:
Click on the component node to see the Overview panel to the right. Use the Details tab to:
Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic.
Check the Labels, Annotations, and Status of the application.
Click the Resources tab to:
See the list of all the pods, view their status, access logs, and click on the pod to see the pod details.
See the builds, their status, access logs, and start a new build if needed.
See the services and routes used by the component.
For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component.
4.5. Adding components to an existing project
You can add components to a project.
Procedure
- Navigate to the +Add view.
- Click Add to Project next to the left navigation pane or press Ctrl+Space.
- Search for the component and click the Start/Create/Install button, or press Enter, to add the component to the project and see it in the topology Graph view.
Alternatively, you can also use the available options in the context menu, such as Import from Git, Container Image, Database, From Catalog, Operator Backed, Helm Charts, Samples, or Upload JAR file, by right-clicking in the topology Graph view to add a component to your project.
4.6. Grouping multiple components within an application
You can use the +Add view to add multiple components or services to your project and use the topology Graph view to group applications and resources within an application group.
Prerequisites
- You have created and deployed a minimum of two or more components on OpenShift Container Platform using the Developer perspective.
Alternatively, you can also add the component to an application as follows:
Click the service pod to see the Overview panel to the right.
Click the Actions drop-down menu and select Edit Application Grouping.
In the Edit Application Grouping dialog box, click the Application drop-down list, and select an appropriate application group.
Click Save to add the service to the application group.
You can remove a component from an application group by selecting the component and using Shift + drag to drag it out of the application group.
4.7. Adding services to your application
To add a service to your application, use the +Add actions using the context menu in the topology Graph view.
In addition to the context menu, you can add services by using the sidebar or hovering and dragging the dangling arrow from the application group.
Procedure
- Right-click an application group in the topology Graph view to display the context menu.
- Use Add to Application to select a method for adding a service to the application group, such as From Git, Container Image, From Dockerfile, From Devfile, Upload JAR file, Event Source, Channel, or Broker.
- Complete the form for the method you choose and click Create. For example, to add a service based on the source code in your Git repository, choose the From Git method, fill in the Import from Git form, and click Create.
4.8. Removing services from your application
In the topology Graph view, you can remove a service from your application by using the context menu.
Procedure
- Right-click on a service in an application group in the topology Graph view to display the context menu.
- Select Delete Deployment to delete the service.
4.9. Labels and annotations used for the Topology view
The Topology view uses the following labels and annotations:
Icon displayed in the node
Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label. This matching is done using a predefined set of icons.
Link to the source code editor or the source
The app.openshift.io/vcs-uri annotation is used to create links to the source code editor.
Node Connector
The app.openshift.io/connects-to annotation is used to connect the nodes.
App grouping
The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components.
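For example, a deployment that the Topology view groups under myapp, shows with a Node.js runtime icon, and links back to its Git repository might carry metadata similar to the following sketch (the values are illustrative):
metadata:
  labels:
    app.kubernetes.io/part-of: myapp
    app.openshift.io/runtime: nodejs
  annotations:
    app.openshift.io/vcs-uri: https://github.com/sclorg/nodejs-ex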
For detailed information on the labels and annotations OpenShift Container Platform applications must use, see Guidelines for labels and annotations for OpenShift applications.
4.10. Additional resources
Chapter 5. Exporting applications
As a developer, you can export your application in the ZIP file format. Based on your needs, import the exported application to another project in the same cluster or a different cluster by using the Import YAML option in the +Add view. Exporting your application helps you to reuse your application resources and saves your time.
Procedure
- In the Developer perspective, perform one of the following steps:
Navigate to the +Add view and click Export application in the Application portability tile.
Navigate to the Topology view and click Export application.
- Click OK in the Export Application dialog box. A notification opens to confirm that the export of resources from your project has started.
- Optional steps that you might need to perform in the following scenarios:
If you have started exporting an incorrect application, click Export application → Cancel Export.
If your export is already in progress and you want to start a fresh export, click Export application → Restart Export.
If you want to view logs associated with exporting an application, click Export application and then the View Logs link.
- After a successful export, click Download in the dialog box to download the application resources in ZIP format onto your machine.
Chapter 6. Connecting applications to services
6.1. Release notes for Service Binding Operator
The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the ServiceBinding resource.
With Service Binding Operator, you can:
Bind your workloads together with Operator-managed backing services.
Automate configuration of binding data.
Provide service operators a low-touch administrative experience to provision and manage access to services.
Enrich the development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments.
The custom resource definition (CRD) of the Service Binding Operator supports the following APIs:
Service Binding with the binding.operators.coreos.com API group.
Service Binding (Spec API) with the servicebinding.io API group.
Some features in the following table are in Technology Preview. These experimental features are not intended for production use.
In the table, features are marked with the following statuses:
TP: Technology Preview
GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Table 6.1. Support matrix
Service Binding Operator Version | binding.operators.coreos.com (Support Status) | servicebinding.io (Support Status) | OpenShift Versions
1.3.3 | GA | GA | 4.9-4.12
1.3.1 | GA | GA | 4.9-4.11
1.3 | GA | GA | 4.9-4.11
1.2 | GA | GA | 4.7-4.11
1.1.1 | GA | TP | 4.7-4.10
1.1 | GA | TP | 4.7-4.10
1.0.1 | GA | TP | 4.7-4.9
1.0 | GA | TP | 4.7-4.9
6.1.2. Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see
Red Hat CTO Chris Wright’s message
.
6.1.3. Release notes for Service Binding Operator 1.3.3
Service Binding Operator 1.3.3 is now available on OpenShift Container Platform 4.9, 4.10, 4.11, and 4.12.
- Before this update, a security vulnerability CVE-2022-41717 was noted for Service Binding Operator. This update fixes the CVE-2022-41717 error and updates the golang.org/x/net package from v0.0.0-20220906165146-f3363e06e74c to v0.4.0. APPSVC-1256
- Before this update, Provisioned Services were only detected if the respective resource had the "servicebinding.io/provisioned-service: true" annotation set, while other Provisioned Services were missed. With this update, the detection mechanism identifies all Provisioned Services correctly based on the "status.binding.name" attribute. APPSVC-1204
6.1.4. Release notes for Service Binding Operator 1.3.1
Service Binding Operator 1.3.1 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11.
-
Before this update, a security vulnerability
CVE-2022-32149
was noted for Service Binding Operator. This update fixes the
CVE-2022-32149
error and updates the
golang.org/x/text
package from v0.3.7 to v0.3.8.
APPSVC-1220
6.1.5. Release notes for Service Binding Operator 1.3
Service Binding Operator 1.3 is now available on OpenShift Container Platform 4.9, 4.10, and 4.11.
6.1.5.1. Removed functionality
-
In Service Binding Operator 1.3, the Operator Lifecycle Manager (OLM) descriptor feature has been removed to improve resource utilization. As an alternative to OLM descriptors, you can use CRD annotations to declare binding data.
6.1.6. Release notes for Service Binding Operator 1.2
Service Binding Operator 1.2 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, 4.10, and 4.11.
This section highlights what is new in Service Binding Operator 1.2:
Enable Service Binding Operator to consider optional fields in the annotations by setting the
optional
flag value to
true
.
Support for
servicebinding.io/v1beta1
resources.
Improvements to the discoverability of bindable services by exposing the relevant binding secret without requiring a workload to be present.
-
Currently, when you install Service Binding Operator on OpenShift Container Platform 4.11, the memory footprint of Service Binding Operator increases beyond expected limits. With low usage, however, the memory footprint stays within the expected ranges of your environment or scenarios. In comparison with OpenShift Container Platform 4.10, under stress, both the average and maximum memory footprint increase considerably. This issue is evident in the previous versions of Service Binding Operator as well. There is currently no workaround for this issue.
APPSVC-1200
By default, the projected files get their permissions set to 0644. Service Binding Operator cannot set specific permissions due to a bug in Kubernetes that causes issues if the service expects specific permissions such as
0600
. As a workaround, you can modify the code of the program or the application that is running inside a workload resource to copy the file to the
/tmp
directory and set the appropriate permissions.
APPSVC-1127
There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example:
6.1.7. Release notes for Service Binding Operator 1.1.1
Service Binding Operator 1.1.1 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10.
-
Before this update, a security vulnerability
CVE-2021-38561
was noted for Service Binding Operator Helm chart. This update fixes the
CVE-2021-38561
error and updates the
golang.org/x/text
package from v0.3.6 to v0.3.7.
APPSVC-1124
Before this update, users of the Developer Sandbox did not have sufficient permissions to read
ClusterWorkloadResourceMapping
resources. As a result, Service Binding Operator prevented all service bindings from being successful. With this update, the Service Binding Operator now includes the appropriate role-based access control (RBAC) rules for any authenticated subject including the Developer Sandbox users. These RBAC rules allow the Service Binding Operator to
get
,
list
, and
watch
the
ClusterWorkloadResourceMapping
resources for the Developer Sandbox users and to process service bindings successfully.
APPSVC-1135
-
There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example:
6.1.8. Release notes for Service Binding Operator 1.1
Service Binding Operator 1.1 is now available on OpenShift Container Platform 4.7, 4.8, 4.9, and 4.10.
This section highlights what is new in Service Binding Operator 1.1:
Service Binding Options
Workload resource mapping: Define exactly where binding data needs to be projected for the secondary workloads.
Bind new workloads using a label selector.
-
Before this update, service bindings that used label selectors to pick up workloads did not project service binding data into the new workloads that matched the given label selectors. As a result, the Service Binding Operator could not periodically bind such new workloads. With this update, service bindings now project service binding data into the new workloads that match the given label selector. The Service Binding Operator now periodically attempts to find and bind such new workloads.
APPSVC-1083
-
There is currently a known issue with installing Service Binding Operator in a single namespace installation mode. The absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. When this happens, it generates an error message similar to the following example:
6.1.9. Release notes for Service Binding Operator 1.0.1
Service Binding Operator 1.0.1 is now available on OpenShift Container Platform 4.7, 4.8, and 4.9.
Service Binding Operator 1.0.1 supports OpenShift Container Platform 4.9 and later running on:
IBM Power Systems
IBM Z and LinuxONE
The custom resource definition (CRD) of the Service Binding Operator 1.0.1 supports the following APIs:
Service Binding
with the
binding.operators.coreos.com
API group.
Service Binding (Spec API Tech Preview)
with the
servicebinding.io
API group.
Service Binding (Spec API Tech Preview)
with the
servicebinding.io
API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see
Technology Preview Features Support Scope
.
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
Technology Preview Features Support Scope
In the table below, features are marked with the following statuses:
TP: Technology Preview
GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Table 6.2. Support matrix
Feature | Service Binding Operator 1.0.1
binding.operators.coreos.com API group | GA
servicebinding.io API group | TP
-
Before this update, binding the data values from a
Cluster
custom resource (CR) of the
postgresql.k8s.enterprisedb.io/v1
API collected the
host
binding value from the
.metadata.name
field of the CR. The collected binding value was an incorrect hostname; the correct hostname is available in the
.status.writeService
field. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to collect the
host
binding value from the
.status.writeService
field. The Service Binding Operator uses these modified annotations to project the correct hostname in the
host
and
provider
bindings.
APPSVC-1040
Before this update, when you bound a
PostgresCluster
CR of the
postgres-operator.crunchydata.com/v1beta1
API, the binding data values did not include the values for the database certificates. As a result, the application failed to connect to the database. With this update, modifications to the annotations that the Service Binding Operator uses to expose the binding data from the backing service CR now include the database certificates. The Service Binding Operator uses these modified annotations to project the correct
ca.crt
,
tls.crt
, and
tls.key
certificate files.
APPSVC-1045
Before this update, when you bound a
PerconaXtraDBCluster
custom resource (CR) of the
pxc.percona.com
API, the binding data values did not include the
port
and
database
values. These binding values along with the others already projected are necessary for an application to successfully connect to the database service. With this update, the annotations that the Service Binding Operator uses to expose the binding data values from the backing service CR are now modified to project the additional
port
and
database
binding values. The Service Binding Operator uses these modified annotations to project the complete set of binding values that the application can use to successfully connect to the database service.
APPSVC-1073
-
Currently, when you install the Service Binding Operator in the single namespace installation mode, the absence of an appropriate namespace-scoped role-based access control (RBAC) rule prevents the successful binding of an application to a few known Operator-backed services that the Service Binding Operator can automatically detect and bind to. In addition, the following error message is generated:
6.1.10. Release notes for Service Binding Operator 1.0
Service Binding Operator 1.0 is now available on OpenShift Container Platform 4.7, 4.8, and 4.9.
The custom resource definition (CRD) of the Service Binding Operator 1.0 supports the following APIs:
Service Binding
with the
binding.operators.coreos.com
API group.
Service Binding (Spec API Tech Preview)
with the
servicebinding.io
API group.
Service Binding (Spec API Tech Preview)
with the
servicebinding.io
API group is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see
Technology Preview Features Support Scope
.
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use.
Technology Preview Features Support Scope
In the table below, features are marked with the following statuses:
TP: Technology Preview
GA: General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Table 6.3. Support matrix
Feature | Service Binding Operator 1.0
binding.operators.coreos.com API group | GA
servicebinding.io API group | TP
Service Binding Operator 1.0 supports OpenShift Container Platform 4.9 and later running on:
IBM Power Systems
IBM Z and LinuxONE
This section highlights what is new in Service Binding Operator 1.0:
Exposal of binding data from services
Based on annotations present in CRD, custom resources (CRs), or resources.
Based on descriptors present in Operator Lifecycle Manager (OLM) descriptors.
Support for provisioned services
Workload projection
Projection of binding data as files, with volume mounts.
Projection of binding data as environment variables.
Service Binding Options
Bind backing services in a namespace that is different from the workload namespace.
Project binding data into the specific container workloads.
Auto-detection of the binding data from resources owned by the backing service CR.
Compose custom binding data from the exposed binding data.
Support for non-
PodSpec
compliant workload resources.
Security
Support for role-based access control (RBAC).
6.1.11. Additional resources
6.2. Understanding Service Binding Operator
Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider suggests a different way to access their secrets and consume them in a workload. In addition, manual configuration and maintenance of this binding together of workloads and backing services make the process tedious, inefficient, and error-prone.
The Service Binding Operator enables application developers to easily bind workloads together with Operator-managed backing services, without any manual procedures to configure the binding connection.
6.2.1. Service Binding terminology
This section summarizes the basic terms used in Service Binding.
Service binding
The representation of the action of providing information about a service to a workload. Examples include establishing the exchange of credentials between a Java application and a database that it requires.
Backing service
Any service or software that the application consumes over the network as part of its normal operation. Examples include a database, a message broker, an application with REST endpoints, an event stream, an Application Performance Monitor (APM), or a Hardware Security Module (HSM).
Workload (application)
Any process running within a container. Examples include a Spring Boot application, a NodeJS Express application, or a Ruby on Rails application.
Binding data
Information about a service that you use to configure the behavior of other resources within the cluster. Examples include credentials, connection details, volume mounts, or secrets.
Binding connection
Any connection that establishes an interaction between the connected components, such as a bindable backing service and an application requiring that backing service.
6.2.2. About Service Binding Operator
The Service Binding Operator consists of a controller and an accompanying custom resource definition (CRD) for service binding. It manages the data plane for workloads and backing services. The Service Binding Controller reads the data made available by the control plane of backing services. Then, it projects this data to workloads according to the rules specified through the
ServiceBinding
resource.
As a result, the Service Binding Operator enables workloads to use backing services or external services by automatically collecting and sharing binding data with the workloads. The process involves making the backing service bindable and binding the workload and the service together.
6.2.2.1. Making an Operator-managed backing service bindable
To make a service bindable, as an Operator provider, you need to expose the binding data required by workloads to bind with the services provided by the Operator. You can provide the binding data either as annotations or as descriptors in the CRD of the Operator that manages the backing service.
6.2.2.2. Binding a workload together with a backing service
By using the Service Binding Operator, as an application developer, you need to declare the intent of establishing a binding connection. You must create a
ServiceBinding
CR that references the backing service. This action triggers the Service Binding Operator to project the exposed binding data into the workload. The Service Binding Operator receives the declared intent and binds the workload together with the backing service.
The CRD of the Service Binding Operator supports the following APIs:
Service Binding
with the
binding.operators.coreos.com
API group.
Service Binding (Spec API)
with the
servicebinding.io
API group.
With Service Binding Operator, you can:
Bind your workloads to Operator-managed backing services.
Automate configuration of binding data.
Provide service operators with a low-touch administrative experience to provision and manage access to services.
Enrich the development lifecycle with a consistent and declarative service binding method that eliminates discrepancies in cluster environments.
6.2.3. Key features
Exposal of binding data from services
Based on annotations present in CRD, custom resources (CRs), or resources.
Workload projection
Projection of binding data as files, with volume mounts.
Projection of binding data as environment variables.
Service Binding Options
Bind backing services in a namespace that is different from the workload namespace.
Project binding data into the specific container workloads.
Auto-detection of the binding data from resources owned by the backing service CR.
Compose custom binding data from the exposed binding data.
Support for non-
PodSpec
compliant workload resources.
Security
Support for role-based access control (RBAC).
6.2.4. API differences
The CRD of the Service Binding Operator supports the following APIs:
Service Binding
with the
binding.operators.coreos.com
API group.
Service Binding (Spec API)
with the
servicebinding.io
API group.
Both of these API groups have similar features, but they are not completely identical. Here is the complete list of differences between these API groups:
Feature | binding.operators.coreos.com API group | servicebinding.io API group | Notes
Binding to provisioned services | Yes | Yes | Not applicable (N/A)
Direct secret projection | Yes | Yes | Not applicable (N/A)
Bind as files | Yes | Yes | Default behavior for the service bindings of the servicebinding.io API group; opt-in functionality for the service bindings of the binding.operators.coreos.com API group.
Bind as environment variables | Yes | Yes | Default behavior for the service bindings of the binding.operators.coreos.com API group; opt-in functionality for the service bindings of the servicebinding.io API group, where environment variables are created alongside files.
Selecting workload with a label selector | Yes | Yes | Not applicable (N/A)
Detecting binding resources (.spec.detectBindingResources) | Yes | No | The servicebinding.io API group has no equivalent feature.
Naming strategies | Yes | No | There is no current mechanism within the servicebinding.io API group to interpret the templates that naming strategies use.
Container path | Yes | Partial | Because a service binding of the binding.operators.coreos.com API group can specify mapping behavior within the ServiceBinding resource, the servicebinding.io API group cannot fully support an equivalent behavior without more information about the workload.
Container name filtering | No | Yes | The binding.operators.coreos.com API group has no equivalent feature.
Secret path | Yes | No | The servicebinding.io API group has no equivalent feature.
Alternative binding sources (for example, binding data from annotations) | Yes | Allowed by Service Binding Operator | The specification requires support for getting binding data from provisioned services and secrets. However, a strict reading of the specification suggests that support for other binding data sources is allowed. Using this fact, Service Binding Operator can pull the binding data from various sources (for example, pulling binding data from annotations). Service Binding Operator supports these sources on both the API groups.
6.2.5. Additional resources
6.3. Installing Service Binding Operator
This guide walks cluster administrators through the process of installing the Service Binding Operator to an OpenShift Container Platform cluster.
You can install Service Binding Operator on OpenShift Container Platform 4.7 and later.
Prerequisites
-
You have access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions.
Your cluster has the
Marketplace capability
enabled or the Red Hat Operator catalog source configured manually.
6.3.1. Installing the Service Binding Operator using the web console
You can install Service Binding Operator using the OpenShift Container Platform OperatorHub. When you install the Service Binding Operator, the custom resources (CRs) required for the service binding configuration are automatically installed along with the Operator.
Procedure
-
In the
Administrator
perspective of the web console, navigate to
Operators
→
OperatorHub
.
Use the
Filter by keyword
box to search for
Service Binding Operator
in the catalog. Click the
Service Binding Operator
tile.
Read the brief description about the Operator on the
Service Binding Operator
page. Click
Install
.
On the
Install Operator
page:
Select
All namespaces on the cluster (default)
for the
Installation Mode
. This mode installs the Operator in the default
openshift-operators
namespace, which enables the Operator to watch and be made available to all namespaces in the cluster.
Select
Automatic
for the
Approval Strategy
. This ensures that the future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the
Manual
approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version.
Select an
Update Channel
.
By default, the
stable
channel enables installation of the latest stable and supported release of the Service Binding Operator.
Click
Install
.
The Operator is installed automatically into the
openshift-operators
namespace.
On the
Installed Operator — ready for use
pane, click
View Operator
. You will see the Operator listed on the
Installed Operators
page.
Verify that the
Status
is set to
Succeeded
to confirm successful installation of Service Binding Operator.
6.3.2. Additional resources
6.4. Getting started with service binding
The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service.
Prerequisites
-
You have access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions.
You have installed the
oc
CLI.
You have installed Service Binding Operator from OperatorHub.
You have installed the 5.1.2 version of the Crunchy Postgres for Kubernetes Operator from OperatorHub using the
v5
Update channel. The installed Operator is available in an appropriate namespace, such as the
my-petclinic
namespace.
You can create the namespace using the
oc create namespace my-petclinic
command.
6.4.1. Creating a PostgreSQL database instance
To create a PostgreSQL database instance, you must create a
PostgresCluster
custom resource (CR) and configure the database.
Procedure
-
Create the
PostgresCluster
CR in the
my-petclinic
namespace by running the following command in shell:
$ oc apply -n my-petclinic -f - << EOD
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0
  postgresVersion: 14
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi
EOD
The annotations added in this PostgresCluster CR enable the service binding connection and trigger the Operator reconciliation.
The output verifies that the database instance is created:
6.4.2. Deploying the Spring PetClinic sample application
To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application.
Procedure
-
Deploy the
spring-petclinic
application with the
PostgresCluster
custom resource (CR) by running the following command in shell:
$ oc apply -n my-petclinic -f - << EOD
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-petclinic
  labels:
    app: spring-petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-petclinic
  template:
    metadata:
      labels:
        app: spring-petclinic
    spec:
      containers:
        - name: app
          image: quay.io/service-binding/spring-petclinic:latest
          imagePullPolicy: Always
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: postgres
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-petclinic
  name: spring-petclinic
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: spring-petclinic
EOD
The output verifies that the Spring PetClinic sample application is created and deployed:
6.4.3. Connecting the Spring PetClinic sample application to the PostgreSQL database service
To connect the sample application to the database service, you must create a
ServiceBinding
custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application.
Procedure
-
Create a
ServiceBinding
CR to project the binding data:
$ oc apply -n my-petclinic -f - << EOD
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
name: spring-petclinic-pgcluster
spec:
services: 1
- group: postgres-operator.crunchydata.com
version: v1beta1
kind: PostgresCluster 2
name: hippo
application: 3
name: spring-petclinic
group: apps
version: v1
resource: deployments
EOD
1 Specifies a list of service resources.
2 The CR of the database.
3 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec.
The output verifies that the
ServiceBinding
CR is created to project the binding data into the sample application.
6.4.4. Additional resources
6.5. Getting started with service binding on IBM Power, IBM Z, and IBM(R) LinuxONE
The Service Binding Operator manages the data plane for workloads and backing services. This guide provides instructions with examples to help you create a database instance, deploy an application, and use the Service Binding Operator to create a binding connection between the application and the database service.
Prerequisites
-
You have access to an OpenShift Container Platform cluster using an account with
cluster-admin
permissions.
You have installed the
oc
CLI.
You have installed the Service Binding Operator from OperatorHub.
6.5.1. Deploying a PostgreSQL Operator
Procedure
-
To deploy the Dev4Devs PostgreSQL Operator in the
my-petclinic
namespace run the following command in shell:
$ oc apply -f - << EOD
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-petclinic
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: postgres-operator-group
  namespace: my-petclinic
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-multiarch-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/ibm/operator-registry-<architecture> 1
  imagePullPolicy: IfNotPresent
  displayName: ibm-multiarch-catalog
  updateStrategy:
    registryPoll:
      interval: 30m
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: postgresql-operator-dev4devs-com
  namespace: openshift-operators
spec:
  channel: alpha
  installPlanApproval: Automatic
  name: postgresql-operator-dev4devs-com
  source: ibm-multiarch-catalog
  sourceNamespace: openshift-marketplace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: database-view
  labels:
    servicebinding.io/controller: "true"
rules:
  - apiGroups:
      - postgresql.dev4devs.com
    resources:
      - databases
    verbs:
      - get
      - list
EOD
1 The Operator image.
For IBM Power: quay.io/ibm/operator-registry-ppc64le:release-4.9
For IBM Z and IBM® LinuxONE: quay.io/ibm/operator-registry-s390x:release-4.8
Verification
-
After the operator is installed, list the operator subscriptions in the
openshift-operators
namespace:
$ oc get subs -n openshift-operators
6.5.2. Creating a PostgreSQL database instance
To create a PostgreSQL database instance, you must create a
Database
custom resource (CR) and configure the database.
Procedure
-
Create the
Database
CR in the
my-petclinic
namespace by running the following command in shell:
$ oc apply -f - << EOD
apiVersion: postgresql.dev4devs.com/v1alpha1
kind: Database
metadata:
  name: sampledatabase
  namespace: my-petclinic
  annotations:
    host: sampledatabase
    type: postgresql
    port: "5432"
    service.binding/database: 'path={.spec.databaseName}'
    service.binding/port: 'path={.metadata.annotations.port}'
    service.binding/password: 'path={.spec.databasePassword}'
    service.binding/username: 'path={.spec.databaseUser}'
    service.binding/type: 'path={.metadata.annotations.type}'
    service.binding/host: 'path={.metadata.annotations.host}'
spec:
  databaseCpu: 30m
  databaseCpuLimit: 60m
  databaseMemoryLimit: 512Mi
  databaseMemoryRequest: 128Mi
  databaseName: "sampledb"
  databaseNameKeyEnvVar: POSTGRESQL_DATABASE
  databasePassword: "samplepwd"
  databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD
  databaseStorageRequest: 1Gi
  databaseUser: "sampleuser"
  databaseUserKeyEnvVar: POSTGRESQL_USER
  image: registry.redhat.io/rhel8/postgresql-13:latest
  databaseStorageClassName: nfs-storage-provisioner
  size: 1
EOD
The annotations added in this Database CR enable the service binding connection and trigger the Operator reconciliation.
The output verifies that the database instance is created:
6.5.3. Deploying the Spring PetClinic sample application
To deploy the Spring PetClinic sample application on an OpenShift Container Platform cluster, you must use a deployment configuration and configure your local environment to be able to test the application.
Procedure
-
Deploy the
spring-petclinic
application with the
PostgresCluster
custom resource (CR) by running the following command in shell:
$ oc apply -n my-petclinic -f - << EOD
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-petclinic
  labels:
    app: spring-petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-petclinic
  template:
    metadata:
      labels:
        app: spring-petclinic
    spec:
      containers:
        - name: app
          image: quay.io/service-binding/spring-petclinic:latest
          imagePullPolicy: Always
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: postgres
            - name: org.springframework.cloud.bindings.boot.enable
              value: "true"
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spring-petclinic
  name: spring-petclinic
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: spring-petclinic
EOD
The output verifies that the Spring PetClinic sample application is created and deployed:
6.5.4. Connecting the Spring PetClinic sample application to the PostgreSQL database service
To connect the sample application to the database service, you must create a
ServiceBinding
custom resource (CR) that triggers the Service Binding Operator to project the binding data into the application.
Procedure
-
Create a
ServiceBinding
CR to project the binding data:
$ oc apply -n my-petclinic -f - << EOD
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
name: spring-petclinic-pgcluster
spec:
services: 1
- group: postgresql.dev4devs.com
kind: Database 2
name: sampledatabase
version: v1alpha1
application: 3
name: spring-petclinic
group: apps
version: v1
resource: deployments
EOD
1 Specifies a list of service resources.
2 The CR of the database.
3 The sample application that points to a Deployment or any other similar resource with an embedded PodSpec.
The output verifies that the
ServiceBinding
CR is created to project the binding data into the sample application.
-
Set up the port forwarding from the application port to access the sample application from your local environment:
$ oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic
6.5.5. Additional resources
6.6. Exposing binding data from a service
Application developers need access to backing services to build and connect workloads. Connecting workloads to backing services is always a challenge because each service provider requires a different way to access their secrets and consume them in a workload.
The Service Binding Operator enables application developers to easily bind workloads together with operator-managed backing services, without any manual procedures to configure the binding connection. For the Service Binding Operator to provide the binding data, as an Operator provider or user who creates backing services, you must expose the binding data to be automatically detected by the Service Binding Operator. Then, the Service Binding Operator automatically collects the binding data from the backing service and shares it with a workload to provide a consistent and predictable experience.
6.6.1. Methods of exposing binding data
This section describes the methods you can use to expose the binding data.
Ensure that you know and understand your workload requirements and environment, and how it works with the provided services.
Binding data is exposed under the following circumstances:
Backing service is available as a provisioned service resource.
The service you intend to connect to is compliant with the Service Binding specification. You must create a
Secret
resource with all the required binding data values and reference it in the backing service custom resource (CR). The detection of all the binding data values is automatic.
Backing service is not available as a provisioned service resource.
You must expose the binding data from the backing service. Depending on your workload requirements and environment, you can choose any of the following methods to expose the binding data:
Direct secret reference
Declaring binding data through custom resource definition (CRD) or CR annotations
Detection of binding data through owned resources
6.6.1.1. Provisioned service
Provisioned service represents a backing service CR with a reference to a
Secret
resource placed in the
.status.binding.name
field of the backing service CR.
As an Operator provider or the user who creates backing services, you can use this method to be compliant with the Service Binding specification, by creating a
Secret
resource and referencing it in the
.status.binding.name
section of the backing service CR. This
Secret
resource must provide all the binding data values required for a workload to connect to the backing service.
The following examples show an
AccountService
CR that represents a backing service and a
Secret
resource referenced from the CR.
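The original examples are not reproduced here. As an illustrative sketch, a provisioned service and its referenced secret might look similar to the following; the AccountService kind comes from the description above, while the example.com API group, resource names, and secret keys are assumptions:
apiVersion: example.com/v1alpha1
kind: AccountService
metadata:
  name: prod-account-service
spec:
  # ...
status:
  binding:
    name: production-db-secret   # name of the Secret that holds the binding data
---
apiVersion: v1
kind: Secret
metadata:
  name: production-db-secret
stringData:
  # the keys shown here are examples; provide whatever the workload needs to connect
  host: database.example.com
  username: guest
  password: secret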
6.6.1.2. Direct secret reference
You can use this method, if all the required binding data values are available in a
Secret
resource that you can reference in your Service Binding definition. In this method, a
ServiceBinding
resource directly references a
Secret
resource to connect to a service. All the keys in the
Secret
resource are exposed as binding data.
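For example, a ServiceBinding resource that references a Secret directly might look similar to the following sketch; the secret name and the application details are illustrative assumptions:
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: account-service
spec:
  services:
    # a core v1 Secret referenced directly; all of its keys are exposed as binding data
    - group: ""
      version: v1
      kind: Secret
      name: production-db-secret
  application:
    name: online-banking
    group: apps
    version: v1
    resource: deployments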
6.6.1.3. Declaring binding data through CRD or CR annotations
You can use this method to annotate the resources of the backing service to expose the binding data with specific annotations. Adding annotations under the
metadata
section alters the CRs and CRDs of the backing services. Service Binding Operator detects the annotations added to the CRs and CRDs and then creates a
Secret
resource with the values extracted based on the annotations.
The following examples show the annotations that are added under the
metadata
section and a referenced
ConfigMap
object from a resource:
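The original examples are not reproduced here. As an illustrative sketch, the following annotations mirror the ones used on the Database CR in the getting started example earlier in this chapter; the field paths are assumptions about the backing service CR:
apiVersion: postgresql.dev4devs.com/v1alpha1
kind: Database
metadata:
  name: sampledatabase
  annotations:
    # each annotation maps a binding key to a JSONPath in this resource
    service.binding/username: 'path={.spec.databaseUser}'
    service.binding/password: 'path={.spec.databasePassword}'
    service.binding/database: 'path={.spec.databaseName}'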
6.6.1.4. Detection of binding data through owned resources
You can use this method if your backing service owns one or more Kubernetes resources such as route, service, config map, or secret that you can use to detect the binding data. In this method, the Service Binding Operator detects the binding data from resources owned by the backing service CR.
The following examples show the
detectBindingResources
API option set to
true
in the
ServiceBinding
CR:
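The original example is not shown here. A minimal sketch of such a ServiceBinding CR, reusing the PostgresCluster and spring-petclinic names from the earlier examples, might look as follows:
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-detect-all
  namespace: my-petclinic
spec:
  detectBindingResources: true   # also scan resources owned by the backing service CR
  services:
    - group: postgres-operator.crunchydata.com
      version: v1beta1
      kind: PostgresCluster
      name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments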
6.6.2. Data model
The data model used in the annotations follows specific conventions.
Service binding annotations must use the following convention:
service.binding(/<NAME>)?:
"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)"
where:
<NAME>
Specifies the name under which the binding value is to be exposed. You can exclude it only when the objectType parameter is set to Secret or ConfigMap.
<VALUE>
Specifies the constant value exposed when no path is set.
The data model provides the details on the allowed values and semantic for the
path
,
elementType
,
objectType
,
sourceKey
, and
sourceValue
parameters.
Table 6.4. Parameters and their descriptions
path
JSONPath template that consists of JSONPath expressions enclosed by curly braces {}.
elementType
Specifies whether the value of the element referenced in the path parameter complies with any one of the following types:
string
sliceOfStrings
sliceOfMaps
Default value: string
objectType
Specifies whether the value of the element indicated in the path parameter refers to a ConfigMap, Secret, or plain string in the current namespace.
Default value: Secret, if elementType is non-string.
sourceKey
Specifies the key in the ConfigMap or Secret resource to be added to the binding secret when collecting the binding data.
Note: When used in conjunction with elementType=sliceOfMaps, the sourceKey parameter specifies the key in the slice of maps whose value is used as a key in the binding secret.
Use this optional parameter to expose a specific entry in the referenced Secret or ConfigMap resource as binding data. When not specified, all keys and values from the Secret or ConfigMap resource are exposed and are added to the binding secret.
sourceValue
Specifies the key in the slice of maps.
Note: The value of this key is used as the base to generate the value of the entry for the key-value pair to be added to the binding secret. In addition, the value of the sourceKey is used as the key of the entry for the key-value pair to be added to the binding secret. It is mandatory only if elementType=sliceOfMaps.
The sourceKey and sourceValue parameters are applicable only if the element indicated in the path parameter refers to a ConfigMap or Secret resource.
6.6.3. Setting annotations mapping to be optional
You can have optional fields in the annotations. For example, a path to the credentials might not be present if the service endpoint does not require authentication. In such cases, a field might not exist in the target path of the annotations. As a result, Service Binding Operator generates an error by default.
As a service provider, to indicate whether you require annotations mapping, you can set a value for the
optional
flag in your annotations when enabling services. Service Binding Operator provides annotations mapping only if the target path is available. When the target path is not available, the Service Binding Operator skips the optional mapping and continues with the projection of the existing mappings without throwing any errors.
6.6.4. RBAC requirements
To expose the backing service binding data using the Service Binding Operator, you require certain role-based access control (RBAC) permissions. Specify certain verbs under the rules field of the ClusterRole resource to grant the RBAC permissions for the backing service resources. When you define these rules, you allow the Service Binding Operator to read the binding data of the backing service resources throughout the cluster. If users do not have permissions to read binding data or to modify the application resource, the Service Binding Operator prevents such users from binding services to applications. Adhering to the RBAC requirements avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications.
The Service Binding Operator performs requests against the Kubernetes API using a dedicated service account. By default, this account has permissions to bind services to workloads, both represented by the following standard Kubernetes or OpenShift objects:
Deployments
DaemonSets
ReplicaSets
StatefulSets
DeploymentConfigs
The Operator service account is bound to an aggregated cluster role, allowing Operator providers or cluster administrators to enable binding custom service resources to workloads. To grant the required permissions within a
ClusterRole
, label it with the
servicebinding.io/controller
flag and set the flag value to
true
. The following example shows how to allow the Service Binding Operator to
get
,
watch
, and
list
the custom resources (CRs) of Crunchy PostgreSQL Operator:
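The original example is not reproduced here. A sketch of such an aggregated ClusterRole, assuming the postgresclusters resource of the postgres-operator.crunchydata.com API group and an illustrative role name, might look similar to the following:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: postgrescluster-reader
  labels:
    servicebinding.io/controller: "true"   # aggregates these rules to the Operator service account
rules:
  - apiGroups:
      - postgres-operator.crunchydata.com
    resources:
      - postgresclusters
    verbs:
      - get
      - watch
      - list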
6.6.5. Categories of exposable binding data
The Service Binding Operator enables you to expose the binding data values from the backing service resources and custom resource definitions (CRDs).
This section provides examples to show how you can use the various categories of exposable binding data. You must modify these examples to suit your work environment and requirements.
6.6.5.1. Exposing a string from a resource
The following example shows how to expose the string from the
metadata.name
field of the
PostgresCluster
custom resource (CR) as a username:
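The original example is not shown here. A minimal sketch, using the hippo PostgresCluster from the earlier examples, might look as follows:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # exposes the CR name as the "username" binding item
    service.binding/username: path={.metadata.name}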
6.6.5.2. Exposing a constant value as the binding item
The following examples show how to expose a constant value from the
PostgresCluster
custom resource (CR):
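A sketch of such an annotation, with an assumed constant value, might look as follows:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # no path is set, so the literal value "postgresql" is exposed as the "type" binding item
    "service.binding/type": "postgresql"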
6.6.5.3. Exposing an entire config map or secret that is referenced from a resource
The following examples show how to expose an entire secret through annotations:
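The original example is not reproduced here. The following sketch assumes a hypothetical .status.data.dbCredentials field that holds the name of a Secret resource:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # objectType=Secret: the referenced Secret is resolved and all of its keys are exposed
    service.binding: 'path={.status.data.dbCredentials},objectType=Secret'
# ...
status:
  data:
    dbCredentials: hippo-pguser-hippo   # example secret name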
6.6.5.4. Exposing a specific entry from a config map or secret that is referenced from a resource
The following examples show how to expose a specific entry from a config map through annotations:
apiVersion: v1
kind: ConfigMap
metadata:
name: hippo-config
data:
db_timeout: "10s"
user: "hippo"
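An annotation that exposes only the db_timeout entry from this config map might look similar to the following sketch; the .status.data.dbConfiguration field holding the config map name is an assumption:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # sourceKey selects a single entry from the referenced ConfigMap
    service.binding/timeout: 'path={.status.data.dbConfiguration},objectType=ConfigMap,sourceKey=db_timeout'
# ...
status:
  data:
    dbConfiguration: hippo-config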
6.6.5.5. Exposing a resource definition value
The following example shows how to expose a resource definition value through annotations:
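The original example is not shown here. As a sketch, assuming a hypothetical .status.dbConnectionIP field on the backing service CR:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # exposes the value of the referenced field as the "host" binding item
    service.binding/host: path={.status.dbConnectionIP}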
6.6.5.6. Exposing entries of a collection with the key and value from each entry
The following example shows how to expose the entries of a collection with the key and value from each entry through annotations:
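A sketch of this pattern, assuming a hypothetical .status.connections list on the backing service CR:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # each map in the list contributes one binding entry, keyed by its "type" value
    service.binding/uri: 'path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url'
# ...
status:
  connections:
    - type: main
      url: primary.example.com
    - type: replica
      url: replica.example.com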
6.6.5.7. Exposing items of a collection with one key per item
The following example shows how to expose the items of a collection with one key per item through annotations:
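A sketch of this pattern, assuming a hypothetical .spec.tags list of strings:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # each item in the list becomes its own indexed binding entry
    service.binding/tags: 'path={.spec.tags},elementType=sliceOfStrings'
# ...
spec:
  tags:
    - postgresql
    - production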
6.6.5.8. Exposing values of collection entries with one key per entry value
The following example shows how to expose the values of collection entries with one key per entry value through annotations:
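A sketch of this pattern, assuming the same hypothetical .status.connections list, where only the url value of each entry is exposed:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
  annotations:
    # sourceValue extracts one value from each entry; the values are exposed as indexed keys
    service.binding/url: 'path={.status.connections},elementType=sliceOfStrings,sourceValue=url'
# ...
status:
  connections:
    - type: main
      url: primary.example.com
    - type: replica
      url: replica.example.com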
6.6.6. Additional resources
6.7. Projecting binding data
This section provides information on how you can consume the binding data.
6.7.1. Consumption of binding data
After the backing service exposes the binding data, for a workload to access and consume this data, you must project it into the workload from a backing service. Service Binding Operator automatically projects this set of data into the workload by using the following methods:
By default, as files.
As environment variables, after you set the .spec.bindAsFiles parameter of the ServiceBinding resource to false.
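For example, a ServiceBinding resource that projects the binding data as environment variables might set the parameter as in the following sketch; the service and application details are reused from the earlier examples:
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: spring-petclinic-pgcluster
  namespace: my-petclinic
spec:
  bindAsFiles: false   # project the binding data as environment variables instead of files
  services:
    - group: postgres-operator.crunchydata.com
      version: v1beta1
      kind: PostgresCluster
      name: hippo
  application:
    name: spring-petclinic
    group: apps
    version: v1
    resource: deployments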
6.7.2. Configuration of the directory path to project the binding data inside workload container
By default, Service Binding Operator mounts the binding data as files at a specific directory in your workload resource. You can configure the directory path using the
SERVICE_BINDING_ROOT
environment variable set in the container where your workload runs.
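For example, the following sketch sets SERVICE_BINDING_ROOT in a workload container; the /bindings path is an arbitrary example value:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-petclinic
  labels:
    app: spring-petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-petclinic
  template:
    metadata:
      labels:
        app: spring-petclinic
    spec:
      containers:
        - name: app
          image: quay.io/service-binding/spring-petclinic:latest
          env:
            - name: SERVICE_BINDING_ROOT
              value: /bindings   # binding data is projected under this directory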
6.8.5. Additional resources
6.9. Connecting an application to a service using the Developer perspective
Use the
Topology
view for the following purposes:
Grouping multiple components within an application.
Connecting components with each other.
Connecting multiple resources to services with labels.
You can either use a binding or a visual connector to connect components.
A binding connection between the components can be established only if the target node is an Operator-backed service. This is indicated by the
Create a binding connector
tool-tip, which appears when you drag an arrow to such a target node. When an application is connected to a service by using a binding connector, a
ServiceBinding
resource is created. Then, the Service Binding Operator controller projects the necessary binding data into the application deployment. After the request is successful, the application is redeployed, establishing an interaction between the connected components.
A visual connector establishes only a visual connection between the components, depicting an intent to connect. No interaction between the components is established. If the target node is not an Operator-backed service, the
Create a visual connector
tool-tip is displayed when you drag an arrow to a target node.
6.9.1. Discovering and identifying Operator-backed bindable services
As a user, if you want to create a bindable service, you must know which services are bindable. Bindable services are services that the applications can consume easily because they expose their binding data such as credentials, connection details, volume mounts, secrets, and other binding data in a standard way. The
Developer
perspective helps you discover and identify such bindable services.
Procedure
-
To discover and identify Operator-backed bindable services, consider the following alternative approaches:
Click
+Add
→
Developer Catalog
→
Operator Backed
to see the Operator-backed tiles. Operator-backed services that support service binding features have a
Bindable
badge on the tiles.
On the left pane of the
Operator Backed
page, select
Bindable
.
Click the help icon next to
Service binding
to see more information about bindable services.
Click
+Add
→
Add
and search for Operator-backed services. When you click the bindable service, you can view the
Bindable
badge in the side panel.
6.9.2. Creating a visual connection between components
You can depict an intent to connect application components by using the visual connector.
This procedure walks you through an example of creating a visual connection between a PostgreSQL Database service and a Spring PetClinic sample application.
Prerequisites
-
You have created and deployed a Spring PetClinic sample application by using the
Developer
perspective.
You have created and deployed a Crunchy PostgreSQL database instance by using the
Developer
perspective. This instance has the following components:
hippo-backup
,
hippo-instance
,
hippo-repo-host
, and
hippo-pgbouncer
.
Procedure
-
In the
Developer
perspective, switch to the relevant project, for example,
my-petclinic
.
Hover over the Spring PetClinic sample application to see a dangling arrow on the node.
-
Click and drag the arrow towards the
hippo-pgbouncer
deployment to connect the Spring PetClinic sample application with it.
Click the
spring-petclinic
deployment to see the
Overview
panel. Under the
Details
tab, click the edit icon in the
Annotations
section to see the
Key =
app.openshift.io/connects-to
and
Value =
[{"apiVersion":"apps/v1","kind":"Deployment","name":"hippo-pgbouncer"}]
annotation added to the deployment.
Optional: You can repeat these steps to establish visual connections between other applications and components you create.
6.9.3. Creating a binding connection between components
You can create a binding connection with Operator-backed components, as demonstrated in the following example, which uses a PostgreSQL Database service and a Spring PetClinic sample application. To create a binding connection with a service that the PostgreSQL Database Operator backs, you must first add the Red Hat-provided PostgreSQL Database Operator to the
OperatorHub
, and then install the Operator. The PostgreSQL Database Operator then creates and manages the Database resource, which exposes the binding data in secrets, config maps, status, and spec attributes.
Prerequisites
-
You created and deployed a Spring PetClinic sample application in the
Developer
perspective.
You installed Service Binding Operator from the
OperatorHub
.
You installed the
Crunchy Postgres for Kubernetes
Operator from the OperatorHub in the
v5
Update
channel.
You created a
PostgresCluster
resource in the
Developer
perspective, which resulted in a Crunchy PostgreSQL database instance with the following components:
hippo-backup
,
hippo-instance
,
hippo-repo-host
, and
hippo-pgbouncer
.
Procedure
-
In the
Developer
perspective, switch to the relevant project, for example,
my-petclinic
.
In the
Topology
view, hover over the Spring PetClinic sample application to see a dangling arrow on the node.
Drag and drop the arrow onto the
hippo
database icon in the Postgres Cluster to make a binding connection with the Spring PetClinic sample application.
In the
Create Service Binding
dialog, keep the default name or add a different name for the service binding, and then click
Create
.
-
Optional: If there is difficulty in making a binding connection using the Topology view, go to
+Add
→
YAML
→
Import YAML
.
Optional: In the YAML editor, add the
ServiceBinding
resource:
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
name: spring-petclinic-pgcluster
namespace: my-petclinic
spec:
services:
- group: postgres-operator.crunchydata.com
version: v1beta1
kind: PostgresCluster
name: hippo
application:
name: spring-petclinic
group: apps
version: v1
resource: deployments
A service binding request is created and a binding connection is created through a
ServiceBinding
resource. When the database service connection request succeeds, the application is redeployed and the connection is established.
You can also use the context menu by dragging the dangling arrow to add and create a binding connection to an Operator-backed service.
-
In the navigation menu, click
Topology
. The spring-petclinic deployment in the Topology view includes an Open URL link to view its web page.
Click the
Open URL
link.
You can now view the Spring PetClinic sample application remotely to confirm that the application is now connected to the database service and that the data has been successfully projected to the application from the Crunchy PostgreSQL database service.
The Service Binding Operator has successfully created a working connection between the application and the database service.
6.9.4. Verifying the status of your service binding from the Topology view
The
Developer
perspective helps you verify the status of your service binding through the
Topology
view.
Procedure
-
If a service binding was successful, click the binding connector. A side panel appears displaying the
Connected
status under the
Details
tab.
Optionally, you can view the
Connected
status on the following pages from the
Developer
perspective:
The
ServiceBindings
page.
The
ServiceBinding details
page. In addition, the page title displays a
Connected
badge.
If a service binding was unsuccessful, the binding connector shows a red arrowhead and a red cross in the middle of the connection. Click this connector to view the
Error
status in the side panel under the
Details
tab. Optionally, click the
Error
status to view specific information about the underlying problem.
You can also view the
Error
status and a tooltip on the following pages from the
Developer
perspective:
The
ServiceBindings
page.
The
ServiceBinding details
page. In addition, the page title displays an
Error
badge.
In the
ServiceBindings
page, use the
Filter
dropdown to list the service bindings based on their status.
6.9.5. Visualizing the binding connections to resources
As a user, use
Label Selector
in the
Topology
view to visualize a service binding and simplify the process of binding applications to backing services. When creating
ServiceBinding
resources, specify labels by using
Label Selector
to find and connect applications instead of using the name of the application. The Service Binding Operator then consumes these
ServiceBinding
resources and specified labels to find the applications to create a service binding with.
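For example, a ServiceBinding resource that selects workloads by label rather than by name might look similar to the following sketch; the label key and value, and the secret used as the service, are illustrative assumptions:
apiVersion: binding.operators.coreos.com/v1alpha1
kind: ServiceBinding
metadata:
  name: multi-application-binding
  namespace: my-petclinic
spec:
  application:
    labelSelector:
      matchLabels:
        environment: production   # binds every matching Deployment in the namespace
    group: apps
    version: v1
    resource: deployments
  services:
    - group: ""
      version: v1
      kind: Secret
      name: super-secret-data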
To navigate to a list of all connected resources, click the label selector associated with the
ServiceBinding
resource.
To view the
Label Selector
, consider the following approaches:
After you import a
ServiceBinding
resource, view the
Label Selector
associated with the service binding on the
ServiceBinding details
page.
To use
Label Selector
and to create one or more connections at once, you must import the YAML file of the
ServiceBinding
resource.
After the connection is established and when you click the binding connector, the service binding connector
Details
side panel appears. You can view the
Label Selector
associated with the service binding on this panel.
When you delete a binding connector (a single connection within
Topology
along with a service binding), the action removes all connections that are tied to the deleted service binding. While deleting a binding connector, a confirmation dialog appears, which informs you that all connectors will be deleted.
6.9.6. Additional resources
Chapter 7. Working with Helm charts
7.1. Understanding Helm
Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters.
Helm uses a packaging format called
charts
. A Helm chart is a collection of files that describes the OpenShift Container Platform resources.
Installing a chart in a cluster creates a running instance of the chart known as a
release
.
Each time a chart is created, or a release is upgraded or rolled back, an incremental revision is created.
Helm provides the ability to:
Search through a large collection of charts stored in the chart repository.
Modify existing charts.
Create your own charts with OpenShift Container Platform or Kubernetes resources.
Package and share your applications as charts.
7.1.2. Red Hat Certification of Helm charts for OpenShift
You can choose to have your Helm charts verified and certified by Red Hat for all the components that you deploy on Red Hat OpenShift Container Platform. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters.
7.1.3. Additional resources
7.2. Installing Helm
The following section describes how to install Helm on different platforms using the CLI.
You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the
?
icon in the upper-right corner and selecting
Command Line Tools
.
Prerequisites
-
You have installed Go, version 1.13 or higher.
-
Download the Helm binary and add it to your path:
Linux (x86_64, amd64)
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm
-
Linux on IBM Z and IBM® LinuxONE (s390x)
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm
-
Linux on IBM Power (ppc64le)
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm
Make the binary file executable:
# chmod +x /usr/local/bin/helm
Check the installed version:
$ helm version
-
Download the latest
.exe
file
and put it in a directory of your preference.
Right click
Start
and click
Control Panel
.
Select
System and Security
and then click
System
.
From the menu on the left, select
Advanced systems settings
and click
Environment Variables
at the bottom.
Select
Path
from the
Variable
section and click
Edit
.
Click
New
and type the path to the folder with the
.exe
file into the field or click
Browse
and select the directory, and click
OK
.
-
Download the latest
.exe
file
and put it in a directory of your preference.
Click
Search
and type
env
or
environment
.
Select
Edit environment variables for your account
.
Select
Path
from the
Variable
section and click
Edit
.
Click
New
and type the path to the directory with the .exe file into the field or click
Browse
and select the directory, and click
OK
.
-
Download the Helm binary and add it to your path:
# curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm
-
Make the binary file executable:
# chmod +x /usr/local/bin/helm
-
Check the installed version:
$ helm version
7.3. Configuring custom Helm chart repositories
You can create Helm releases on an OpenShift Container Platform cluster using the following methods:
The CLI.
The
Developer
perspective of the web console.
The
Developer Catalog
, in the
Developer
perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see
the Red Hat
Helm index
file
.
As a cluster administrator, you can add multiple cluster-scoped and namespace-scoped Helm chart repositories, separate from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the
Developer Catalog
.
As a regular user or project member with the appropriate role-based access control (RBAC) permissions, you can add multiple namespace-scoped Helm chart repositories, apart from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the
Developer Catalog
.
In the
Developer
perspective of the web console, you can use the
Helm
page to:
Create Helm Releases and Repositories using the
Create
button.
Create, update, or delete a cluster-scoped or namespace-scoped Helm chart repository.
View the list of existing Helm chart repositories in the Repositories tab; each repository is identified as either cluster scoped or namespace scoped.
7.3.1. Installing a Helm chart on an OpenShift Container Platform cluster
Prerequisites
-
You have a running OpenShift Container Platform cluster and you have logged into it.
You have installed Helm.
Procedure
-
Create a new project:
$ oc new-project vault
-
Add a repository of Helm charts to your local Helm client:
$ helm repo add openshift-helm-charts https://charts.openshift.io/
-
Install an example HashiCorp Vault:
$ helm install example-vault openshift-helm-charts/hashicorp-vault
7.3.2. Creating Helm releases using the Developer perspective
You can use either the
Developer
perspective in the web console or the CLI to select and create a release from the Helm charts listed in the
Developer Catalog
. You can create Helm releases by installing Helm charts and see them in the
Developer
perspective of the web console.
7.3.3. Using Helm in the web terminal
You can use Helm by
Accessing the web terminal
in the
Developer
perspective of the web console.
7.3.4. Creating a custom Helm chart on OpenShift Container Platform
Procedure
-
Create a new project:
$ oc new-project nodejs-ex-k
-
Download an example Node.js chart that contains OpenShift Container Platform objects:
$ git clone https://github.com/redhat-developer/redhat-helm-charts
-
Go to the directory with the sample chart:
$ cd redhat-helm-charts/alpha/nodejs-ex-k/
-
Edit the
Chart.yaml
file and add a description of your chart:
apiVersion: v2 1
name: nodejs-ex-k 2
description: A Helm chart for OpenShift 3
icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4
version: 0.2.1 5
1. The chart API version. It should be v2 for Helm charts that require at least Helm 3.
2. The name of your chart.
3. The description of your chart.
4. The URL to an image to be used as an icon.
5. The version of your chart, as per the Semantic Versioning (SemVer) 2.0.0 specification.
Verify that the chart is formatted properly:
$ helm lint
-
Install the chart:
$ helm install nodejs-chart nodejs-ex-k
-
Verify that the chart has installed successfully:
$ helm list
7.3.5. Adding custom Helm chart repositories
As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the
Developer Catalog
.
Procedure
-
To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster.
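For example, a minimal cluster-scoped repository CR might look like the following sketch. The name and URL reuse the Azure sample repository referenced later in this chapter; the commented spec.name display field is included only as an assumed optional setting:
$ cat <<EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: azure-sample-repo
spec:
  # optional display name for the Developer Catalog (assumed optional field)
  # name: azure-sample-repo
  connectionConfig:
    url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs
EOF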
7.3.6. Adding namespace-scoped custom Helm chart repositories
The cluster-scoped
HelmChartRepository
custom resource definition (CRD) for Helm repository provides the ability for administrators to add Helm repositories as custom resources. The namespace-scoped
ProjectHelmChartRepository
CRD allows project members with the appropriate role-based access control (RBAC) permissions to create Helm repository resources of their choice but scoped to their namespace. Such project members can see charts from both cluster-scoped and namespace-scoped Helm repository resources.
Administrators can limit users from creating namespace-scoped Helm repository resources. By limiting users, administrators have the flexibility to control the RBAC through a namespace role instead of a cluster role. This avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications.
The addition of the namespace-scoped Helm repository does not impact the behavior of the existing cluster-scoped Helm repository.
As a regular user or project member with the appropriate RBAC permissions, you can add custom namespace-scoped Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the
Developer Catalog
.
Procedure
-
To add a new namespace-scoped Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your namespace.
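As a sketch, and assuming the namespace-scoped CR accepts the same connectionConfig block as the cluster-scoped CR, a project member could add a repository to their own namespace as follows; the my-namespace value is illustrative:
$ cat <<EOF | oc apply --namespace my-namespace -f -
apiVersion: helm.openshift.io/v1beta1
kind: ProjectHelmChartRepository
metadata:
  name: azure-sample-repo
spec:
  connectionConfig:
    url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs
EOF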
7.3.7. Creating credentials and CA certificates to add Helm chart repositories
Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to them. You can use either the web console or the CLI to add credentials and certificates.
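The following is a rough CLI sketch, assuming that the connectionConfig section can reference a CA config map and a TLS client certificate secret stored in the openshift-config namespace, and that the field names are ca and tlsClientConfig. The helm-ca-cert, helm-tls-configs, my-secure-repo names, the file paths, and the <helm_repo_url> placeholder are illustrative:
$ oc create configmap helm-ca-cert \
    --from-file=ca-bundle.crt=/path/to/ca.crt \
    -n openshift-config

$ oc create secret tls helm-tls-configs \
    --cert=/path/to/client.crt --key=/path/to/client.key \
    -n openshift-config

$ cat <<EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: my-secure-repo
spec:
  connectionConfig:
    url: <helm_repo_url>
    tlsClientConfig:
      name: helm-tls-configs
    ca:
      name: helm-ca-cert
EOF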
7.3.8. Filtering Helm Charts by their certification level
You can filter Helm charts based on their certification level in the
Developer Catalog
.
Procedure
-
In the
Developer
perspective, navigate to the
+Add
view and select a project.
From the
Developer Catalog
tile, select the
Helm Chart
option to see all the Helm charts in the
Developer Catalog
.
Use the filters to the left of the list of Helm charts to filter the required charts:
Use the
Chart Repositories
filter to filter charts provided by
Red Hat Certification Charts
or
OpenShift Helm Charts
.
Use the
Source
filter to filter charts sourced from
Partners
,
Community
, or
Red Hat
. Certified charts are indicated with the (
) icon.
The
Source
filter will not be visible when there is only one provider type.
You can now select the required chart and install it.
7.3.9. Disabling Helm Chart repositories
You can disable Helm Charts from a particular Helm Chart Repository in the catalog by setting the
disabled
property in the
HelmChartRepository
custom resource to
true
.
Procedure
-
To disable a Helm Chart repository by using the CLI, add the
disabled: true
flag to the custom resource. For example, to disable the Azure sample chart repository, run:
$ cat <<EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: azure-sample-repo
spec:
  connectionConfig:
    url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs
  disabled: true
EOF
To disable a recently added Helm Chart repository by using the web console:
Go to Custom Resource Definitions and search for the HelmChartRepository custom resource.
Go to Instances, find the repository you want to disable, and click its name.
Go to the YAML tab, add the disabled: true flag in the spec section, and click Save.
7.4. Working with Helm releases
You can use the
Developer
perspective in the web console to update, roll back, or delete a Helm release.
7.4.2. Upgrading a Helm release
You can upgrade a Helm release to upgrade to a new chart version or update your release configuration.
Procedure
-
In the
Topology
view, select the Helm release to see the side panel.
Click
Actions
→
Upgrade Helm Release
.
In the
Upgrade Helm Release
page, select the
Chart Version
you want to upgrade to, and then click
Upgrade
to create another Helm release. The
Helm Releases
page displays the two revisions.
7.4.3. Rolling back a Helm release
If a release fails, you can roll back the Helm release to a previous version.
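If you prefer the CLI, the standard Helm client can perform the same rollback. The example-vault release name and the revision number are illustrative:
$ helm history example-vault

$ helm rollback example-vault 1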
7.4.4. Deleting a Helm release
Procedure
-
In the
Topology
view, right-click the Helm release and select
Delete Helm Release
.
In the confirmation prompt, enter the name of the chart and click
Delete
.
8.1. Understanding Deployment and DeploymentConfig objects
The
Deployment
and
DeploymentConfig
API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects:
A
Deployment
or
DeploymentConfig
object, either of which describes the desired state of a particular component of the application as a pod template.
Deployment
objects involve one or more
replica sets
, which contain a point-in-time record of the state of a deployment as a pod template. Similarly,
DeploymentConfig
objects involve one or more
replication controllers
, which preceded replica sets.
One or more pods, which represent an instance of a particular version of an application.
Use
Deployment
objects unless you need a specific feature or behavior provided by
DeploymentConfig
objects.
8.1.1. Building blocks of a deployment
Deployments and deployment configs are enabled by the use of native Kubernetes API objects
ReplicaSet
and
ReplicationController
, respectively, as their building blocks.
Users do not have to manipulate replica sets, replication controllers, or pods owned by
Deployment
or
DeploymentConfig
objects. The deployment systems ensure changes are propagated appropriately.
If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy.
The following sections provide further details on these objects.
8.1.1.1. Replica sets
A
ReplicaSet
is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time.
Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create.
The following is an example
ReplicaSet
definition:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-1
  labels:
    tier: frontend
spec:
  replicas: 3
  selector: 1
    matchLabels: 2
      tier: frontend
    matchExpressions: 3
      - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
1. A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined.
2. Equality-based selector to specify resources with labels that match the selector.
3. Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend.
8.1.1.2. Replication controllers
Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements.
A replication controller configuration consists of:
The number of replicas desired, which can be adjusted at run time.
A
Pod
definition to use when creating a replicated pod.
A selector for identifying managed pods.
A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the
Pod
definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed.
The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler.
Use a
DeploymentConfig
to create a replication controller instead of creating replication controllers directly.
If you require custom orchestration or do not require updates, use replica sets instead of replication controllers.
The following is an example definition of a replication controller:
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1 1
  selector: 2
    name: frontend
  template: 3
    metadata:
      labels: 4
        name: frontend 5
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
1. The number of copies of the pod to run.
2. The label selector of the pod to run.
3. A template for the pod the controller creates.
4. Labels on the pod should include those from the label selector.
5. The maximum name length after expanding any parameters is 63 characters.
8.1.2. Deployment objects
Kubernetes provides a first-class, native API object type in OpenShift Container Platform called
Deployment
.
Deployment
objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles.
For example, the following deployment definition creates a replica set to bring up one
hello-openshift
pod:
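A minimal definition along these lines, reconstructed here as a sketch; the labels, image tag, and port are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 8080
          protocol: TCP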
8.1.3. DeploymentConfig objects
Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of
DeploymentConfig
objects. In the simplest case, a
DeploymentConfig
object creates a new replication controller and lets it start up pods.
However, OpenShift Container Platform deployments from
DeploymentConfig
objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller.
The
DeploymentConfig
deployment system provides the following capabilities:
A
DeploymentConfig
object, which is a template for running applications.
Triggers that drive automated deployments in response to events.
User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a pod commonly referred to as the deployment process.
A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment.
Versioning of your application to support rollbacks either manually or automatically in case of deployment failure.
Manual replication scaling and autoscaling.
When you create a
DeploymentConfig
object, a replication controller is created representing the
DeploymentConfig
object’s pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one.
Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the
TERM
signal, you can ensure that running user connections are given a chance to complete normally.
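For instance, if your application needs extra time to drain connections after receiving the TERM signal, you can raise the termination grace period in the pod template. This is a minimal sketch; the 60-second value and the container details are illustrative:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  template:
    # ...
    spec:
      # time allowed for the application to shut down after TERM is sent
      terminationGracePeriodSeconds: 60
      containers:
      - name: helloworld
        image: 'image'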
The OpenShift Container Platform
DeploymentConfig
object defines the following details:
The elements of a
ReplicationController
definition.
Triggers for creating a new deployment automatically.
The strategy for transitioning between deployments.
Lifecycle hooks.
Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployer pod remains for an indefinite amount of time after it completes the deployment so that its deployment logs are retained. When a deployment is superseded by another, the previous replication controller is retained to enable easy rollback if needed.
8.1.4. Comparing Deployment and DeploymentConfig objects
Both Kubernetes
Deployment
objects and OpenShift Container Platform-provided
DeploymentConfig
objects are supported in OpenShift Container Platform; however, it is recommended to use
Deployment
objects unless you need a specific feature or behavior provided by
DeploymentConfig
objects.
The following sections go into more detail on the differences between the two object types to further help you decide which type to use.
8.1.4.1. Design
One important difference between
Deployment
and
DeploymentConfig
objects is the properties of the
CAP theorem
that each design has chosen for the rollout process.
DeploymentConfig
objects prefer consistency, whereas
Deployment
objects take availability over consistency.
For
DeploymentConfig
objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you cannot delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod.
However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs.
8.1.4.2. Deployment-specific features
Rollover
The deployment process for
Deployment
objects is driven by a controller loop, in contrast to
DeploymentConfig
objects that use deployer pods for every new rollout. This means that the
Deployment
object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one.
DeploymentConfig
objects can have at most one deployer pod running; otherwise, multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rollouts for
Deployment
objects.
Proportional scaling
Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a
Deployment
object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set.
DeploymentConfig
objects cannot be scaled when a rollout is ongoing because the deployment controller would disagree with the deployer process about the size of the new replication controller.
Pausing mid-rollout
Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes.
8.1.4.3. DeploymentConfig object-specific features
Automatic rollbacks
Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure.
Triggers
Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment:
$ oc rollout pause deployments/<name>
Lifecycle hooks
Deployments do not yet support any lifecycle hooks.
Custom strategies
Deployments do not support user-specified custom deployment strategies.
8.2. Managing deployment processes
8.2.1. Managing DeploymentConfig objects
DeploymentConfig
objects can be managed from the OpenShift Container Platform web console’s
Workloads
page or using the
oc
CLI. The following procedures show CLI usage unless otherwise stated.
8.2.1.1. Starting a deployment
You can start a rollout to begin the deployment process of your application.
Procedure
-
To start a new deployment process from an existing
DeploymentConfig
object, run the following command:
$ oc rollout latest dc/<name>
If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed.
8.2.1.2. Viewing a deployment
You can view a deployment to get basic information about all the available revisions of your application.
Procedure
-
To show details about all recently created replication controllers for the provided
DeploymentConfig
object, including any currently running deployment process, run the following command:
$ oc rollout history dc/<name>
-
To view details specific to a revision, add the
--revision
flag:
$ oc rollout history dc/<name> --revision=1
-
For more detailed information about a
DeploymentConfig
object and its latest revision, use the
oc describe
command:
$ oc describe dc <name>
8.2.1.3. Retrying a deployment
If the current revision of your
DeploymentConfig
object failed to deploy, you can restart the deployment process.
Procedure
-
To restart a failed deployment process:
$ oc rollout retry dc/<name>
If the latest revision of the DeploymentConfig object was deployed successfully, the command displays a message and the deployment process is not retried.
Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed.
8.2.1.4. Rolling back a deployment
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
Procedure
-
To roll back to the last successfully deployed revision of your configuration:
$ oc rollout undo dc/<name>
The
DeploymentConfig
object’s template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with
--to-revision
, then the last successfully deployed revision is used.
Image change triggers on the
DeploymentConfig
object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete.
To re-enable the image change triggers:
$ oc set triggers dc/<name> --auto
Deployment configs also support automatically rolling back to the last successful revision of the configuration if the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system, and it is up to users to fix their configurations.
8.2.1.5. Executing commands inside a container
You can add a command to a container, which modifies the container’s startup behavior by overruling the image’s
ENTRYPOINT
. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
Procedure
-
Add the
command
parameters to the
spec
field of the
DeploymentConfig
object. You can also add an
args
field, which modifies the
command
(or the
ENTRYPOINT
if
command
does not exist).
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  template:
    # ...
    spec:
      containers:
      - name: <container_name>
        image: 'image'
        command:
        - '<command>'
        args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'
For example, to execute the
java
command with the
-jar
and
/opt/app-root/springboots2idemo.jar
arguments:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  template:
    # ...
    spec:
      containers:
      - name: example-spring-boot
        image: 'image'
        command:
        - java
        args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
# ...
8.2.1.6. Viewing deployment logs
Procedure
-
To stream the logs of the latest revision for a given
DeploymentConfig
object:
$ oc logs -f dc/<name>
If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application.
You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually:
$ oc logs --version=1 dc/<name>
8.2.1.7. Deployment triggers
A
DeploymentConfig
object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster.
If no triggers are defined on a
DeploymentConfig
object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
Config change deployment triggers
The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the
DeploymentConfig
object.
If a config change trigger is defined on a
DeploymentConfig
object, the first replication controller is automatically created soon after the
DeploymentConfig
object itself is created and it is not paused.
Image change deployment triggers
The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).
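For illustration, a DeploymentConfig object that uses both trigger types might carry a triggers section like the following sketch; the image stream tag and container name are placeholders:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  # ...
  triggers:
  # redeploy whenever the pod template changes
  - type: ConfigChange
  # redeploy whenever the referenced image stream tag is updated
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - helloworld
      from:
        kind: ImageStreamTag
        name: 'example-image:latest'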
8.2.1.7.1. Setting deployment triggers
Procedure
-
You can set deployment triggers for a
DeploymentConfig
object using the
oc set triggers
command. For example, to set an image change trigger, use the following command:
$ oc set triggers dc/<dc_name> \
--from-image=<project>/<image>:<tag> -c <container_name>
8.2.1.8. Setting deployment resources
A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits.
The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a
Cannot allocate memory
pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies.
Procedure
-
In the following example, each of
resources
,
cpu
,
memory
, and
ephemeral-storage
is optional:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-openshift
# ...
spec:
# ...
  type: "Recreate"
  resources:
    limits:
      cpu: "100m" 1
      memory: "256Mi" 2
      ephemeral-storage: "1Gi" 3
1. cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
2. memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20).
3. ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30).
However, if a quota has been defined for your project, one of the following two items is required:
A
resources
section set with an explicit
requests
:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hello-openshift
# ...
spec:
# ...
  type: "Recreate"
  resources:
    requests: 1
      cpu: "100m"
      memory: "256Mi"
      ephemeral-storage: "1Gi"
1. The requests object contains the list of resources that correspond to the list of resources in the quota.
A limit range defined in your project, where the defaults from the
LimitRange
object apply to pods created during the deployment process.
To set deployment resources, choose one of the above options. Otherwise, deployer pod creation fails, citing a failure to satisfy quota.
8.2.1.9. Scaling manually
In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them.
Pods can also be auto-scaled using the
oc autoscale
command.
Procedure
-
To manually scale a
DeploymentConfig
object, use the
oc scale
command. For example, the following command sets the replicas in the
frontend
DeploymentConfig
object to
3
.
$ oc scale dc frontend --replicas=3
The number of replicas eventually propagates to the desired and current state of the deployment configured by the
DeploymentConfig
object
frontend
.
8.2.1.10. Accessing private repositories from DeploymentConfig objects
You can add a secret to your
DeploymentConfig
object so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method.
Procedure
-
Create a new project.
Navigate to
Workloads
→
Secrets
.
Create a secret that contains credentials for accessing a private image repository.
Navigate to
Workloads
→
DeploymentConfigs
.
Create a
DeploymentConfig
object.
On the
DeploymentConfig
object editor page, set the
Pull Secret
and save your changes.
8.2.1.11. Assigning pods to specific nodes
You can use node selectors in conjunction with labeled nodes to control pod placement.
Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a
Pod
configuration to restrict nodes even further.
Procedure
-
To add a node selector when creating a pod, edit the
Pod
configuration, and add the
nodeSelector
value. This can be added to a single
Pod
configuration, or in a
Pod
template:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
# ...
spec:
  nodeSelector:
    disktype: ssd
# ...
Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator.
For example, if a project has the
type=user-node
and
region=east
labels added to a project by the cluster administrator, and you add the above
disktype: ssd
label to a pod, the pod is only ever scheduled on nodes that have all three labels.
Labels can only be set to one value, so setting a node selector of
region=west
in a
Pod
configuration that has
region=east
as the administrator-set default, results in a pod that will never be scheduled.
8.2.1.12. Running a pod with a different service account
You can run a pod with a service account other than the default.
Procedure
-
Edit the
DeploymentConfig
object:
$ oc edit dc/<deployment_config>
-
Add the
serviceAccount
and
serviceAccountName
parameters to the
spec
field, and specify the service account you want to use:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-dc
# ...
spec:
# ...
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
8.3. Using deployment strategies
Deployment strategies
are used to change or upgrade applications without downtime so that users barely notice a change.
Because users generally access applications through a route handled by a router, deployment strategies can focus on
DeploymentConfig
object features or routing features. Strategies that focus on
DeploymentConfig
object features impact all routes that use the application. Strategies that use router features target individual routes.
Most deployment strategies are supported through the
DeploymentConfig
object, and some additional strategies are supported through router features.
8.3.1. Choosing a deployment strategy
Consider the following when choosing a deployment strategy:
Long-running connections must be handled gracefully.
Database conversions can be complex and must be done and rolled back along with the application.
If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition.
You must have the infrastructure to do this.
If you have a non-isolated test environment, you can break both new and old versions.
A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the
DeploymentConfig
object retries to run the pod until it times out. The default timeout is
10m
, a value set in
TimeoutSeconds
in
dc.spec.strategy.*params
.
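As a sketch, the timeout for the rolling strategy can be adjusted in the strategy parameters of the DeploymentConfig object; the 600-second value shown here is only an example:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  # ...
  strategy:
    type: Rolling
    rollingParams:
      # how long to wait for pods to become ready before the rollout fails
      timeoutSeconds: 600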
8.3.2. Rolling strategy
A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a
DeploymentConfig
object.
A rolling deployment typically waits for new pods to become
ready
via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted.
When to use a rolling deployment:
When you want to take no downtime during an application update.
When your application supports having old code and new code running at the same time.
A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility.
When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure.
The
maxUnavailable
parameter is the maximum number of pods that can be unavailable during the update. The
maxSurge
parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g.,
10%
) or an absolute value (e.g.,
2
). The default value for both is
25%
.
These parameters allow the deployment to be tuned for availability and speed. For example:
maxUnavailable=0
and
maxSurge=20%
ensures full capacity is maintained during the update and rapid scale up.
maxUnavailable=10%
and
maxSurge=0
performs an update using no extra capacity (an in-place update).
maxUnavailable=10%
and
maxSurge=10%
scales up and down quickly with some potential for capacity loss.
Generally, if you want fast rollouts, use
maxSurge
. If you have to take into account resource quota and can accept partial unavailability, use
maxUnavailable
.
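For example, the following is a sketch of a rolling strategy tuned to keep full capacity during an update; the parameter values are illustrative:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  # ...
  strategy:
    type: Rolling
    rollingParams:
      # never drop below the desired replica count during the update
      maxUnavailable: 0
      # allow up to 20% extra pods while new pods come up
      maxSurge: 20%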
The default setting for
maxUnavailable
is
1
for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to
3
for the control plane pool.
8.3.2.1. Canary deployments
All rolling deployments in OpenShift Container Platform are
canary deployments
; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the
DeploymentConfig
object will be automatically rolled back.
The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy.
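Because the rollout decision hinges on the readiness check, the pod template needs a readiness probe. The following is a minimal sketch with an assumed /healthz endpoint on port 8080; adapt the path, port, and timings to your application:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  template:
    # ...
    spec:
      containers:
      - name: helloworld
        image: 'image'
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10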
8.3.2.2. Creating a rolling deployment
Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI.
Procedure
-
Create an application based on the example deployment images found in
Quay.io
:
$ oc new-app quay.io/openshifttest/deployment-example:latest
This image does not expose any ports. If you want to expose your applications over an external LoadBalancer service or enable access to the application over the public internet, create a service by using the
oc expose dc/deployment-example --port=<port>
command after completing this procedure.
If you have the router installed, make the application available via a route or use the service IP directly.
$ oc expose svc/deployment-example
-
Browse to the application at
deployment-example.<project>.<router_domain>
to verify you see the
v1
image.
Scale the
DeploymentConfig
object up to three replicas:
$ oc scale dc/deployment-example --replicas=3
-
Trigger a new deployment automatically by tagging a new version of the example as the
latest
tag:
$ oc tag deployment-example:v2 deployment-example:latest
-
In your browser, refresh the page until you see the
v2
image.
When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1:
$ oc describe dc deployment-example
During the deployment process, the new replication controller is incrementally scaled up. After the new pods are marked as
ready
(by passing their readiness check), the deployment process continues.
If the pods do not become ready, the process aborts, and the deployment rolls back to its previous version.
8.3.2.3. Editing a deployment by using the Developer perspective
You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the
Developer
perspective.
Prerequisites
-
You are in the
Developer
perspective of the web console.
You have created an application.
Procedure
-
Navigate to the
Topology
view.
Click your application to see the
Details
panel.
In the
Actions
drop-down menu, select
Edit Deployment
to view the
Edit Deployment
page.
You can edit the following
Advanced options
for your deployment:
Optional: You can pause rollouts by clicking
Pause rollouts
, and then selecting the
Pause rollouts for this deployment
checkbox.
By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time.
Optional: Click
Scaling
to change the number of instances of your image by modifying the number of
Replicas
.
Click
Save
.
8.3.2.4. Starting a rolling deployment using the Developer perspective
You can upgrade an application by starting a rolling deployment.
Prerequisites
-
You are in the
Developer
perspective of the web console.
You have created an application.
Procedure
-
In the
Topology
view, click the application node to see the
Overview
tab in the side panel. Note that the
Update Strategy
is set to the default
Rolling
strategy.
In the
Actions
drop-down menu, select
Start Rollout
to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one.
8.3.3. Recreate strategy
The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.
During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.
When to use a recreate deployment:
When you must run migrations or other data transformations before your new code starts.
When you do not support having new and old versions of your application code running at the same time.
When you want to use an RWO volume, which cannot be shared between multiple replicas.
A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time.
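The following is a sketch of a recreate strategy with a pre lifecycle hook, for example to run a migration before the new pods start; the container name, command, and failure policy shown are illustrative:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  # ...
  strategy:
    type: Recreate
    recreateParams:
      # hook that runs before the new pods are scaled up
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld
          command: [ "/bin/sh", "-c", "run-db-migration" ]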
8.3.3.1. Editing a deployment by using the Developer perspective
You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the
Developer
perspective.
Prerequisites
-
You are in the
Developer
perspective of the web console.
You have created an application.
Procedure
-
Navigate to the
Topology
view.
Click your application to see the
Details
panel.
In the
Actions
drop-down menu, select
Edit Deployment
to view the
Edit Deployment
page.
You can edit the following
Advanced options
for your deployment:
Optional: You can pause rollouts by clicking
Pause rollouts
, and then selecting the
Pause rollouts for this deployment
checkbox.
By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time.
Optional: Click
Scaling
to change the number of instances of your image by modifying the number of
Replicas
.
Click
Save
.
8.3.3.2. Starting a recreate deployment using the Developer perspective
You can switch the deployment strategy from the default rolling update to a recreate update using the
Developer
perspective in the web console.
Prerequisites
-
Ensure that you are in the
Developer
perspective of the web console.
Ensure that you have created an application using the
Add
view and see it deployed in the
Topology
view.
8.3.4. Custom strategy
The custom strategy allows you to provide your own deployment behavior.
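As a sketch, a custom strategy points at an image that carries the deployment logic; the image, command, and environment values below are placeholders:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: example-dc
# ...
spec:
  # ...
  strategy:
    type: Custom
    customParams:
      # image that runs the custom deployment process
      image: organization/strategy
      command: [ "command", "arg1" ]
      environment:
      - name: ENV_1
        value: VALUE_1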