Red Hat Advanced Cluster Management for Kubernetes (ACM) provides management, visibility and control for your OpenShift and Kubernetes environments. It provides management capabilities for clusters and applications, all across hybrid cloud environments.
Clusters and applications are visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.
Before you can start using ACM, you have to install it using an Operator on your OpenShift cluster.
In the OpenShift web console, open the OperatorHub and search for the Advanced Cluster Management for Kubernetes operator.
Install the operator; it is installed into the open-cluster-management namespace by default.
After the operator has been installed it will prompt you to create a MultiClusterHub, the central component of ACM.
Click the Create MultiClusterHub button and have a look at the available installation parameters, but don’t change anything.
Click Create.
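Behind the scenes the console creates a MultiClusterHub resource for you. A minimal sketch of what such a manifest looks like (the name and namespace are assumptions matching a default installation):

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub               # default name suggested by the operator
  namespace: open-cluster-management  # default installation namespace
spec: {}                              # empty spec = default installation parameters
```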
At some point you will be asked to refresh the web console. Do this, and you'll notice a new drop-down menu at the top of the left menu bar. If left set to local-cluster you get the standard console view; switching to All Clusters takes you to a view provided by ACM covering all your clusters.
Okay, right now you'll only see one, your local-cluster, listed here.
Now let’s change to the full ACM console:
While still in the local-cluster view, check the multiclusterhub instance you deployed; it should be in Status Running by now.
Then switch the drop-down menu to All Clusters.
You are now in your ACM dashboard!
Have a look around:
One of the main features of Advanced Cluster Management is cluster lifecycle management. ACM can help to create, import, upgrade and destroy Kubernetes clusters across cloud providers and in your own datacenter.
Let’s give this a try!
Okay, so as not to overstress our cloud resources, and for the fun of it, we'll deploy a Single Node OpenShift (SNO) cluster into the same AWS account your lab cluster is running in.
The first step is to create credentials in ACM to deploy to the Amazon Web Services account.
You'll get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needed to deploy to AWS from your facilitators.
In the ACM console, navigate to Credentials and add a new credential:
- Credential name: sno
- Credential type: AWS
- Namespace: sno
- Base DNS domain: sandbox<NNNN>.opentlc.com, replace <NNNN> with your id; you can find it e.g. in the URL
- Access key ID and Secret access key: as provided
- Pull secret: switch to the project openshift-config and copy the content of the secret pull-secret
- SSH private key ($HOME/.ssh/<LABID>key.pem) and public key ($HOME/.ssh/<LABID>key.pub); <LABID> can be found in the URL, e.g. multicloud-console.apps.cluster-z48z9.z48z9.sandbox910.opentlc.com
You have created a new set of credentials to deploy to the AWS account you are using.
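For reference, ACM stores such a provider credential as a Kubernetes Secret in the chosen namespace. A hedged sketch of its shape (field names follow the usual ACM AWS credential layout; all values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sno
  namespace: sno
  labels:
    cluster.open-cluster-management.io/type: aws   # marks this Secret as an AWS credential
type: Opaque
stringData:
  aws_access_key_id: <AWS_ACCESS_KEY_ID>
  aws_secret_access_key: <AWS_SECRET_ACCESS_KEY>
  baseDomain: sandbox<NNNN>.opentlc.com
  pullSecret: '<content of the pull-secret from openshift-config>'
  ssh-privatekey: |
    <content of $HOME/.ssh/<LABID>key.pem>
  ssh-publickey: |
    <content of $HOME/.ssh/<LABID>key.pub>
```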
Now you'll deploy a new OpenShift instance:
- In the Clusters section, create a new cluster on Amazon Web Services infrastructure.
- Name your cluster and select the sno credential you created.
- Region: us-east-1
- Instance type: m5.2xlarge
- Compute node count: 0 (we want a single node OCP…)
Now click Next until you arrive at the Review. Do the following:
- Switch YAML: On
- In the controlPlane section change the replicas field to 1.
It's time to deploy your cluster, click Create!
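With the replicas set as above, the relevant part of the install-config inside the cluster YAML looks roughly like this (a sketch showing only the fields you touched):

```yaml
controlPlane:
  name: master
  replicas: 1      # changed from 3: a single control plane node that also runs workloads
compute:
  - name: worker
    replicas: 0    # no separate worker nodes for Single Node OpenShift
```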
ACM monitors the installation of the new cluster and finally imports it. Click View logs under Cluster install to follow the installation log.
Installation of a SNO takes around 30 minutes in our lab environment.
After the installation has finished, access the Clusters section in the ACM portal again.
Explore the information ACM is providing, including the Console URL and the access credentials of your shiny new SNO instance. Use them to log in to the SNO Web Console.
In the previous lab, you explored the Cluster Lifecycle functionality of RHACM by deploying a new OpenShift single-node instance to AWS. Now let’s have a look at another capability, Application Lifecycle management.
Application Lifecycle management is used to manage applications on your clusters. This allows you to define a single or multi-cluster application using Kubernetes specifications, but with additional automation of the deployment and lifecycle management of resources to individual clusters. An application designed to run on a single cluster is straightforward and something you ought to be familiar with from working with OpenShift fundamentals. A multi-cluster application allows you to orchestrate the deployment of these same resources to multiple clusters, based on a set of rules you define for which clusters run the application components.
The naming convention of the different components of the Application Lifecycle model in RHACM is as follows:
- Channel: points to the repository (e.g. a Git repo) where the deployable resources are stored.
- Subscription: subscribes managed clusters to a channel.
- PlacementRule: defines which target clusters the subscribed resources are deployed to.
- Application: groups one or more subscriptions into a single manageable unit.
Start with adding labels to your two OpenShift clusters in your ACM console:
- Add the label environment=prod to one of the clusters, e.g. your local-cluster.
- Add the label environment=dev to the other, e.g. your new SNO cluster.
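Cluster labels live on the ManagedCluster resource on the hub, so setting them in the console is equivalent to something like this sketch (assuming the new cluster is named sno):

```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: sno
  labels:
    environment: dev   # used by placement rules to select this cluster
spec:
  hubAcceptsClient: true
```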
Now it’s time to actually deploy the application. But first have a look at the manifest definitions ACM will use as deployables at https://github.com/devsecops-workshop/book-import/tree/master/book-import.
Then in the ACM console navigate to Applications:
- Create a new application and give it a name and namespace.
- Choose GIT as the repository type and point it at the repository above (branch master, path book-import).
- Restrict the deployment to clusters matching the label environment=dev.
Click Create and then the topology tab to view the application being deployed.
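Under the hood the console generates the model components described above. A hedged sketch of what they could look like for this app (names and namespace are illustrative):

```yaml
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: book-import-channel
  namespace: book-import
spec:
  type: Git
  pathname: https://github.com/devsecops-workshop/book-import.git
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: book-import-placement
  namespace: book-import
spec:
  clusterSelector:
    matchLabels:
      environment: dev        # change this to prod to move the app
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: book-import-subscription
  namespace: book-import
  annotations:
    apps.open-cluster-management.io/git-branch: master
    apps.open-cluster-management.io/git-path: book-import
spec:
  channel: book-import/book-import-channel
  placement:
    placementRef:
      kind: PlacementRule
      name: book-import-placement
---
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: book-import
  namespace: book-import
spec:
  componentKinds:
    - group: apps.open-cluster-management.io
      kind: Subscription
  selector:
    matchLabels:
      app: book-import
```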
Now edit the application in the ACM console and change the label to environment=prod. What happens?
In this simple example you have seen how to deploy an application to an OpenShift cluster using ACM. All manifests defining the application were kept in a Git repo; ACM then used the manifests to deploy the required objects into the target cluster.
You can integrate Ansible Automation Platform and the Automation Controller (formerly known as Ansible Tower) with ACM to perform pre/post tasks within the application lifecycle engine. Prehook and posthook tasks allow you to trigger an Ansible playbook before and after the application is deployed, respectively.
Note that you will need a Red Hat account with a valid Ansible subscription for this part.
To give this a try you need an Automation Controller instance. So let's deploy one on your cluster using the AAP Operator:
- In the OperatorHub, search for the Ansible Automation Platform operator and install it using the default settings.
- Using the operator, create an automationcontroller instance.
- Retrieve the admin password from the automationcontroller-admin-password secret.
- Look up the automationcontroller route, access it and log in as user admin using the password from the secret.
You are now set with a shiny new Ansible Automation Platform Controller!
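A minimal sketch of the AutomationController resource behind that instance (the name matches the steps above; the namespace and replica count are assumptions):

```yaml
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: automationcontroller
  namespace: aap     # assumption: the namespace the AAP operator was installed into
spec:
  replicas: 1        # a single controller pod is enough for this lab
```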
In the Automation Controller web UI, generate a token for the admin user:
- Click the admin user and select Tokens.
- Add a token with Description Token for use by ACM and Scope Write.
Save the token value to a text file, you will need this token later!
For Automation Controller to run something we must configure a Project and a Template first.
Create an Ansible Project:
Create an Ansible Job Template:
Verify that the job ran by going to Jobs and looking for an acm-test job showing a successful playbook run.
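For reference, the playbook behind the acm-test job can be as simple as this sketch (the file and play names are assumptions; per this lab it only outputs a message):

```yaml
# hello-acm.yml - hypothetical minimal playbook for the acm-test template
- name: ACM integration test
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Output a message
      ansible.builtin.debug:
        msg: "Hello from Automation Controller, triggered for ACM!"
```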
Set up the credential which is going to allow ACM to interact with your AAP instance in your ACM Portal: navigate to Credentials and add a credential of type Red Hat Ansible Automation Platform, providing the URL of your Automation Controller and the token you saved earlier.
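As with the AWS credential, this ends up as a Secret on the hub. A hedged sketch (the secret name, namespace, type label and field names are assumptions based on the usual ACM credential layout):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: toweraccess
  namespace: book-import        # assumption: same namespace as the application
  labels:
    cluster.open-cluster-management.io/type: ans   # marks an Ansible credential
type: Opaque
stringData:
  host: https://<automationcontroller-route>
  token: <the token you saved earlier>
```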
And now let’s configure the ACM integration with Ansible Automation Platform to kick off a job in Automation Controller. In this case the Ansible job will just run our simple playbook that will only output a message.
In the ACM Portal:
Give this a few minutes. The application will complete, and in the application topology view you will see the Ansible prehook. In Automation Controller, go to Jobs and verify that the automation job ran.
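For reference, such a prehook is defined by an AnsibleJob resource placed in a prehook folder of the subscribed Git repo; a hedged sketch (the file, resource and secret names are assumptions):

```yaml
# prehook/pre-deploy.yaml - hypothetical prehook definition in the app's Git repo
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: acm-prehook
spec:
  tower_auth_secret: toweraccess   # the AAP credential created above
  job_template_name: acm-test      # the Job Template to launch
```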