How to deploy your Red Hat Fuse service to OpenShift

Martien van den Akker
Jun 10, 2022 · 7 min read

Lately, I have been playing around with Microcks, a cloud-native, containerized mocking and testing tool. I’d like to write about using Microcks, but I feel that to do so, I first need to set up a case that can illustrate my story. Such a case has many aspects and side stories to tell. For example, how do you run your adapters together with Microcks in a container setup? How do you do the same in OpenShift or other Kubernetes-based platforms?

So let’s start with a series of articles. I foresee the following subjects:

  1. This article, about deploying Fuse services to OpenShift, including a description of the case
  2. A Fuse environment in Docker-Compose
  3. Deploy an operator-based AMQ Broker instance on OpenShift
  4. Using Microcks with SoapUI under docker-compose
  5. Install and use Microcks under OpenShift

I’ll update this list along with the publication of the other articles.

A few weeks ago, I created some scripts to deploy a similar setup on OpenShift, so I want to refactor those for this demo setup first.

Case Introduction

I built the case in the FuseSoapAmqMicrocksDemo project on GitHub. It consists of two Fuse adapter services: a SOAP service that enqueues the incoming message on a queue or topic on an AMQ Broker, and an AMQP-based adapter that dequeues the message and sends it to a SOAP-based API on a remote Enterprise Information System (EIS):

AMQP based Asynchronous decoupling of two SOAP-based Enterprise Information Systems

We’ll use Microcks in a later stage of this series to mock the API of the Target EIS. This setup consists of several components that need to work together. For the Microcks demo, we can use it both as a driver to run the tests, as if it were the Source EIS, and as a stub to mock the Target EIS.

The FuseSoapAmqMicrocksDemo project on GitHub contains two Red Hat Fuse projects:

  1. fuse-adapter-animalorder-soap
  2. fuse-adapter-animalorder-amqp

I have already written articles about publishing and consuming a SOAP Service with Fuse. Besides publishing the messages on the AMQ Broker, there is no other logic in these services. For an explanation of how they work, I’ll refer you to those articles.

In this article, I’ll focus on the deployment of the fuse-adapter-animalorder-soap adapter. But I’ll try to keep it as generic as possible, and I’ll add a similar deployment for the fuse-adapter-animalorder-amqp adapter when I get to it.

Oh, and I do so using just Linux Bash and YAML templates. So, I intentionally leave Helm aside, and you don’t need Helm knowledge for this article. But I’ll probably leave you hungering for a solution such as Helm.

Another side note: I work with Red Hat OpenShift here, and thus use the oc CLI. For a Kubernetes environment, you should replace oc with kubectl in most instructions, and create an Ingress where I suggest creating an (OpenShift) Route.

Keystore Generation

I created a set of scripts to easily (re-)create keystores and truststores, so I don’t have to figure out how to do that over and over again.

It starts with an environment-settings script that saves the important values, like the keystore and truststore locations, the DNAME and Subject Alternative Names (SANs), etc. And, yes, also the passwords. I know that is not secure, but I don’t like to enter those over and over again. I would not share the production ones on GitHub, but then, I would not work on production environments either: we have technical sysadmins to carry that responsibility. And besides, those scripts would reside on servers secured with authentication mechanisms. The keystore_env.sh is as follows:

Example keystore_env.sh script

There is a set of variables for the service, but also for an AMQ Broker.
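A minimal sketch of what it could look like (the variable names and values here are my assumptions; check the repository for the real ones):

```bash
#!/bin/bash
# keystore_env.sh -- shared settings for the keystore scripts (sketch).
SCRIPTPATH=$(dirname "$0")
CONFIG_DIR=$SCRIPTPATH/../../config

# Service keystore settings.
KEYSTORE=$CONFIG_DIR/fuse-adapter-animalorder-soap.jks
KEYSTORE_PASSWORD=changeit
KEY_ALIAS=fuse-adapter-animalorder-soap
DNAME="CN=fuse-adapter-animalorder-soap, OU=Demo, O=Demo, C=NL"
SAN="dns:localhost,dns:fuse-adapter-animalorder-soap"

# AMQ Broker keystore settings.
BROKER_KEYSTORE=$CONFIG_DIR/amq-broker.jks
BROKER_KEYSTORE_PASSWORD=changeit
BROKER_KEY_ALIAS=amq-broker
BROKER_DNAME="CN=amq-broker, OU=Demo, O=Demo, C=NL"
BROKER_SAN="dns:localhost,dns:amq-broker"

# Shared truststore, reused by both components.
TRUSTSTORE=$CONFIG_DIR/truststore.jks
TRUSTSTORE_PASSWORD=changeit
```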

To create a keystore and save the public certificate in the truststore I have the 1.createKeyAndTrustStore.sh script:

It generates a keypair in a new keystore in the configuration folder in the root of the project, using the DNAME and SAN variables from the keystore_env.sh script. Then it exports the public certificate and imports it into the truststore. If the truststore does not exist, it will be created.
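In essence, it boils down to the following keytool commands (a sketch using the variables from the settings script above):

```bash
#!/bin/bash
# 1.createKeyAndTrustStore.sh -- sketch of the key and truststore creation.
SCRIPTPATH=$(dirname "$0")
. "$SCRIPTPATH/keystore_env.sh"

# Generate a keypair with a self-signed certificate in a new keystore,
# using the DNAME and SAN from keystore_env.sh.
keytool -genkeypair -alias "$KEY_ALIAS" -keyalg RSA -keysize 2048 -validity 365 \
  -dname "$DNAME" -ext "SAN=$SAN" \
  -keystore "$KEYSTORE" -storepass "$KEYSTORE_PASSWORD" -keypass "$KEYSTORE_PASSWORD"

# Export the public certificate...
keytool -exportcert -alias "$KEY_ALIAS" \
  -keystore "$KEYSTORE" -storepass "$KEYSTORE_PASSWORD" \
  -file "$CONFIG_DIR/$KEY_ALIAS.cer"

# ...and import it into the shared truststore. keytool creates the
# truststore if it does not exist yet.
keytool -importcert -alias "$KEY_ALIAS" -noprompt \
  -keystore "$TRUSTSTORE" -storepass "$TRUSTSTORE_PASSWORD" \
  -file "$CONFIG_DIR/$KEY_ALIAS.cer"
```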

The script 2.createBrokerKeyStore.sh does exactly the same, but for the broker. The two scripts create separate keystores, but import the public certificates into the same truststore. By reusing the truststore for both purposes, the two components can trust each other.

These scripts generate self-signed certificates, which is fine for this demo setup.

Create and push adapter image

Preparation

To be able to push your image to the OpenShift embedded Docker registry, you might need to expose the registry by setting the defaultRoute parameter in the configs.imageregistry.operator.openshift.io resource. I did this as described in the section Exposing the registry manually in the OpenShift 4.8 documentation.
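That boils down to the following patch, taken from that documentation:

```bash
# Enable the default route to expose the internal image registry.
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"defaultRoute":true}}'
```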

Having done that, you can do a docker login like:

Docker login through oc

This does a docker login using the registry host and the credentials of your current oc session.

For your, and my, convenience I put that in a nice dckr_oc_login.sh script.
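Its gist is something like this (a sketch; it assumes the registry’s default route as exposed above):

```bash
#!/bin/bash
# dckr_oc_login.sh -- sketch: log in to the exposed OpenShift image registry
# with the user and token of the current oc session.
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
docker login -u "$(oc whoami)" -p "$(oc whoami -t)" "$HOST"
```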

To pull and push images, you need to grant the particular user the registry-viewer and registry-editor roles:

grant the registry-viewer and registry-editor policies
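Which comes down to (with $OC_USER holding the user in question):

```bash
# Allow the user to pull from and push to the internal registry.
oc policy add-role-to-user registry-viewer "$OC_USER"
oc policy add-role-to-user registry-editor "$OC_USER"
```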

Build the image

To build the image for the fuse-adapter-animalorder-soap adapter, I created a dckr_bld.sh script. And of course, there is a corresponding dckr_bld.sh script for the fuse-adapter-animalorder-amqp adapter as well.
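Such a build script can be as simple as this sketch (the image name, tag, and Dockerfile location are assumptions):

```bash
#!/bin/bash
# dckr_bld.sh -- sketch: build the adapter image from the project's Dockerfile.
SCRIPTPATH=$(dirname "$0")
IMAGE_NAME=fuse-adapter-animalorder-soap
IMAGE_TAG=latest

docker build -t "$IMAGE_NAME:$IMAGE_TAG" "$SCRIPTPATH/.."
```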

Push the image

To push the image for the fuse-adapter-animalorder-soap adapter, I created a dckr_push.sh script. This script tags and pushes the image to the OpenShift image registry. It uses the same HOST variable as used when logging in to the OpenShift Docker registry described above. The fuse-adapter-animalorder-amqp adapter has a corresponding dckr_push.sh script as well.
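A sketch of what it does (the image name and tag are assumptions; HOST is determined as in dckr_oc_login.sh):

```bash
#!/bin/bash
# dckr_push.sh -- sketch: tag the image for the OpenShift registry and push it.
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
PROJECT=$(oc project -q)   # the current OpenShift project (namespace)
IMAGE_NAME=fuse-adapter-animalorder-soap
IMAGE_TAG=latest

docker tag "$IMAGE_NAME:$IMAGE_TAG" "$HOST/$PROJECT/$IMAGE_NAME:$IMAGE_TAG"
docker push "$HOST/$PROJECT/$IMAGE_NAME:$IMAGE_TAG"
```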

Having built and pushed the images we can go on and generate the deployments of the artifacts to OpenShift.

Generate OpenShift artifact YAMLs — No Helm

For this demo, I use a poor man’s way of doing a deployment. In real life, you would probably at least use Helm to create your artifacts, and I would advise you to use GitOps (ArgoCD) for your deployments. However, for the sake of understanding and learning, I want to create and generate my YAML files using shell scripts. This also shows why a tool such as Helm comes in handy.

Secrets

In this case, there are two secrets to create:

  • fuse-adapter-animalorder-soap-tls-cert: with the keystore for the SOAP Adapter
  • fuse-amq-user-secret: with the username and password for the AMQ Broker

The script create_secret_cert_soap.sh in the scripts/openshift folder uses the template file secret_cert.yml.tpl to create a secret based on the keystore as created earlier in this article. Make sure you have done those steps before executing this script.

In the script, the following lines are interesting:

Base64 Encode the Keystore and password
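Reconstructed, they look something like this (the variable names are my assumption):

```bash
KEYSTORE_BASE64=$(cat "$KEYSTORE" | base64 -w 0)
KEYSTORE_PASSWORD_BASE64=$(echo -n "$KEYSTORE_PASSWORD" | base64 -w 0)
```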

Line 1 “cats” the keystore and pipes it to the base64 tool, which encodes the file. The “-w 0” parameter is important to have it encoded into a single line. When omitted, the file is encoded into multiple lines of 76 characters, which would corrupt the YAML file that results from the script.

Line 2 encodes the password. Important here is the “-n” parameter of echo: when omitted, echo appends a newline character, which would corrupt the password value in the environment variable.

The script also has similar lines for the truststore.

Then the script uses envsubst to expand the template file with the environment variables. This results in the fuse-adapter-animalorder-soap-tls-cert.yml file, which contains the keystore and the truststore with their respective passwords.
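A sketch of what the template could look like (the data keys are assumptions; note that envsubst only substitutes variables that are exported):

```yaml
# secret_cert.yml.tpl -- sketch
apiVersion: v1
kind: Secret
metadata:
  name: fuse-adapter-animalorder-soap-tls-cert
type: Opaque
data:
  keystore.jks: ${KEYSTORE_BASE64}
  keystore-password: ${KEYSTORE_PASSWORD_BASE64}
  truststore.jks: ${TRUSTSTORE_BASE64}
  truststore-password: ${TRUSTSTORE_PASSWORD_BASE64}
```

The expansion then is simply envsubst < secret_cert.yml.tpl > fuse-adapter-animalorder-soap-tls-cert.yml.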

The script create_user_secret_amq.sh works in a similar way to create a secret for the AMQ user, based on the secret_user.yml.tpl file. It results in the YAML file fuse-amq-user-secret.yml with the username/password for the AMQ Broker.

Deployment

In essence, the script create_deployment_soap.sh functions in the same way. However, the template is split into three parts:

  1. deployment.yml.part1.tpl: the head of the deployment
  2. deployment_soap-env.yml.tpl: the environment variables specific to the SOAP adapter. This part may differ significantly between different adapter deployments.
  3. deployment.yml.part2.tpl: the tail of the deployment

The script create_deployment_soap.sh creates the deployment by concatenating the expanded templates above, resulting in the YAML deployment_fuse-adapter-animalorder-soap.yml. It sources environment-settings scripts, such as adapter_soap_env.sh (referenced in the Service section below), for the variable values.
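The core of the script could look like this sketch:

```bash
#!/bin/bash
# create_deployment_soap.sh -- sketch: expand and concatenate the template parts.
SCRIPTPATH=$(dirname "$0")
. "$SCRIPTPATH/adapter_soap_env.sh"   # settings: ports, image, etc.

TARGET=$SCRIPTPATH/deployment_fuse-adapter-animalorder-soap.yml
envsubst < "$SCRIPTPATH/deployment.yml.part1.tpl"    >  "$TARGET"
envsubst < "$SCRIPTPATH/deployment_soap-env.yml.tpl" >> "$TARGET"
envsubst < "$SCRIPTPATH/deployment.yml.part2.tpl"    >> "$TARGET"
```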

After creating the deployment, it also creates the accompanying service and route.

Service

A ClusterIP-type Service is generated using the service.yml.tpl template. It references the port settings in adapter_soap_env.sh, which are also used in deployment.yml.part1.tpl to define the HTTP port in the deployment.

It results in the service definition in the YAML service_fuse-adapter-animalorder-soap.yml.
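A sketch of what service.yml.tpl could look like (the variable names are assumptions):

```yaml
# service.yml.tpl -- sketch
apiVersion: v1
kind: Service
metadata:
  name: ${ADAPTER_NAME}
spec:
  type: ClusterIP
  selector:
    app: ${ADAPTER_NAME}
  ports:
    - name: https
      protocol: TCP
      port: ${ADAPTER_PORT}
      targetPort: ${ADAPTER_PORT}
```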

Route

In the same way, the template route.yml.tpl is expanded to route_fuse-adapter-animalorder-soap.yml.
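A sketch of such a route template, assuming TLS passthrough to the adapter’s own keystore:

```yaml
# route.yml.tpl -- sketch
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: ${ADAPTER_NAME}
spec:
  to:
    kind: Service
    name: ${ADAPTER_NAME}
  port:
    targetPort: https
  tls:
    termination: passthrough
```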

A route is an OpenShift-specific concept. I love it, because the Kubernetes counterpart, the Ingress, requires more setup and configuration. Creating a route in OpenShift results in an Ingress object that OpenShift will keep synchronized on changes of the routes and vice versa. Read more on Kubernetes ingress vs OpenShift route.

If you want to use this on a (more or less) plain Kubernetes platform, then create your own Ingress.

Deploy/Undeploy

After generating the YAMLs you can apply them one by one, with for instance:
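```bash
# Create one of the generated artifacts, for example the service definition.
oc create -f service_fuse-adapter-animalorder-soap.yml
```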

Or use oc apply.

But, to apply them all in the proper order, I created deploy_soap.sh.
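In essence, it applies the generated YAMLs in order (a sketch):

```bash
#!/bin/bash
# deploy_soap.sh -- sketch: apply the generated YAMLs in the proper order.
SCRIPTPATH=$(dirname "$0")

oc apply -f "$SCRIPTPATH/fuse-adapter-animalorder-soap-tls-cert.yml"
oc apply -f "$SCRIPTPATH/fuse-amq-user-secret.yml"
oc apply -f "$SCRIPTPATH/deployment_fuse-adapter-animalorder-soap.yml"
oc apply -f "$SCRIPTPATH/service_fuse-adapter-animalorder-soap.yml"
oc apply -f "$SCRIPTPATH/route_fuse-adapter-animalorder-soap.yml"
```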

Among other things, this results in the deployment, including the environment variables:

Environment of the fuse-adapter-animalorder-soap deployment

And this results in the adapter pod started from the deployment:

Pod from the deployment

To quickly remove them, use undeploy_soap.sh.

Conclusion

And that, …, is how it’s done! Well, without Helm of course.

I’ll add the deployment of the AMQP adapter later when I get to it. And I have not yet described how to configure the AMQ Broker; maybe later. So, for now, the adapter will raise connection-refused exceptions.

Good luck with your OpenShift or Kubernetes deployments.
