Install RTF and Deploy Simple Apps to Your AKS Cluster Using AGIC

Installation of Anypoint Runtime Fabric on an Azure Kubernetes Service using the Application Gateway Ingress Controller (Part 2/3)


This is the second part of a three-part series covering the installation of Anypoint Runtime Fabric (RTF) on an Azure Kubernetes Service (AKS) using the Application Gateway Ingress Controller (AGIC). For the AKS installation steps, see the first post here. In this part, traffic will be secured with TLS up to the Ingress Controller, while the last mile will remain plain HTTP. For instructions on configuring last-mile TLS, check out the next blog post here.

Prerequisites

Installing RTF

This section describes the step-by-step process of installing Runtime Fabric (RTF) and protecting the Application Gateway Ingress Controller with TLS. Common pitfalls and troubleshooting tips are covered later in the post. RTF gives us two installation options:

  • Helm: This is the standard approach and gives users more control over the process, but you will need to run more commands to set up your environment.
  • rtfctl: This second option is more straightforward to set up, but you will have fewer options to customize your environment.

In this tutorial, we will use the Helm installation.

Step 1

You will need to find the instructions in your Anypoint Platform account. Go to Runtime Manager → Runtime Fabrics and click Create Runtime Fabric:

RTF p2 1 API People

Step 2

A popup will open, asking for a Name and prompting you to select Azure Kubernetes Service. Fill in the fields and click Next:

RTF p2 2 API People

Step 3

Another popup will appear, asking you to accept the responsibility terms:

RTF p2 3 API People

Step 4

Once accepted, the Runtime Manager base screen will appear, giving you two installation options. Follow the on-screen instructions for Helm.

RTF p2 4 API People

Step 5

The values.yaml will have two fields to be verified and, if blank, filled in. The activationData should be pre-filled by MuleSoft, while muleLicense must be set to the base64 encoding of your Mule license file. You can use the following command to encode your license (the -b flag is for macOS; on Linux, GNU base64 uses -w 0 instead):

base64 -b 0 -i <your-mule-license>.lic

This will output something like the following, but longer:

2+W35i...NOjxk=
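The encoded value can be sanity-checked with a round trip. Below is a minimal sketch (the file example.lic and its content are illustrative stand-ins for your real license) that also handles the flag difference between GNU (Linux) and BSD/macOS base64:

```shell
# Dummy license file for illustration only; use your real .lic file.
printf 'dummy-license-content' > example.lic

# GNU base64 (Linux) disables line wrapping with -w 0; BSD/macOS uses -b 0.
if base64 --help 2>&1 | grep -q -- '-w'; then
  encoded=$(base64 -w 0 example.lic)
else
  encoded=$(base64 -b 0 -i example.lic)
fi

# Round trip: decoding the encoded value must return the original bytes.
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```

If the decoded output matches the original file contents, the value is safe to paste into values.yaml.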

Step 6

Update the fields in your values.yaml file and execute the remaining steps on your Runtime Manager screen to apply the values.yaml file. RTF will be installed, and it will take some time to come up.

activationData: <your-activation-data>
proxy:
  http_proxy:
  http_no_proxy:
  monitoring_proxy:
muleLicense: <your-mule-license>
customLog4jEnabled: false
global:
  crds:
    install: true
  authorizedNamespaces: false
  image:
    rtfRegistry: rtf-runtime-registry.kprod.msap.io
    pullSecretName: rtf-pull-secret
  containerLogPaths:
  - /var/lib/docker/containers
  - /var/log/containers
  - /var/log/pods
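Before running the helm commands, it can help to verify that the two required fields were actually filled in. The following is a small sketch of my own, not part of the official instructions; it assumes the file is named values.yaml and that unfilled placeholders still contain the `<your-` prefix:

```shell
# Returns a short verdict on whether a values file still contains
# placeholders for the two required fields.
check_values() {
  f="$1"
  if grep -Eq '^activationData: .+' "$f" \
     && grep -Eq '^muleLicense: .+' "$f" \
     && ! grep -q '<your-' "$f"; then
    echo "looks filled in"
  else
    echo "still has placeholders"
  fi
}

# Usage: check_values values.yaml
```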

Once the RTF is active, you will see the All systems operational message like the following image:

RTF p2 5 API People

Step 7

With that, you can go to the Associated Environments tab and associate the RTF with the environment you will use. Once selected, click on Apply Allocations.

RTF p2 7 API People

This will make your Runtime Fabric available for deployment under the Business Group and Environments selected.

Create a certificate with Let’s Encrypt

To keep the implementation cheaper, instead of using a purchased certificate together with Azure Key Vault, we will use Let’s Encrypt to create a valid certificate for our Ingress Controller.

Step 1

First, make sure you are logged into your Azure account with the right Kubernetes cluster referenced:

resourceGroup="<your-rg>"    # Resource Group name to be used
aksCluster="<your-aks-name>" # AKS Cluster name
az login
az account set --subscription <name or id>
az aks get-credentials -n $aksCluster -g $resourceGroup --overwrite-existing --admin

Step 2

Create a script to install cert-manager (the Helm chart that integrates with Let’s Encrypt) into your cluster. Create a file named create-lets-encrypt-helm.sh and paste the following content:

#!/bin/bash
# Install the CustomResourceDefinition resources separately
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.1/cert-manager.crds.yaml
# Create the namespace for cert-manager
kubectl create namespace cert-manager
# Label the cert-manager namespace to disable resource validation
kubectl label namespace cert-manager cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
# Helm v3+
helm install \
 cert-manager jetstack/cert-manager \
 --namespace cert-manager \
 --version v1.12.1

Step 3

Give it execution permission and run it:

chmod u+x ./create-lets-encrypt-helm.sh
./create-lets-encrypt-helm.sh

This will install the Helm chart and create a namespace cert-manager in your cluster, where the related pods will also be located.

Step 4

Another important step concerns the Service Principal that cert-manager uses to validate your domain and issue a valid certificate. The cert-manager documentation provides a detailed guide for AzureDNS, and you can also integrate it using other options.

In this tutorial, we will be going straight to the commands without explanation:

# Choose a name for the service principal that contacts azure DNS to present
# the challenge.
AZURE_CERT_MANAGER_NEW_SP_NAME=<your-sp-name>

# This is the name of the resource group that you have your dns zone in.
AZURE_DNS_ZONE_RESOURCE_GROUP=<your-dns-resource-group>

# The DNS zone name. It should be something like domain.com or sub.domain.com.
AZURE_DNS_ZONE=<AZURE_DNS_ZONE>
DNS_SP=$(az ad sp create-for-rbac --name $AZURE_CERT_MANAGER_NEW_SP_NAME --output json)
AZURE_CERT_MANAGER_SP_APP_ID=$(echo $DNS_SP | jq -r '.appId')
AZURE_CERT_MANAGER_SP_PASSWORD=$(echo $DNS_SP | jq -r '.password')
AZURE_TENANT_ID=$(echo $DNS_SP | jq -r '.tenant')
AZURE_SUBSCRIPTION_ID=$(az account show --output json | jq -r '.id')

This will create a Service Principal (SP) and load some variables associated with your Azure environment.

Step 5

Next, you will need to delete the Contributor role from your SP, since we want to limit its Contributor access to the Azure DNS zone only. To delete the Contributor role (if needed), you can use the following:

az role assignment delete --assignee $AZURE_CERT_MANAGER_SP_APP_ID --role Contributor

Step 6

Then give limited access to the Azure DNS, using:

DNS_ID=$(az network dns zone show --name $AZURE_DNS_ZONE --resource-group $AZURE_DNS_ZONE_RESOURCE_GROUP --query "id" --output tsv)
az role assignment create --assignee $AZURE_CERT_MANAGER_SP_APP_ID --role "DNS Zone Contributor" --scope $DNS_ID

Step 7

Once executed, your SP should have Contributor access only to the Azure DNS zone. You can now load the Service Principal password into your cluster as a Kubernetes secret named azuredns-config.

kubectl create secret generic azuredns-config --from-literal=client-secret=$AZURE_CERT_MANAGER_SP_PASSWORD
kubectl create secret generic azuredns-config -n rtf --from-literal=client-secret=$AZURE_CERT_MANAGER_SP_PASSWORD
kubectl create secret generic azuredns-config -n cert-manager --from-literal=client-secret=$AZURE_CERT_MANAGER_SP_PASSWORD

Before the next step, it is a good idea to run the following commands to print the required variables and take note of their values:

echo "AZURE_CERT_MANAGER_SP_APP_ID:$AZURE_CERT_MANAGER_SP_APP_ID"
echo "AZURE_CERT_MANAGER_SP_PASSWORD:$AZURE_CERT_MANAGER_SP_PASSWORD"
echo "AZURE_SUBSCRIPTION_ID:$AZURE_SUBSCRIPTION_ID"
echo "AZURE_TENANT_ID:$AZURE_TENANT_ID"
echo "AZURE_DNS_ZONE:$AZURE_DNS_ZONE"
echo "AZURE_DNS_ZONE_RESOURCE_GROUP:$AZURE_DNS_ZONE_RESOURCE_GROUP"
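Since an empty variable at this point silently produces a broken issuer configuration, you can fail fast before continuing. The helper name check_vars below is mine, not part of the cert-manager guide; it is a sketch that reports which of the named variables are empty or unset:

```shell
# Prints which of the named variables are empty or unset.
check_vars() {
  missing=""
  for v in "$@"; do
    # eval-based indirection keeps this POSIX-shell compatible.
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      missing="$missing $v"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing"
    return 1
  fi
  echo "All variables set"
}

check_vars AZURE_CERT_MANAGER_SP_APP_ID AZURE_SUBSCRIPTION_ID \
  AZURE_TENANT_ID AZURE_DNS_ZONE AZURE_DNS_ZONE_RESOURCE_GROUP || true
```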

Step 8

Once you have the Helm chart for cert-manager in place and the values for the variables, paste the following content into a new cert-issuer.yaml file and update the placeholder fields with the proper information:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: guilherme.machado@apipeople.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-secret
    solvers:
      - dns01:
          azureDNS:
            clientID: <AZURE_CERT_MANAGER_SP_APP_ID>
            clientSecretSecretRef:
              name: azuredns-config
              key: client-secret
            subscriptionID: <AZURE_SUBSCRIPTION_ID>
            tenantID: <AZURE_TENANT_ID>
            resourceGroupName: <AZURE_DNS_ZONE_RESOURCE_GROUP>
            hostedZoneName: <AZURE_DNS_ZONE>
            environment: AzurePublicCloud

Apply the Kubernetes configuration for that:

kubectl apply -f cert-issuer.yaml

This will create a cluster-scoped (non-namespaced) resource that makes cert-manager the authority representing Let’s Encrypt. The Ingress Controller will later use it to request the certificate.
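In this tutorial the certificate is requested implicitly through an annotation on the Ingress resource. Alternatively, cert-manager can be driven by an explicit Certificate resource referencing the same ClusterIssuer; the sketch below assumes the resource name rtf-tls and the rtf namespace, which are illustrative choices:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: rtf-tls
  namespace: rtf
spec:
  secretName: ingress-rtf-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - <YOUR_HOSTNAME>
```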

Deploy an ingress controller (AGIC)

The following configuration describes the Kubernetes Ingress resource used by AGIC at this point.

Step 1

Create a file named ingress-tls.yaml and paste the content, replacing the placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress-template
  namespace: rtf
  annotations:
    kubernetes.io/ingress.class: rtf-azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/backend-path-prefix: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    # In our case, the hostname will be rtf.azure.apipeople.com
    # The secret doesn't exist yet. It will be created by the ingress controller
    - <YOUR_HOSTNAME>
    secretName: ingress-rtf-tls
  rules:
  - host: <YOUR_HOSTNAME>
    http:
      paths:
      - path: /app-name/*
        pathType: Prefix
        backend:
          service:
            name: service
            port:
              number: 80

Important notes about the configuration:

  • The Ingress class needs to be prefixed with rtf-. RTF requires this to intercept the information needed by Anypoint Runtime Manager.
  • The health probe status codes include a workaround related to the API deployment status code. By default, the application gateway creates a health probe that queries the API base path /, to which our API returns 404. However, the API only returns 404 once it has come up; before that, the status code is 502. So, in this series of blog posts, we use this workaround to validate that the API is up and running. This can be fixed properly later by creating a dedicated health endpoint in your API and customizing the Ingress Controller to use it, for example:
appgw.ingress.kubernetes.io/health-probe-path: "/healthz"
  • The app-name in your path is a reserved name in MuleSoft, and it will be overwritten with your application name by RTF.
  • The port number is 80 for an internal HTTP endpoint.
  • The reference to the previously created cluster issuer is:
cert-manager.io/cluster-issuer: letsencrypt-prod
  • The secret doesn’t need to exist beforehand. The cert-manager will create it:
secretName: ingress-rtf-tls
  • Apply the configuration with kubectl:
kubectl apply -f ingress-tls.yaml
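To avoid hand-editing the file, the <YOUR_HOSTNAME> placeholder can be substituted at apply time. The helper name render_ingress below is my own, and the hostname shown is the tutorial's example; this is a sketch, not part of the official instructions:

```shell
# Substitutes the <YOUR_HOSTNAME> placeholder in an ingress template file
# and writes the rendered manifest to stdout.
render_ingress() {
  template="$1"
  hostname="$2"
  sed "s/<YOUR_HOSTNAME>/${hostname}/g" "$template"
}

# Usage:
# render_ingress ingress-tls.yaml rtf.azure.apipeople.com | kubectl apply -f -
```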

Step 2

Once you apply the Ingress configuration, RTF will intercept the changes and update the Inbound Traffic URL with the host you configured in your Ingress resource.

RTF p2 8 API People

Configure DNS with AGIC public IP

Once the gateway has been successfully deployed to Azure, you must configure your App GW Public IP in your DNS. In this tutorial, we are using Azure to manage the DNS zone.

Step 1

Search for application gateways in the Azure search bar.

RTF p2 9 API People

This will open a list of application gateways. Search it for the application gateway used by your AKS cluster. The public IP is already shown there, but we will check the IP configured under Frontend IP configurations.

RTF p2 10 API People

Step 2

Inside the application gateway overview page, we can click on Settings → Frontend IP configurations.

RTF p2 11 API People

This will show the IPs configured for our application gateway and whether they are configured in the associated listener.

RTF p2 12 API People

Step 3

Now that you have the IP address, search the top bar for Azure DNS, open it, and go to your DNS zone to add a new + Record Set.

RTF p2 13 API People

Step 4

This will open a popup window where you can add the subdomain you used in your Ingress Controller and the Public IP address, like the following:

RTF p2 14 API People

This will add another line to your DNS configuration list.

Deploy a sample app to your cluster

Step 1

To deploy a Mule app to your Kubernetes cluster, go to Runtime Manager → Select the Environment (in our case RTFDEMO) → Applications → Deploy application.

RTF p2 15 API People

This will open the Deploy Application screen, where you must first fill Application Name and change the Deployment Target to your RTF cluster.

RTF p2 16 API People

Step 2

Check the Ingress tab and verify that app-name in the Public endpoint was correctly replaced with the application name you entered at the top.

RTF p2 17 API People

Then click Choose file on the right side of Application File and select Import file from Exchange.

RTF p2 18 API People

Step 3

On the popup window, select Example, and search for hello. This will give you a list of matching names. In this tutorial, we are looking for a Hello World sample application. Once selected, click the Select button.

RTF p2 19 API People

Step 4

The application will be loaded, and you can now click the Deploy Application button.

RTF p2 20 API People

It might take a while for the application to come up. In our case, the sample request is:

# Change the URL to match your application
curl --location 'https://rtf.azure.apipeople.com/hello-app/helloWorld'

And the response for that should be:

Hello World!
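Since the app may take a while to come up, the curl call above can be wrapped in a bounded retry loop. The helper name wait_until and the WAIT_ATTEMPTS/WAIT_DELAY variables below are my own naming, not MuleSoft tooling; treat this as a sketch:

```shell
# Runs the given command repeatedly until it succeeds or the retry budget
# (WAIT_ATTEMPTS tries, default 30, WAIT_DELAY seconds apart, default 5)
# runs out.
wait_until() {
  attempts="${WAIT_ATTEMPTS:-30}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      echo "up after $i attempt(s)"
      return 0
    fi
    sleep "${WAIT_DELAY:-5}"
    i=$((i + 1))
  done
  echo "gave up after $attempts attempt(s)"
  return 1
}

# Usage against the hostname from this tutorial:
# wait_until curl -sf https://rtf.azure.apipeople.com/hello-app/helloWorld
```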

This concludes this part of the tutorial. For the next or the previous posts of this series, check ‘More in this Series’ below.

Important notes

  • When executing the AGIC installation steps, it’s common to face permission issues, especially when following a brownfield AGIC installation with limited permissions in your environment.
  • For AGIC with the greenfield setup, you will not have direct access to change the log level; you would need to download and customize the helm-config.yaml from the cluster, which is out of scope for this blog post. Two checks might be useful if changes to your Ingress Controller are not taking effect:

First, you can check the Activity log for your application gateway to see whether the Service Principal has been used to change the application gateway.

RTF p2 21 1 API People

Second, you can check the AGIC pod in your AKS cluster. The AGIC will have a pod in your cluster to map the changes from your Ingress Controller to your Azure Application Gateway. The pod name will be ingress-appgw-<something>. The Events or Live logs section inside the pod will contain errors if the link between the cluster and the application gateway doesn’t work.

RTF p2 22 API People

More in this series
