Cloud Foundry Setup Using Korifi

Karanbir Singh
8 min read · Dec 17, 2023

Korifi is a very interesting Cloud Foundry community initiative. It runs as an abstraction on top of Kubernetes and lets developers focus on building applications.

  • It relies on kpack for building & packaging the applications.
  • It makes use of Contour for ingress.
  • It leverages cert-manager to generate and provision the certificates (I am doing that via Let’s Encrypt).

In this blog post, I am using an Ubuntu VM to stand up a Korifi environment; the main components and how they fit together are covered in the steps below.

Prerequisites

  • a VM with enough memory (Ubuntu in my case).
  • a registered domain (I suggest getting one from Cloudflare; that is what I will be using). If running locally, this can be skipped.
  • knowledge of and confidence with DNS.
  • a Docker Pro account or a custom Docker registry.
  • a basic understanding of ports and services.
  • basic knowledge of Kubernetes is good to have.
  • zeal, confidence & patience 🙄.

The steps to install Korifi are described below:-

Step 0 — Set up the prerequisite tools.

Install Apache2 Utils using the command below:-

sudo apt-get install apache2-utils

Install K3S Kubernetes:-

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san <replace with Public IP of VM>" sh -s - --disable traefik --write-kubeconfig-mode 644

# set the below variable to point to the k3s KUBECONFIG

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
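
A quick sanity check (not part of the original steps, just a handy verification): the single k3s node should report Ready and the system pods should come up:-

# the node should be in Ready state
kubectl get nodes

# system pods should be Running or Completed
kubectl get pods -A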

Install Helm:-

# download the helm installation script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

# change permissions of the script
chmod 700 get_helm.sh

# run the script
./get_helm.sh

# validate the installation; this prints the installed Helm version
helm version

Install reflector:-

Reflector is used to copy (reflect) secrets such as credentials across Kubernetes namespaces.

# add repo for the emberstack (required for the credential reflection)
helm repo add emberstack https://emberstack.github.io/helm-charts

# update helm repos
helm repo update

# install it
helm upgrade --install reflector emberstack/reflector
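
To confirm the installation, you can check the Helm release and its pod; this is just a quick sketch, assuming the release was installed into the current (default) namespace as above:-

# the reflector release should show a deployed status
helm list

# and its pod should be Running
kubectl get pods | grep -i reflector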

Step 1 — install kpack

# replace the version of the release accordingly
kubectl apply -f https://github.com/buildpacks-community/kpack/releases/download/v0.12.3/release-0.12.3.yaml

# validate the installation using below command
kubectl get pods --namespace kpack --watch
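
Once the pods are up, the kpack CRDs should also be registered; an optional way to verify:-

# lists the kpack resource types (builds, images, builders, etc.)
kubectl api-resources --api-group kpack.io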

Step 2 — install cert-manager

# replace the version v1.13.2, with anything recent in the future.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml

# validate the installation using the command
kubectl get pods --namespace cert-manager

Step 3 — install contour

# add the Bitnami helm charts repo
helm repo add bitnami https://charts.bitnami.com/bitnami

# install contour
helm install my-release bitnami/contour --namespace projectcontour --create-namespace

# validate the installation
kubectl -n projectcontour get po,svc
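
The Envoy service created by Contour is of type LoadBalancer, and with k3s' built-in service load balancer it should pick up the VM's IP; this is the address the DNS A records in Step 5 point at. A sketch of how to read it, assuming the my-release release name used above (the exact service name may differ):-

# print the external IP of the Envoy load balancer service
kubectl -n projectcontour get svc my-release-contour-envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}'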

Step 4 — create the korifi and cf namespaces

  • create a namespace for korifi by using the below yaml:-
apiVersion: v1
kind: Namespace
metadata:
  name: korifi
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
  • create the cf namespace using a similar yaml (both manifests can be applied together as shown after this list):-
apiVersion: v1
kind: Namespace
metadata:
  name: cf
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
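
Assuming both manifests are saved into a single file (namespaces.yaml is just an example name), they can be applied and verified like this:-

# create the korifi and cf namespaces
kubectl apply -f namespaces.yaml

# confirm both exist
kubectl get namespace korifi cf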

Step 5 — DNS and cert management

We need separate certificates for the API endpoint and for the applications/workloads.

The application certificate is a wildcard certificate. The DNS records required, for example:-

  • app.cf.domain.com — an A record pointing to the IP address of the VM
  • *.app.cf.domain.com — a CNAME record pointing to app.cf.domain.com

A second certificate covers the API endpoint. A wildcard certificate was not strictly required there, but I used one during my installation; the DNS records required accordingly are:-

  • api.cf.domain.com — an A record pointing to the IP address of the VM
  • *.api.cf.domain.com — a CNAME record pointing to api.cf.domain.com
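
Once the records are created, it is worth confirming they resolve before requesting certificates; replace domain.com with your own domain:-

# both should print the VM's IP address
dig +short api.cf.domain.com
dig +short app.cf.domain.com

# wildcard entries resolve via the CNAME to the same IP
dig +short test.app.cf.domain.com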

The following steps are scoped to a Cloudflare-based domain & DNS.

  • Manifest YAML for storing the Cloudflare API token:-
---
apiVersion: v1
kind: Secret
metadata:
  name: cflare-token-secret
  namespace: cert-manager
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "default,cf,korifi" # Control destination namespaces
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true" # Auto create reflection for matching namespaces
    reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "default,cf,korifi"
type: Opaque
stringData:
  cflare-token: <replace with Cloudflare token>
  • Manifest YAML for the Let’s Encrypt ACME-based cert resolver/provisioner (ClusterIssuer):-
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: le-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <email id> # to receive notifications relating to the certificate events
    privateKeySecretRef:
      name: le-prod
    solvers:
    - dns01:
        cloudflare:
          email: <replace with email id of the Cloudflare account>
          apiTokenSecretRef:
            name: cflare-token-secret
            key: cflare-token
      selector:
        dnsZones:
        - "domain.com" # replace with the base domain name accordingly
  • Manifest for the API certificate:-
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: korifi-api-ingress-cert # do not change this property name, needed by korifi
  namespace: cert-manager
spec:
  secretName: korifi-api-ingress-cert # do not change this property name, needed by korifi
  issuerRef:
    name: le-prod
    kind: ClusterIssuer
  commonName: "api.cf.<domain.com>" # replace the <domain.com> as per your own setup
  dnsNames:
  - "api.cf.<domain.com>" # replace the <domain.com> as per your own setup
  - "*.api.cf.<domain.com>" # replace the <domain.com> as per your own setup
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "default,korifi" # Control destination namespaces
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true" # Auto create reflection for matching namespaces
      reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "default,korifi"
  • Manifest for the APP (workload) certificate:-
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: korifi-workloads-ingress-cert # do not change this property name, needed by korifi
  namespace: cert-manager
spec:
  secretName: korifi-workloads-ingress-cert # do not change this property name, needed by korifi
  issuerRef:
    name: le-prod
    kind: ClusterIssuer
  commonName: "app.cf.<domain.com>" # replace the <domain.com> as per your own setup
  dnsNames:
  - "app.cf.<domain.com>" # replace the <domain.com> as per your own setup
  - "*.app.cf.<domain.com>" # replace the <domain.com> as per your own setup
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "default,korifi" # Control destination namespaces
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true" # Auto create reflection for matching namespaces
      reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "default,korifi"
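
With the manifests above saved to files (the file names below are only examples), apply them in the same order: the token secret first, then the issuer, then the two certificates:-

kubectl apply -f cflare-token-secret.yaml
kubectl apply -f le-prod-clusterissuer.yaml
kubectl apply -f korifi-api-cert.yaml
kubectl apply -f korifi-workloads-cert.yaml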

After applying the above manifests in order:-

# check that the cert-manager pods are all Running
kubectl get pods --namespace cert-manager

# afterwards, follow the cert-manager logs (replace the pod name with your own)
kubectl logs -n cert-manager -f cert-manager-57688f5dc6-kd5qc

If all is good, the logs should show the DNS challenges being solved and the certificates being issued without errors.

# to inspect the details of the certificates
kubectl describe certificate --all-namespaces

The output should list both certificates with their Ready condition set to True.

Step 6 — Set up Docker image registry credentials

kubectl --namespace cf create secret docker-registry image-registry-credentials \
--docker-username="<your-container-registry-username>" \
--docker-password="<your-container-registry-password>" \
--docker-server="<your-container-registry-hostname-and-port>"

--docker-server is not required when using the Docker Hub registry.

Refer to the link here for other options & details.
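
A quick check that the secret landed in the cf namespace:-

kubectl --namespace cf get secret image-registry-credentials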

Step 7 — Set up the admin user, named cf-admin here (it can be anything you want).

# 1. create the user's private key; replace 2048 with 3072 or 4096 for better security
openssl genrsa -out cf-admin.key 2048
# 2. create a CSR for the user cf-admin
openssl req -new -key cf-admin.key -out cf-admin.csr -subj "/CN=cf-admin"
# 3. convert csr to base64
cat cf-admin.csr | base64 | tr -d "\n"
# 4. create a yaml file for the certificate signing request with content like below
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: cf-admin
spec:
  request: LS0t... # <replace LS0t... with the output from the previous command>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 604800 # this is 7 days
  usages:
  - client auth
# 5. apply the manifest using the file you created in the previous step
kubectl apply -f <the file you created>
# 6. validate the CSR if it got created successfully.
kubectl get csr
# 7. approve the certificate request
kubectl certificate approve cf-admin
# 8. save the certificate to a file cf-admin.crt
kubectl get csr cf-admin -o jsonpath='{.status.certificate}'| base64 -d > cf-admin.crt
# 9. embed the certificate and key in the kubeconfig.
kubectl config set-credentials cf-admin \
--client-key=cf-admin.key \
--client-certificate=cf-admin.crt \
--embed-certs=true
# 10. create a context for cf-admin ("default" is the cluster name in the k3s kubeconfig; adjust if yours differs)
kubectl config set-context cf-admin --cluster=default --user=cf-admin
# 11. create developer role for the cf-admin.
kubectl create role developer --verb=create --verb=get --verb=list --verb=update --verb=delete --resource=pods
# 12. bind the role to the cf-admin.
kubectl create rolebinding developer-binding-cf-admin --role=developer --user=cf-admin
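Not part of the original steps, but a quick way to confirm the new user works before installing Korifi (this assumes the role and rolebinding above were created in the default namespace):-

# switch to the cf-admin context and check the granted permissions
kubectl config use-context cf-admin
kubectl auth can-i list pods --namespace default # should print "yes"

# switch back to the admin context ("default" in the k3s kubeconfig) for the next step
kubectl config use-context default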

Step 8 — Install Korifi (the important and final one!!)

# replace the <domain.com> with your actual domain.
# replace the <docker_registry> with your actual registry.

# the korifi and cf namespaces were created in Step 4; cf-admin is the admin user created in Step 7
helm install korifi https://github.com/cloudfoundry/korifi/releases/download/v0.10.0/korifi-0.10.0.tgz \
  --namespace="korifi" \
  --set=rootNamespace="cf" \
  --set=adminUserName="cf-admin" \
  --set=api.apiServer.url="api.cf.<domain.com>" \
  --set=defaultAppDomainName="app.cf.<domain.com>" \
  --set=containerRepositoryPrefix=<docker_registry>/ \
  --set=kpackImageBuilder.builderRepository=<docker_registry>/kpack-builder \
  --wait
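
The --wait flag blocks until the release is ready; progress can be watched from another shell:-

# all korifi components should eventually reach the Running state
kubectl get pods --namespace korifi --watch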

Post Installation & Validation

Copy the /etc/rancher/k3s/k3s.yaml from the VM to your client machine.

Set the environment variable KUBECONFIG=<base location>/k3s.yaml.

Change the cluster.server address in the copied kubeconfig to point at your VM's IP address.
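
For example, something along these lines (the user name and file paths are placeholders for my setup):-

# copy the kubeconfig from the VM
scp ubuntu@<vm-public-ip>:/etc/rancher/k3s/k3s.yaml ~/k3s.yaml

# point the tooling at it
export KUBECONFIG=~/k3s.yaml

# replace the default 127.0.0.1 server address with the VM's IP
sed -i 's/127.0.0.1/<vm-public-ip>/' ~/k3s.yaml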

Install the cf cli — follow this link for installation steps & details.

# set the API endpoint
cf api https://api.cf.<domain.com> # replace the <domain.com> with actual one
# initiate the login
cf login
# create the cf org
cf create-org <org_name> # replace <org_name> with the name of the org you want to create
# create a space in the <org_name>
cf create-space -o <org_name> <space_name> # replace the <space_name> accordingly
# set cf target to an org
cf target -o <org_name>


# push the app to Cloud Foundry
cd <directory of the project> # change the <directory of the project> to the project's directory
# push the app
cf push <app_name> # replace the <app_name> accordingly

The app should now be available at https://<app_name>.app.cf.<domain.com>/
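
To list the deployed apps along with their state and routes:-

cf apps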
