Tuesday 2 July 2024

Introduction to Helm

 


What is Helm?

  • Package management tool for deploying/installing applications to Kubernetes clusters
  • used for automating the deployment, scaling, and management of containerized applications
  • helps manage Kubernetes applications
  • makes it easier to deploy applications or services to a Kubernetes cluster in a highly repeatable way, across different scenarios
  • tool for managing Charts
  • open-source platform 
  • streamlines the management of Kubernetes applications by providing a robust set of tools for packaging, deploying, and managing application lifecycle.

Helm Features


Helm simplifies the process of defining, installing, and upgrading even the most complex Kubernetes applications by providing the following features:
  • Package Management:
    • Helm packages are called charts. A Helm chart is a collection of files that describe a related set of Kubernetes resources.
    • Charts can be shared and reused, similar to how software libraries or packages are used in other programming environments.
  • Application Deployment:
    • Helm allows users to deploy applications in Kubernetes using a single command. This reduces the complexity of managing multiple YAML configuration files.
  • Version Control:
    • Helm charts can be versioned, enabling the ability to roll back to previous versions of a deployment. This is useful for managing updates and dealing with deployment issues.
  • Templating:
    • Helm uses a templating engine to manage complex Kubernetes manifests. Templates enable the dynamic generation of Kubernetes resource definitions based on input parameters, allowing more flexibility and reusability.
  • Dependency Management:
    • Charts can depend on other charts. Helm manages these dependencies, simplifying the process of building complex applications from modular components (see the Chart.yaml sketch after this list).
  • Release Management:
    • Helm manages releases of applications, making it easy to track and manage application deployments in different environments (e.g., development, staging, production).
  • Repository Management:
    • Helm charts can be stored in repositories. Helm provides commands to manage repositories, search for charts, and install them from repositories.
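
For illustration, the Dependency Management feature boils down to a dependencies list in Chart.yaml plus one CLI command; the chart name, version and repository URL below are placeholders, not real entries:

# Chart.yaml (fragment)
dependencies:
  - name: postgresql
    version: "12.1.0"
    repository: "https://charts.example.com"

$ helm dependency update ./my-chart/    # downloads the declared charts into the charts/ directory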

Common Use Cases for Helm


  • Deploying Complex Applications
    • Helm is used to deploy applications with many Kubernetes resources, such as web applications with databases, caching layers, and background jobs.
  • Managing Configuration
    • Helm allows different configurations for different environments (e.g., production vs. development) to be managed easily.
  • Continuous Deployment Pipelines
    • Helm is often integrated into CI/CD pipelines to automate the deployment of applications.
  • Microservices
    • Helm simplifies the deployment and management of microservices by packaging each service as a chart and managing the dependencies between them.

Helm Installation



What problem does Helm solve?

In Deploying Microservices Application on the Minikube Kubernetes cluster I described how to deploy a containerised application on the (Minikube) Kubernetes cluster. kubernetes-demo/projects/kodekloud-voting-app/voting-app-via-deployments at main · BojanKomazec/kubernetes-demo contains all the Kubernetes manifests, as well as instructions on how to deploy each of them. We can see two problems here:

1) We have a repetitive action of executing the kubectl create command against multiple .yaml files. This can be automated via a script, but if we add/remove/move/rename any of the files, we'll also need to edit the script. This is not ideal, as any manual task is prone to errors.

2) Parameters of Kubernetes objects we want to create are all hardcoded in these yaml files. Labels, number of replicas, security credentials, networking parameters (ports, domain names...), Docker image names and versions...are all parameters that we might want to change for different environments (e.g. dev, stage, beta, prod). In this example we have 5 deployments and 4 services, 9 yaml files in total. If we'd like to have them different for all 4 environments, that means we'd need to maintain 9 x 4 = 36 different yaml files! That would be a proper maintenance hell. 

Helm solves both problems by extracting parameter values into separate files (e.g. one values yaml file per environment) and then injecting these values into templated Kubernetes configuration/manifest files at installation time, where a single helm CLI command replaces multiple kubectl create commands.
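
A minimal sketch of that idea (file names and values below are made up for illustration): the template references a value, and each environment supplies its own values file at install time.

# templates/deployment.yaml (fragment)
spec:
  replicas: {{ .Values.replicaCount }}

# values-dev.yaml
replicaCount: 1

# values-prod.yaml
replicaCount: 5

$ helm install my-release ./my-chart/ -f values-prod.yaml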

We can run application containers using the Kubernetes command line (kubectl), but the easiest way to run workloads in Kubernetes is by using ready-made Helm charts.

Helm bundles all these values and template files into a package called a chart.


source: How To Create Helm Chart [Comprehensive Beginners Guide]


 

What is a Helm Chart?

  • Helm package which contains all of the resource definitions necessary to run an application, tool, or service (e.g. Karpenter) inside of a Kubernetes cluster
  • package that contains all the information that Kubernetes needs to know for managing a specific application within the cluster
  • package of pre-configured Kubernetes resources
  • indicates to Kubernetes how to perform the application deployment and how to manage the application's resources in the cluster
  • help define, install and upgrade Kubernetes applications
  • easy to create, version, share, and publish
  • expose dozens of useful configurations and automatically set up complex resources
  • a collection of files that describe a related set of Kubernetes resources
  • provides streamlined package management functions
  • describes how to manage a specific application on Kubernetes
  • consists of metadata that describes the application, plus the infrastructure needed to operate it in terms of the standard Kubernetes primitives. Each chart references one or more (typically Docker-compatible) container images that contain the application code to be run.
  • A Helm chart contains at least these two elements:
    • A description of the package (Chart.yaml)
    • One or more templates, which contain Kubernetes manifest files.
  • templates that describe how to install, configure, and run a set of Kubernetes resources
  • written in YAML, just like Kubernetes manifests, but they are organized in a way that makes them easy to manage, version, and share
 
A Release is an instance of a chart running in a Kubernetes cluster.

Let's say we have the following application stack (a hypothetical values.yaml for it is sketched after this list):
  • a Node.js application which needs to be highly available
    • It therefore runs as two replicas (pods) which handle all incoming requests. We have a deployment definition file for it.
  • MongoDB as the database, which each of these replicas communicates with
  • NodePort service as a way to access the application
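
A hypothetical values.yaml for this stack could extract the parameters we'd want to vary per environment (keys and values below are purely illustrative):

# values.yaml (illustrative)
app:
  image: my-node-app:1.0.0
  replicaCount: 2          # two pods for high availability
mongodb:
  image: mongo:6.0
service:
  type: NodePort
  nodePort: 30080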

How to convert Kubernetes configuration YAML files into a Helm chart?

Steps:
  • Create a Chart.yaml file that describes the chart and its dependencies.
    • This file should include the chart name, version, description, and other metadata
    • We can also specify any dependencies that the chart needs to function properly
  • Create a templates directory and move the YAML files into that directory
    • This directory will contain the templates for the Kubernetes resources that the chart will install
  • Update the YAML files to use Helm's templating syntax, replacing any hardcoded values with variables that will be populated at install time (see the sketch after this list). Helm uses Go templates for its templating syntax, which allows you to define variables and use control structures like loops and conditionals.
  • Package the chart using the helm package command
    • This will create a .tgz file that contains the chart metadata and templates.
  • Deploy it to your Kubernetes cluster using the helm install command
    • This will create the Kubernetes resources defined in the chart and set any values you specified at install time.
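
To make the templating step concrete, here is a before/after sketch of a single container definition (the value names are illustrative, not prescribed by Helm):

# Before - hardcoded deployment.yaml (fragment):
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0

# After - templated templates/deployment.yaml (fragment):
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml
image:
  repository: my-app
  tag: "1.0.0"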


Helm | Helm Create - This command creates a chart directory along with the common files and directories used in a chart.

$ helm create my-app
Creating my-app


This is basically a Helm chart boilerplate:

$ tree
.
├── my-app
│   ├── charts 
│   ├── Chart.yaml
│   ├── templates
│   │   ├── deployment.yaml
│   │   ├── _helpers.tpl
│   │   ├── hpa.yaml
│   │   ├── ingress.yaml
│   │   ├── NOTES.txt
│   │   ├── serviceaccount.yaml
│   │   ├── service.yaml
│   │   └── tests
│   │       └── test-connection.yaml
│   └── values.yaml


4 directories, 11 files


charts directory contains chart dependencies.

Chart.yaml contains chart-level metadata such as the name, version and description:

$ cat my-app/Chart.yaml 
apiVersion: v2
name: my-app
description: A Helm chart for Kubernetes

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"


templates directory is where we'll put all the *.yaml files for Kubernetes. By default, Helm creates these files:
  • deployment.yaml (kind: Deployment)
  • hpa.yaml (kind: HorizontalPodAutoscaler)
  • ingress.yaml (kind: Ingress)
  • service.yaml (kind: Service) 
  • serviceaccount.yaml (kind: ServiceAccount)
Helm uses the Go template language to customize these files. 

All the files in this directory are "skeletons" which are filled with the variables from values.yaml when we deploy our Helm chart. 


source: How To Create Helm Chart [Comprehensive Beginners Guide]
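
For example, the generated values.yaml contains the values the default templates reference (abridged; exact contents depend on the Helm version):

# my-app/values.yaml (abridged)
replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""                  # empty means: default to the chart's appVersion

service:
  type: ClusterIP
  port: 80

templates/deployment.yaml then refers to them with expressions such as replicas: {{ .Values.replicaCount }}.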


File _helpers.tpl contains our custom helper functions (named templates) used for variable calculation.
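
A minimal sketch of such a helper (simplified compared to the one helm create generates):

{{/* templates/_helpers.tpl (simplified) */}}
{{- define "my-app.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

A template can then call it with {{ include "my-app.fullname" . }}, which is how names like release-name-my-app in the rendered output below are produced.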


Linting Helm chart



$ helm lint ./my-app/
==> Linting ./my-app/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed


How to validate that values are getting substituted in the templates?



$ helm template ./my-app/
---
# Source: my-app/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-my-app
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-my-app
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-my-app
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        helm.sh/chart: my-app-0.1.0
        app.kubernetes.io/name: my-app
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
    spec:
      serviceAccountName: release-name-my-app
      securityContext:
        {}
      containers:
        - name: my-app
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: my-app/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "release-name-my-app-test-connection"
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['release-name-my-app:80']
  restartPolicy: Never
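
To also check that an override is picked up, we can render with --set and inspect the output (a sketch; with the default chart autoscaling is disabled, so the replicas field is rendered):

$ helm template my-release ./my-app/ --set replicaCount=3 | grep replicas
  replicas: 3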


How to perform a Kubernetes cluster deployment dry run



We can use a local Minikube cluster or a cluster in the cloud (e.g. AWS EKS).

Minikube


Start the Minikube cluster:


$ minikube start


Dry run deployment:


$ helm install --dry-run my-release ./my-app/
NAME: my-release
LAST DEPLOYED: Wed Jul  3 00:57:12 2024
NAMESPACE: default
STATUS: pending-install
REVISION: 1
HOOKS:
---
# Source: my-app/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "my-release-my-app-test-connection"
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['my-release-my-app:80']
  restartPolicy: Never
MANIFEST:
---
# Source: my-app/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-release-my-app
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: my-app/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-my-app
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: my-release
---
# Source: my-app/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-my-app
  labels:
    helm.sh/chart: my-app-0.1.0
    app.kubernetes.io/name: my-app
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
      app.kubernetes.io/instance: my-release
  template:
    metadata:
      labels:
        helm.sh/chart: my-app-0.1.0
        app.kubernetes.io/name: my-app
        app.kubernetes.io/instance: my-release
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
    spec:
      serviceAccountName: my-release-my-app
      securityContext:
        {}
      containers:
        - name: my-app
          securityContext:
            {}
          image: "nginx:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=my-app,app.kubernetes.io/instance=my-release" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT


Helm chart deployment (release)


To release a new version of our dockerized application we want to perform a Helm chart deployment:


$ helm install my-release ./my-app/

my-release is the name of the release; it can be any name, e.g. a version-like string such as v2.3.1-beta.
It is followed by the path to the Helm chart.


When the chart is being deployed Helm:
  • reads the chart and configuration values from the values.yaml file
  • generates the manifest files
  • sends these files to the Kubernetes API server and then Kubernetes creates the requested resources in the cluster
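
Once the release is installed, it can be inspected and managed with the standard Helm release commands (a sketch, reusing the release name from above):

$ helm list                                    # list releases in the current namespace
$ helm status my-release                       # show release status and NOTES
$ helm upgrade my-release ./my-app/ --set replicaCount=2    # upgrade with a changed value
$ helm rollback my-release 1                   # roll back to revision 1
$ helm uninstall my-release                    # delete all resources created by the release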

Helm Chart Repositories


Chart repository:
  • a location where packaged charts can be stored and shared.
  • consists of:
    • a special file called index.yaml (index file) which contains an index of all of the charts in the repository
    • packaged charts (optionally, as they can reside at some other web server)
  • can be any HTTP server that can serve YAML and tar files and can answer GET requests
  • hosting options: 
    • Google Cloud Storage (GCS) bucket
    • Cloudsmith
    • Amazon S3 bucket
    • JFrog Artifactory
    • GitHub Pages
      • Chart Releaser Action is a GitHub Action workflow to turn a GitHub project into a self-hosted Helm chart repo, using helm/chart-releaser CLI tool
    • GitLab Package Registry
    • create your own ordinary web server
  • place where charts get uploaded to in order for them to be shared
  • Examples:
    • Artifact Hub: a public community hub for finding Helm charts (it indexes many chart repositories).
    • Karpenter Helm Chart repository: https://charts.karpenter.sh/ which is actually https://charts.karpenter.sh/index.yaml

https://charts.karpenter.sh/ (or https://charts.karpenter.sh/index.yaml) returns the index.yaml file, which contains a list of charts hosted by this repository and their download URLs.

apiVersion: v1
entries:
  karpenter: <-- chart name. What follows is the content of the Chart.yaml from .tgz file
  - apiVersion: v2 
    appVersion: 0.16.3
    created: "2022-09-27T14:35:08.091269-07:00"
    description: A Helm chart for Karpenter, an open-source node provisioning project
      built for Kubernetes.
    digest: bd60e546bf25071c64b94928d907baf6713f929e6336f2ed452689ca9f9176b3
    home: https://karpenter.sh/
    icon: https://repository-images.githubusercontent.com/278480393/dab059c8-caa1-4b55-aaa7-3d30e47a5616
    keywords:
    - cluster
    - node
    - scheduler
    - autoscaling
    - lifecycle
    name: karpenter
    sources:
    - https://github.com/aws/karpenter/
    type: application
    urls:
    - karpenter-0.16.3.tgz 
    version: 0.16.3
  - apiVersion: v2 <-- the beginning of the v0.16.2 version object
    appVersion: 0.16.2
    ...
generated: "2022-09-27T14:35:08.060027-07:00" <-- this line indicates that this file is not manually created but is generated

The helm repo index command generates an index file based on a given local directory that contains packaged charts.
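
For example, to publish the my-app chart we could package it and generate the index locally, then upload both files to the hosting of choice (the URL below is a placeholder):

$ helm package ./my-app/                        # creates my-app-0.1.0.tgz
$ mkdir -p charts-repo && mv my-app-0.1.0.tgz charts-repo/
$ helm repo index charts-repo/ --url https://charts.example.com
$ cat charts-repo/index.yaml                    # now lists my-app 0.1.0 and its download url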

It is not required that a chart package be located on the same server as the index.yaml file. Here, however, the urls list contains relative URL paths, which means that the charts are hosted on the same server as the index file. In the example above, the download URL of the v0.16.3 chart is therefore: https://charts.karpenter.sh/karpenter-0.16.3.tgz.

If we download and unpack this helm chart, this is the directory tree of the unarchived chart:

$ tree karpenter-0.16.3/
karpenter-0.16.3/
└── karpenter
    ├── Chart.lock
    ├── Chart.yaml
    ├── crds
    │   ├── karpenter.k8s.aws_awsnodetemplates.yaml
    │   └── karpenter.sh_provisioners.yaml
    ├── README.md
    ├── README.md.gotmpl
    ├── templates
    │   ├── aggregate-clusterrole.yaml
    │   ├── clusterrolebinding.yaml
    │   ├── clusterrole.yaml
    │   ├── configmap-logging.yaml
    │   ├── configmap.yaml
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── poddisruptionbudget.yaml
    │   ├── rolebinding.yaml
    │   ├── role.yaml
    │   ├── secret-webhook-cert.yaml
    │   ├── serviceaccount.yaml
    │   ├── servicemonitor.yaml
    │   ├── service.yaml
    │   └── webhooks.yaml
    └── values.yaml

3 directories, 22 files


$ cat Chart.lock 
dependencies: []
digest: sha256:5595919ac269b4105dd65d20eb27cb271b8976c1d10903e0b504d349df30f017
generated: "2020-12-02T11:48:25.741819-08:00"


$ cat Chart.yaml 
apiVersion: v2
appVersion: 0.16.3
description: A Helm chart for Karpenter, an open-source node provisioning project
  built for Kubernetes.
home: https://karpenter.sh/
icon: https://repository-images.githubusercontent.com/278480393/dab059c8-caa1-4b55-aaa7-3d30e47a5616
keywords:
- cluster
- node
- scheduler
- autoscaling
- lifecycle
name: karpenter
sources:
- https://github.com/aws/karpenter/
type: application
version: 0.16.3


$ cat .helmignore 
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


Local Helm Config and Cache Files


Helm keeps its local state in two directories:
  • ~/.config/helm/ - repository configuration (repositories.yaml); its location can be overridden via the HELM_REPOSITORY_CONFIG environment variable
  • ~/.cache/helm/ - cached repository index files; location overridable via HELM_REPOSITORY_CACHE

helm repo update refreshes the cached index files of all configured repositories.
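
The effective paths can be checked with helm env (output abridged; paths depend on the system):

$ helm env | grep REPOSITORY
HELM_REPOSITORY_CACHE="/home/bojan/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/home/bojan/.config/helm/repositories.yaml"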


$ helm repo

This command consists of multiple subcommands to interact with chart repositories.

It can be used to add, remove, list, and index chart repositories.

Usage:
  helm repo [command]

Available Commands:
  add         add a chart repository
  index       generate an index file given a directory containing packaged charts
  list        list chart repositories
  remove      remove one or more chart repositories
  update      update information of available charts locally from chart repositories

Flags:
  -h, --help   help for repo

Global Flags:
      --burst-limit int                 client-side default throttling limit (default 100)
      --debug                           enable verbose output
      --kube-apiserver string           the address and the port for the Kubernetes API server
      --kube-as-group stringArray       group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --kube-as-user string             username to impersonate for the operation
      --kube-ca-file string             the certificate authority file for the Kubernetes API server connection
      --kube-context string             name of the kubeconfig context to use
      --kube-insecure-skip-tls-verify   if true, the Kubernetes API server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kube-tls-server-name string     server name to use for Kubernetes API server certificate validation. If it is not provided, the hostname used to contact the server is used
      --kube-token string               bearer token used for authentication
      --kubeconfig string               path to the kubeconfig file
  -n, --namespace string                namespace scope for this request
      --qps float32                     queries per second used when communicating with the Kubernetes API, not including bursting
      --registry-config string          path to the registry config file (default "/home/bojan/.config/helm/registry/config.json")
      --repository-cache string         path to the file containing cached repository indexes (default "/home/bojan/.cache/helm/repository")
      --repository-config string        path to the file containing repository names and URLs (default "/home/bojan/.config/helm/repositories.yaml")

Use "helm repo [command] --help" for more information about a command.


$ helm repo list
Error: no repositories to show


$ helm repo add --help
add a chart repository

Usage:
  helm repo add [NAME] [URL] [flags]

Flags:
      --allow-deprecated-repos     by default, this command will not allow adding official repos that have been permanently deleted. This disables that behavior
      --ca-file string             verify certificates of HTTPS-enabled servers using this CA bundle
      --cert-file string           identify HTTPS client using this SSL certificate file
      --force-update               replace (overwrite) the repo if it already exists
  -h, --help                       help for add
      --insecure-skip-tls-verify   skip tls certificate checks for the repository
      --key-file string            identify HTTPS client using this SSL key file
      --no-update                  Ignored. Formerly, it would disabled forced updates. It is deprecated by force-update.
      --pass-credentials           pass credentials to all domains
      --password string            chart repository password
      --password-stdin             read chart repository password from stdin
      --username string            chart repository username

Global Flags:
    ...
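
As a sketch of typical usage, adding the Karpenter repository mentioned earlier and pulling a chart from it could look like this (output abridged; version taken from the index above):

$ helm repo add karpenter https://charts.karpenter.sh
"karpenter" has been added to your repositories

$ helm repo update
$ helm repo list
NAME            URL
karpenter       https://charts.karpenter.sh

$ helm search repo karpenter
$ helm pull karpenter/karpenter --version 0.16.3    # downloads karpenter-0.16.3.tgz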

References:


