Exploring K3s as a lightweight Kubernetes distribution

As mentioned previously, throughout this book, and specifically in this chapter, we will utilize K3s, a lightweight Kubernetes distribution (https://k3s.io/), to run our examples.

K3s is particularly well-suited for scenarios where the full-scale implementation of Kubernetes may be too resource-intensive or complex.

Its lightweight nature makes it ideal for edge computing and IoT scenarios, where resources are often limited, and efficiency is paramount. In these environments, K3s provides the necessary Kubernetes features without the overhead. Additionally, solutions such as vCluster from Loft have leveraged K3s to run Kubernetes within Kubernetes, facilitating multi-tenancy on a host cluster. This approach allows for isolated Kubernetes environments within a single cluster, optimizing resource usage and offering scalability in multi-tenant setups. These use cases highlight K3s’s versatility and efficiency in diverse computing environments. More information about K3s can be found in the official documentation: https://docs.k3s.io/.

Origin of the K3s name

The name K3s, as explained in the official documentation (https://docs.k3s.io/), is derived from the intent to create a Kubernetes installation that’s significantly smaller in memory size. The naming convention follows that of Kubernetes, often abbreviated as K8s, which consists of 10 letters. Halving this led to K3s, which was stylized to represent a more compact version of Kubernetes. Unlike Kubernetes, K3s does not have an expanded form, and its pronunciation is not officially defined. This naming reflects the goal of a lighter, more efficient version of Kubernetes.

K3s simplifies the process of deploying a Kubernetes cluster, making it accessible even for small-scale operations or development purposes. By removing non-essential components and using lighter-weight alternatives, K3s significantly reduces the size and complexity of Kubernetes while maintaining its core functionalities.

K3s maintains compatibility with the larger Kubernetes ecosystem, ensuring that tools and applications designed for Kubernetes can generally be used with K3s as well.

One of the key features of K3s is its single binary installation, which includes both the Kubernetes server and agent, simplifying the setup process. This makes it an ideal choice for developers who want to quickly set up a Kubernetes environment for testing or development without the overhead of a full Kubernetes installation.

K3s also offers flexible networking and storage options, catering to a wide range of use cases – from small local clusters to larger, more complex environments. Its versatility and ease of use make it a popular choice for those looking to explore Kubernetes without the need for extensive infrastructure.

Lastly, K3s’s lightweight nature and efficiency make it a suitable choice for continuous integration/continuous deployment (CI/CD) pipelines, allowing for faster build and test cycles in environments where resources are a consideration. In Chapter 5, we’ll learn how to use K3s to run Kubernetes on Kubernetes.

Local cluster setup

Before diving into our first deployment example, it’s essential to set up the environment and understand how Kubernetes, particularly K3s, facilitates our deployments. K3s is primarily designed for Linux environments, so make sure you have a modern Linux system such as Red Hat Enterprise Linux, CentOS, Fedora, Ubuntu/Debian, or even Raspberry Pi. If you’re a Windows user, you can still engage with K3s by setting up WSL or running a Linux virtual machine (VM) through VirtualBox. These setups will prepare you to harness the power of Kubernetes for your deployments.

Choosing your local Kubernetes environment – K3s, Minikube, and alternatives

In this chapter, we have chosen to use K3s due to its lightweight nature and ease of setup, which makes it particularly suitable for developing and testing Kubernetes environments. However, there are several other alternatives for setting up local Kubernetes clusters that cater to different needs and platforms. For instance, Colima (https://github.com/abiosoft/colima) is an excellent choice for macOS users, offering a Docker and Kubernetes environment directly on macOS with minimal configuration. Minikube (https://minikube.sigs.k8s.io) is another popular option that runs on Windows, macOS, and Linux and is ideal for those looking to simulate a Kubernetes cluster in a single node where they can experiment and test Kubernetes applications.
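
If you want to try Minikube instead, a single-node cluster can usually be brought up with a couple of commands. The following is only a sketch – the appropriate --driver value (docker, hyperkit, virtualbox, and so on) depends on your platform and what you have installed:

    $ minikube start --driver=docker
    $ kubectl get nodes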

While K3s is our choice for this chapter, you are encouraged to use the local cluster setup that best fits your platform or preferences. In subsequent chapters, we will primarily focus on using K3s or Minikube. These platforms provide a convenient and consistent environment for learning and deploying applications using Kubernetes, ensuring that the concepts and procedures we’ll explore are accessible regardless of the specific local cluster technology used.

Setting up WSL

All details regarding the nature of WSL and the procedures for installing it on Windows are beyond the scope of this book. However, comprehensive guidance on setup steps and in-depth information about WSL can be accessed through the official Microsoft documentation (see [1] in the Further reading section at the end of this chapter):

Figure 2.2 – A conceptual illustration representing WSL on a Windows operating system

Remember, staying updated with the latest WSL versions and features through the official site will enhance your experience and ensure compatibility with the most recent Windows updates.

Setting up VirtualBox

VirtualBox is an open source virtualization software developed by Oracle. It allows users to run multiple operating systems on a single physical computer, creating VMs that can operate independently. This makes it an invaluable tool for software testing, development, and educational purposes as it provides a flexible and isolated environment for running and experimenting with different operating systems without risk to the host system:

Figure 2.3 – The VirtualBox home page at https://www.virtualbox.org/

The detailed steps for installing VirtualBox are beyond the scope of this book. However, comprehensive installation instructions and additional information can be found in the official documentation [2].

For the most current information and tips, visiting the official VirtualBox documentation is highly recommended.

Unless otherwise specified, for this chapter and the subsequent ones, we will assume the use of an Ubuntu-22.04 LTS installation within WSL. This setup provides a consistent and controlled environment for our examples and demonstrations.

By focusing on a specific version of Ubuntu, we ensure that the instructions and scenarios presented are as relevant and applicable as possible, aligning closely with the most common and stable Linux distribution used in WSL.

K3s setup and installation verification

In this section, we’ll cover the basic steps that are necessary to establish a Kubernetes cluster using K3s in its default configuration, assuming that WSL is already installed and functioning correctly.

Downloading and installing K3s

Follow these steps to download and install K3s:

  1. Let’s start by opening a new Terminal window and typing the following command:
    $ wsl --install -d Ubuntu-22.04

    At a certain stage, the setup will require you to specify a UNIX username (for example, pietro), which does not need to match your Windows username. The next step involves setting the password that will be used to run a command as an administrator (sudo). If the operations are completed correctly, the Terminal window should look like this:

Figure 2.4 – Successfully installing an instance of Ubuntu 22.04.3 LTS on WSL

  2. Before proceeding with the K3s setup, it is good practice to update the operating system with the latest patches:
    $ sudo apt update
    $ sudo apt upgrade

    This ensures that you are working with the most recent and secure versions of the software.

The apt update and apt upgrade commands

The apt update and apt upgrade commands are fundamental in maintaining the software on systems using the APT package manager, commonly found in Debian-based Linux distributions such as Ubuntu. The apt update command refreshes the local package index by retrieving the latest information about available packages and their versions from configured sources. This doesn’t install or upgrade any packages and instead updates the package lists to inform the system of new, removed, or updated software. Once the package index has been updated, the apt upgrade command is used to upgrade installed packages to their latest versions. It downloads and installs the updates for any packages where newer versions are available, ensuring the system is up-to-date and potentially more secure.

If required, enter the password you set up while installing Ubuntu. After executing these commands, the Terminal should look as follows:

Figure 2.5 – Terminal window after executing the apt update and apt upgrade commands

  3. The next step is to install K3s using the following command:
    $ curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644

    The preceding command will download and set up the necessary tools, followed by launching the K3s server. The successful setup of a K3s instance is depicted in Figure 2.6:

Figure 2.6 – Successfully setting up K3s

Verifying the K3s installation

Two commands are needed to check that K3s has been set up and configured correctly. The first one is as follows:

$ k3s --version

The preceding command is used to check which version of K3s we are running. If the K3s server is running correctly, we should be able to see a message similar to the following:

Figure 2.7 – The result of executing the k3s --version command
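
Although the exact values will differ with every release, the version output generally takes the following shape (the version number, commit hash, and Go version shown here are purely illustrative):

    k3s version v1.28.5+k3s1 (5b2d1271)
    go version go1.20.12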

The second command that checks the correctness of the K3s setup is as follows:

$ k3s check-config

The k3s check-config command performs a diagnostic check on the system’s configuration to ensure it is suitable for running a K3s cluster. It verifies critical aspects such as kernel compatibility, required system dependencies, and the presence of necessary features and modules. This command helps in identifying potential issues or missing configurations before proceeding with the K3s installation, ensuring a smoother setup process:

Figure 2.8 – Successful output of the k3s check-config command

Congratulations! You have confirmed that the K3s server has been installed in your local development environment. Now, it’s time to verify the Kubernetes cluster and deploy a test application.

Checking the Kubernetes cluster

To confirm that our K3s node is up and running, let’s type the following command:

$ kubectl get nodes

If the Kubernetes cluster is working correctly, the preceding command will produce the following output:

Figure 2.9 – Example output after running the kubectl get nodes command
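
As an indicative example (the node name reflects your WSL hostname, and the age and version depend on your installation), the output of kubectl get nodes on a single-node K3s cluster typically resembles the following:

    NAME     STATUS   ROLES                  AGE   VERSION
    ubuntu   Ready    control-plane,master   2m    v1.28.5+k3s1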

After confirming that the node is up and running correctly, we can run the following command to obtain more information about the running cluster:

$ kubectl cluster-info

The kubectl cluster-info command is a useful tool in Kubernetes for obtaining essential information about a cluster. When executed, it displays key details such as the endpoint addresses of the Kubernetes control plane and core services. This command helps users quickly understand the state and connectivity of their cluster’s control plane and core services such as KubeDNS and, when applicable, the dashboard. It is particularly valuable for troubleshooting and ensuring that the Kubernetes cluster is configured correctly and operational. Easy to use, kubectl cluster-info is often one of the first commands you should run to verify the health and status of a Kubernetes environment, as shown here:

Figure 2.10 – Information provided after executing the kubectl cluster-info command
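
Purely as an illustration (addresses and ports depend on your setup; the K3s API server listens on port 6443 by default), the output includes entries along the following lines, followed by similar lines for any other core cluster services such as metrics-server:

    Kubernetes control plane is running at https://127.0.0.1:6443
    CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy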

kubectl

kubectl is a command-line tool that serves as the primary interface for interacting with Kubernetes. It allows users to deploy applications, inspect and manage cluster resources, and view logs. Essentially, kubectl provides the necessary commands to control Kubernetes clusters effectively. Users can create, delete, and update parts of their Kubernetes applications and infrastructure using this versatile tool. It is designed to be user-friendly, offering comprehensive help commands and output formatting options, making it easier to understand and manage complex Kubernetes environments. kubectl is an indispensable tool for developers and system administrators working with Kubernetes, offering a robust and flexible way to handle containerized applications and services in various environments.
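
A few everyday commands illustrate the breadth of what kubectl can do (the resource names in angle brackets are placeholders):

    $ kubectl get pods                   # list Pods in the current namespace
    $ kubectl describe pod <pod-name>    # show detailed state and recent events for a Pod
    $ kubectl logs <pod-name>            # print a container's logs
    $ kubectl apply -f manifest.yaml     # create or update resources from a manifest file
    $ kubectl delete -f manifest.yaml    # remove the resources defined in a manifest file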

Kubernetes manifest

A Kubernetes manifest is a configuration file, typically written in YAML or JSON, that defines resources that should be deployed to a Kubernetes cluster. It specifies the desired state of objects, such as Pods, Services, or Deployments, that Kubernetes needs to create and manage. This manifest enables users to declare their applications’ requirements, networking, and storage configurations, among other settings, in a structured and versionable format.

As an example, a basic Kubernetes manifest for deploying a simple application might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: hw-gitops-folks
spec:
  containers:
  - name: hw-gitops-folks-container
    image: k8s.gcr.io/echoserver:1.4
    ports:
    - containerPort: 8080

In this manifest, a Pod named hw-gitops-folks is defined. It contains one container named hw-gitops-folks-container, which uses the echoserver:1.4 image from Kubernetes’ container registry. The container exposes port 8080. This manifest, when applied to a Kubernetes cluster, will create a Pod running a simple echo server that can be used for basic testing.
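
If you would like to try this manifest yourself, save it to a file (the filename here is just an example) and manage it with the standard kubectl commands:

    $ kubectl apply -f hw-gitops-folks.yaml      # create the Pod defined above
    $ kubectl get pod hw-gitops-folks            # check that it reaches the Running state
    $ kubectl delete -f hw-gitops-folks.yaml     # clean up when you are done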

Our first deployment with K3s

Now that we have successfully set up, configured, and verified our K3s cluster, we are poised to embark on an exciting phase: preparing for our first deployment. This step marks a significant milestone in our journey as we transition from the foundational aspects of K3s to actively utilizing the cluster for practical applications. The upcoming deployment process will not only reinforce our understanding of Kubernetes concepts but also demonstrate the real-world utility of our K3s environment. It’s a moment where theory meets practice, allowing us to see firsthand how our configured cluster can host and manage applications. Let’s proceed with an eagerness to explore the capabilities of our Kubernetes setup while keeping the practices we’ve learned and the robust infrastructure we’ve established in mind:

  1. Let’s begin by typing the following command, which should list all the running Pods:
    $ kubectl get pods

    The result of its execution should look something like this:

    No resources found in default namespace
  2. The preceding output is normal since no deployments have been performed so far. Let’s try another command:
    $ kubectl get pods --all-namespaces

    This time, the result should be different as we are requesting to include Pods running in all namespaces, both user-defined and system-defined, such as those within the predefined kube-system namespace. These Pods are essential for the operation of the Kubernetes system. The specific Pods and their statuses are detailed in Figure 2.11, offering a comprehensive view of the active system components within this crucial namespace:

Figure 2.11 – Example of running Pods in the kube-system namespace

What is a namespace in Kubernetes?

In Kubernetes, a namespace is a fundamental concept that’s used to organize clusters into logically isolated sub-groups. It provides a way to divide cluster resources between multiple users and applications. Essentially, namespaces are like virtual clusters within a physical Kubernetes cluster. They allow for resource management, access control, and quota management, enabling efficient and secure multi-tenancy environments. For instance, different development teams or projects can operate in separate namespaces, without interference. Namespaces also facilitate resource naming, ensuring that resources with the same name can coexist in different namespaces. They play a crucial role in Kubernetes for scalability and maintaining order, especially in larger systems with numerous applications and teams.
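
You can list the namespaces that already exist in your cluster with kubectl get namespaces; a freshly installed K3s cluster typically shows at least the four system-created namespaces below (ages will differ):

    $ kubectl get namespaces
    NAME              STATUS   AGE
    default           Active   5m
    kube-node-lease   Active   5m
    kube-public       Active   5m
    kube-system       Active   5m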

Creating different namespaces in Kubernetes is widely regarded as a best practice for several compelling reasons. Namespaces provide a logical partitioning of the cluster, allowing for more organized and efficient resource management. This separation is particularly beneficial in environments with multiple teams or projects as it ensures a clear distinction between resources, reduces naming conflicts, and enhances security by isolating workloads. Additionally, namespaces facilitate fine-grained access control as administrators can assign specific permissions and resource limits to different namespaces, preventing accidental or unauthorized interactions between distinct parts of the cluster. By using namespaces, teams can also streamline deployment processes and monitor resource usage more effectively, leading to a more robust and scalable Kubernetes environment. In essence, namespaces are crucial in maintaining order, security, and efficiency in complex Kubernetes clusters. So, let’s get started by creating one:

  1. Let’s continue by creating a new namespace before continuing with our first deployment:
    $ kubectl create namespace gitops-kubernetes

    The response to this command should look something like this:

    namespace/gitops-kubernetes created
  2. The command to delete a namespace is as follows:
    $ kubectl delete namespace gitops-kubernetes
  3. For the first deployment, we will create a Kubernetes manifest file that defines a deployment for a simple “hello-world” web page, along with a corresponding service to expose it. This manifest file will create a deployment that runs a container based on a generic hello-world image and a service to make the deployment accessible (the complete version of the manifest mentioned here can be found in this book’s GitHub repository):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world-deployment
      namespace: gitops-kubernetes
    ...
    spec:
    ...
        spec:
          containers:
          - name: hello-world
            image: nginxdemos/hello
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world-service
      namespace: gitops-kubernetes
    spec:
      type: NodePort
      ...
      ports:
        - protocol: TCP
          port: 80
          nodePort: 30007

    To apply the manifest, we need to save it in a .yaml (or .yml) file, such as hello-world-deployment.yaml (its name isn’t important).

  4. To edit the file, we can use an editor such as nano by running the following command:
    $ nano hello-world-deployment.yaml

    This manifest file has two parts:

    • Deployment: It creates a deployment named hello-world-deployment that runs a container using the nginxdemos/hello image, which serves a simple HTML page. The container is configured to expose port 80. In the metadata section, we have specified to run the Pod in the namespace we created previously – that is, namespace: gitops-kubernetes.
    • Service: It creates a service named hello-world-service of the NodePort type to expose the deployment. This service makes the hello-world application accessible on a port on the nodes in the cluster (in this example, port 30007). In the metadata section, we have specified to run the service in the namespace we created previously – that is, namespace: gitops-kubernetes.
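
The elided parts of the preceding manifest (the replica count, the selector, and the Pod template labels) are available in full in this book’s GitHub repository. Purely as an illustrative completion – assuming a single replica and an app: hello-world label to wire the Service to the Pods – the whole manifest could look like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world-deployment
      namespace: gitops-kubernetes
    spec:
      replicas: 1                    # assumed: one replica is enough for a local test
      selector:
        matchLabels:
          app: hello-world           # assumed label; must match the Pod template below
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
          - name: hello-world
            image: nginxdemos/hello
            ports:
            - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world-service
      namespace: gitops-kubernetes
    spec:
      type: NodePort
      selector:
        app: hello-world             # assumed: selects the Pods created by the Deployment
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
          nodePort: 30007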

NodePort

In this hello-world service example, the NodePort service type was chosen to demonstrate a simple way of exposing a service to external traffic in Kubernetes. NodePort opens a specific port on all the nodes; any traffic sent to this port is forwarded to the service. While this is useful for development and testing, it may not be ideal in a real-world cloud scenario, especially when running on a VM in the cloud. This is because NodePort exposes a port on the host VM/node, potentially posing a security risk by making the service accessible externally. In production environments, more secure and controlled methods of exposing services are typically preferred.
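
Once the manifest has been applied (in the next step), you could also reach the service directly through the NodePort from your WSL shell. As a quick illustration (the node address reported by -o wide will differ on your machine):

    $ kubectl get nodes -o wide      # note the INTERNAL-IP column
    $ curl http://<NODE-IP>:30007    # replace <NODE-IP> with the address from the previous command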

  5. To apply this manifest, use the kubectl apply -f <filename>.yaml command:
    $ kubectl apply -f hello-world-deployment.yaml

    The response to this command should look something like this:

    deployment.apps/hello-world-deployment created
    service/hello-world-service unchanged
  6. Now, we can list the Pods and services running in the gitops-kubernetes namespace using the following command:
    $ kubectl get pods --namespace gitops-kubernetes && kubectl get services --namespace gitops-kubernetes

    The result of this command is shown in Figure 2.12:

Figure 2.12 – Results of applying the deployment file, where we can see useful information such as the Cluster-IP and the assigned ports

Now that we have deployed our application in the Kubernetes cluster, the next crucial step is to test its functionality. This is where port forwarding plays a key role.

Port forwarding

Port forwarding with kubectl allows us to temporarily route traffic from our local machine to a pod in the Kubernetes cluster. This method is especially useful for testing purposes as it enables us to interact with the application as if it were running locally, without the need to expose it publicly. By forwarding a local port to a port on the pod, we can verify the deployment’s operational aspects, ensuring that our application behaves as expected in a controlled environment before making it accessible to external traffic. The following steps outline the process for executing port forwarding on the running pod and testing its functionality using curl:

  1. Start port forwarding: Use the following kubectl command to start port forwarding from a local port to a port on the Pod:
    $ kubectl port-forward pod/[POD_NAME] [LOCAL_PORT]:[REMOTE_PORT]

    Replace [POD_NAME] with the name of your Pod. For instance, in Figure 2.12, the name of the pod is hello-world-deployment-6b7f766747-nxj44. Here, [LOCAL_PORT] should be replaced with the local port on your machine – for example, 9000 (ensure that the local port is not already used by another running service!) – while [REMOTE_PORT] should be replaced with the port on the Pod that you want to forward traffic to. In our case, as reported in Figure 2.12, the Pod port is 80.

  2. At this point, we are using the Pod’s name, hello-world-deployment-6b7f766747-nxj44. So, if we want to forward traffic from local port 9000 to the Pod’s port, 80, the command would be as follows:
    $ kubectl port-forward hello-world-deployment-6b7f766747-nxj44 --namespace gitops-kubernetes 9000:80

    This will produce the following output:

    Forwarding from 127.0.0.1:9000 -> 80
    Forwarding from [::1]:9000 -> 80

    The preceding output indicates that port forwarding is set up on your machine to redirect traffic from a local port to a port on a Kubernetes Pod or another network service. Keep this command running as it maintains the port forwarding session.

  3. Open a new Terminal or Command Prompt and type the following command to open a new WSL shell:
    $ wsl -d Ubuntu-22.04
  4. Use curl to send a request to the local port that is being forwarded:
    $ curl http://localhost:9000

    This command sends a request to your local machine on port 9000, which kubectl then forwards to the Pod’s port (80). You should see the output of the request in your Terminal. Typically, this is the content that’s served by your application running in the Kubernetes Pod, as shown in Figure 2.13:

Figure 2.13 – Example of content served by our application running in the Kubernetes Pod

Congratulations on achieving this remarkable result! You’ve successfully deployed your first application in Kubernetes, and the content is being correctly served, as evidenced by the successful curl call. This is a significant milestone in your journey with Kubernetes, showcasing your ability to not only deploy an application but also ensure its proper functioning within the cluster.

In the upcoming section, we will delve deeper into Docker, closely examining its essential components, functionalities, and practical applications. We’ll build our first Docker image and demonstrate how to run it as a container locally.
