Chapter 4. KubeVirt Use Cases

If you’re already using Kubernetes, KubeVirt brings you the power to manage VMs using the tools that are familiar to you. KubeVirt can run as many VMs as your cluster resources and scheduler allow, and it supports persistent storage for VM disks. This opens up a number of use cases, from managing traditional VM workloads on Kubernetes to spinning up test environments to running Kubernetes on Kubernetes.

Managing Traditional Workloads

Kubernetes is a powerful application orchestration engine that eases the complexities of distributed computing. Using it with traditional VM workloads can have a number of benefits. Bringing VM-based and containerized workloads together means you don’t have to maintain separate environments, personnel, and skill sets for both. This makes it easier to bring VM-based workloads closer to your DevOps workflows, because you can create VMs declaratively and manage them with Kubernetes commands (and virtctl, of course). Instead of building separate DevOps pipelines for your containerized and VM-based workloads, you can combine them and manage them from one place.
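As a minimal sketch of what that declarative workflow looks like, the following manifest defines a small KubeVirt VirtualMachine; the names and the demo container disk image are illustrative, not part of any particular workload:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-vm                  # illustrative name
spec:
  running: false                   # define the VM without starting it
  template:
    metadata:
      labels:
        kubevirt.io/vm: legacy-vm  # label used later to select the VM's pod
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo  # demo image

After applying the manifest with kubectl apply, virtctl start legacy-vm boots the VM and virtctl stop legacy-vm shuts it down, so the same Git repository and pipeline that holds your container manifests can hold your VM definitions as well.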

Working with Legacy Applications

Some applications can’t simply be relocated to cloud-native environments. For applications that require specific kernel modules or settings, need to be tightly coupled with a dedicated data store, or are built on an uncommon, antique, or proprietary language or hardware platform, containerization may not be easy.

With KubeVirt, any app you can run on a virtual or physical server can be moved to a VM managed by virt-launcher in a Kubernetes pod. This means you can use Kubernetes in cloud-native environments to manage older applications, proprietary applications you can’t rearchitect, or applications made up of a patchwork of different technologies. Managed in Kubernetes, these applications can communicate more easily with containerized applications.
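One way that communication works in practice: the virt-launcher pod inherits the labels from the VM template, so a standard Kubernetes Service can sit in front of the VM, and containerized applications reach it by its DNS name like any other workload. A minimal sketch, assuming the kubevirt.io/vm: legacy-vm label from the earlier example and a legacy application listening on port 8080 inside the guest:

apiVersion: v1
kind: Service
metadata:
  name: legacy-app                # illustrative name
spec:
  selector:
    kubevirt.io/vm: legacy-vm     # matches the label on the VM's pod
  ports:
    - port: 80
      targetPort: 8080            # port the legacy application listens on in the guest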

The Kubernetes control plane and its powerful scheduler make it easier to manage large or distributed applications, automatically placing and rescheduling KubeVirt VM pods to keep those applications running.
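Two fields in the VM template hint at how this works. The fragment below (part of the spec.template.spec section of the earlier example; the node label is illustrative) pins the VM to a class of nodes and asks KubeVirt to live-migrate it rather than shut it down when its node is drained; live migration has additional requirements, such as migratable storage:

spec:
  template:
    spec:
      nodeSelector:
        workload-class: legacy       # illustrative node label
      evictionStrategy: LiveMigrate  # live-migrate instead of stopping on node drain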

Not all legacy applications are easy to run in containerized VMs. Mainframe-based software or applications that require hardware connections to sensors or other infrastructure can be more difficult to run in KubeVirt. However, because KubeVirt leverages the network connectivity of the pod, any hardware that connects over the network should work with a KubeVirt VM. This makes it possible to connect modern VMs with legacy hardware and software in a containerized environment, allowing legacy applications to fit into modern architectures and development pipelines.

Application Modernization

In some cases, it makes sense to rearchitect a monolithic legacy application, breaking it into cloud-native, containerized microservices to take advantage of distributed computing. This is a significant undertaking, sometimes involving months or years of development, during which operations can’t just pause.

It is often possible to migrate and refactor an application gradually, breaking out functions as microservices one by one. When this is the case, KubeVirt makes it possible to manage the containerized and legacy parts of the application together, behind a single control plane. For much more detail on microservices, see Sam Newman’s book Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith (O’Reilly, 2019).

Building Test Environments

A test environment is a machine, or set of machines, you can set up in exactly the same way, time after time, to run tests, often with a known set of test data. It is designed to provide information about the behavior of an application, comparing that behavior to the requirements that have been set. Often it is an isolated copy of the production environment, giving insight into how the application will perform in real scenarios.

KubeVirt lets you declaratively define a VM template, specifying the exact machine characteristics you need, then use it to run a VM image you’ve uploaded using CDI. You can use Kubernetes-native tools to create identical instances of the same machine as needed, automating test cycles and integrating them with your DevOps pipeline. At the same time, you can create many variations on the same test environment, enabling you to test specific parts of an application individually.
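As a sketch of that workflow, the CDI DataVolume below imports a disk image over HTTP into a persistent volume claim that a test VM can boot from; the URL, name, and size are placeholders:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: test-image
spec:
  source:
    http:
      url: "https://example.com/images/app-under-test.qcow2"  # placeholder URL
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

A VM template can then reference the resulting claim (or include it as a dataVolumeTemplates entry) as its root disk, so every test run boots from the same known-good image, and a variation is just a copy of the template with a different image URL or different machine resources.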

Running Virtual Appliances

Because a virtual appliance stands in for a hardware appliance, it’s no surprise that virtual appliances are not always easy to containerize. A virtual appliance often relies on physical or virtual server capabilities, such as kernel settings or modules, direct access to storage, or a specific operating system. Virtual appliances are often designed as VM images, ready to run on QEMU or KVM. KubeVirt, which leverages QEMU and KVM, makes it possible to run appliances such as Virtual Network Functions (VNFs) in containers, providing the flexibility of Kubernetes with the kernel isolation of VMs.
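For a VNF-style appliance, the relevant part of the VM template is usually its network configuration. The fragment below is a sketch that assumes a Multus NetworkAttachmentDefinition named vnf-net already exists on the cluster; it gives the VM the standard pod network plus a bridged secondary interface for data-plane traffic:

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}          # standard pod network
            - name: dataplane
              bridge: {}              # secondary interface for the appliance
      networks:
        - name: default
          pod: {}
        - name: dataplane
          multus:
            networkName: vnf-net      # assumes this NetworkAttachmentDefinition exists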

Deploying Kubernetes on Kubernetes

You can use KubeVirt VMs as nodes in a virtualized Kubernetes cluster—running Kubernetes on Kubernetes. This is useful for Kubernetes providers who need to provide multitenancy to their customers with strict isolation among tenants. With KubeVirt, you can create identical VMs across hybrid Kubernetes environments, bringing consistency to large deployments across providers or infrastructures transparently.

The Kubernetes Cluster API, a declarative Kubernetes cluster management framework, includes tools for working with KubeVirt. With these two tools, it’s possible to create and destroy Kubernetes clusters as easily as pods. This capability is useful for creating ephemeral clusters for testing.
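A rough sketch of that workflow, assuming clusterctl is installed, KubeVirt is running on the management cluster, and the Cluster API provider for KubeVirt has been configured (the provider also expects details such as a node VM image, omitted here; the version number and machine counts are placeholders):

# Install the Cluster API core components plus the KubeVirt infrastructure provider
clusterctl init --infrastructure kubevirt

# Render a workload cluster whose nodes are KubeVirt VMs, then apply it
clusterctl generate cluster test-cluster \
  --infrastructure kubevirt \
  --kubernetes-version v1.28.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 2 | kubectl apply -f -

# Tear the whole cluster down again when the test run is finished
kubectl delete cluster test-cluster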
