Introduction

Kubernetes has revolutionized the way we deploy and manage containerized applications. With its robust orchestration capabilities, it has become the go-to platform for scaling and automating containerized workloads.

However, there are scenarios where running virtual machines is preferred or necessary. This is where Virtink, a Kubernetes add-on for virtualization, comes into play. Virtink allows the use of Virtual Machines alongside containerized applications within a Kubernetes cluster.

In this blog post we will dive into Virtink, covering its purpose, its key features, and how it enables seamless integration of Virtual Machines within Kubernetes clusters. By the end of this tutorial you should have a deeper understanding of Virtink and a Cluster API Kubernetes cluster running it.

To find out more about Virtink, check out SmartX’s blog post, which goes into more depth on its background and aims.

Key Benefits of Virtink

  1. Hybrid Workloads: Virtink enables running hybrid workloads consisting of both containers and virtual machines. This flexibility is particularly useful when transitioning legacy applications or accommodating specific software requirements that are better suited for virtual machines.

  2. Infrastructure Consolidation: By running both containers and VMs on the same Kubernetes cluster, Virtink helps consolidate infrastructure, reducing hardware and management overhead. This simplifies the overall architecture and streamlines operations.

  3. Unified Management: With Virtink, you can manage both containers and VMs using a single platform, eliminating the need for separate infrastructure management tools. This simplifies monitoring, logging, scaling, and other administrative tasks.

Installation and Demo of Virtink

To demonstrate deploying Virtink and using Virtual Machines within a Kubernetes cluster, we will be deploying an external Virtink workload cluster using the Cluster API utility.

Prerequisites:

  • A Kubernetes cluster v1.16 ~ v1.25 with KVM functionality and Cluster API initialised

  • The Kubernetes apiserver must have --allow-privileged=true in order to run Virtink’s privileged DaemonSet. It is usually set by default.

  • cert-manager v1.0 ~ v1.8 installed in the Kubernetes cluster. You can install it with kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml.
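If Cluster API has not been initialised on the cluster yet, that step typically looks like the following (assuming the Virtink infrastructure provider is available to your clusterctl, as the generate command later in this post implies):

clusterctl init --infrastructure virtink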

Create a kubeconfig with permissions for Virtink

Use the following manifest to create a service account with the permissions required to create, delete, and get the Virtual Machines (and their Services) that will be deployed. Scoping access to a dedicated service account like this is considered a best practice, so this step should not be skipped:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: virtink-infra-cluster
rules:
- apiGroups:
  - virt.virtink.smartx.com
  resources:
  - virtualmachines
  verbs:
  - create
  - delete
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - delete
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: virtink-infra-cluster
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: virtink-infra-cluster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: virtink-infra-cluster
subjects:
  - kind: ServiceAccount
    name: virtink-infra-cluster
    namespace: default
EOF
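
Before moving on, you can confirm the objects were created:

kubectl get serviceaccount virtink-infra-cluster
kubectl get clusterrole,clusterrolebinding virtink-infra-cluster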

The next few commands build a kubeconfig, authenticated as that service account, which the Virtink provider will use to access this infrastructure cluster. They start from a copy of your existing admin kubeconfig (assumed here to be at ~/.kube/config), strip out its certificate credentials, and substitute the service account’s token:

cp ~/.kube/config virtink-infra-cluster.kubeconfig
kubectl config --kubeconfig virtink-infra-cluster.kubeconfig unset users.kubernetes-admin.client-certificate
kubectl config --kubeconfig virtink-infra-cluster.kubeconfig unset users.kubernetes-admin.client-key
SA_SECRET="$(kubectl get sa virtink-infra-cluster -o jsonpath='{.secrets[0].name}')"
SA_TOKEN="$(kubectl get secret "${SA_SECRET}" -o jsonpath='{.data.token}' | base64 -d)"
kubectl config --kubeconfig virtink-infra-cluster.kubeconfig set-credentials kubernetes-admin --token="${SA_TOKEN}"
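
Note that on Kubernetes v1.24 and later, a ServiceAccount no longer gets a token Secret created automatically, so the SA_SECRET lookup above may come back empty. In that case you can create a long-lived token Secret yourself before re-running the last three commands (a sketch; the Secret name virtink-infra-cluster-token is our own choice):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: virtink-infra-cluster-token
  annotations:
    kubernetes.io/service-account.name: virtink-infra-cluster
type: kubernetes.io/service-account-token
EOF

Finally, the kubeconfig needs to be stored in a Secret that the VirtinkCluster will later reference via infraClusterSecretRef (we assume the conventional kubeconfig key name here):

kubectl create secret generic virtink-infra-cluster --from-file=kubeconfig=virtink-infra-cluster.kubeconfig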

Create the Virtink workload cluster

To create a Virtink workload cluster, we must first export the environment variables that the cluster template uses: the control plane Service type and a reference to the infrastructure cluster Secret created earlier:

export VIRTINK_CONTROL_PLANE_SERVICE_TYPE=LoadBalancer
export VIRTINK_INFRA_CLUSTER_SECRET_NAME=virtink-infra-cluster
export VIRTINK_INFRA_CLUSTER_SECRET_NAMESPACE=default

Then we can focus on the manifest that will actually generate the cluster. The manifest first needs to be generated with clusterctl and then edited.

To generate the manifest:

clusterctl generate cluster --infrastructure virtink:v0.7.0 capi-quickstart > capi-virtink.yaml

Note that v0.7.0 should be replaced with the current version of the Virtink Cluster API provider.

Make sure to add the annotations your load balancer implementation requires to the VirtinkCluster object in the output manifest. The relevant section looks like this:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VirtinkCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  controlPlaneServiceTemplate:
    metadata:
      namespace: default
      <where to insert annotations>
    type: LoadBalancer
  infraClusterSecretRef:
    name: virtink-infra-cluster
    namespace: default

In our specific case we use Hetzner Cloud, so we enter the annotations required by its load balancer controller:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VirtinkCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  controlPlaneServiceTemplate:
    metadata:
      namespace: default
      annotations:
        load-balancer.hetzner.cloud/location: fsn1
        load-balancer.hetzner.cloud/use-private-ip: "false"
        load-balancer.hetzner.cloud/ipv6-disabled: "true"
        load-balancer.hetzner.cloud/disable-private-ingress: "true"
    type: LoadBalancer
  infraClusterSecretRef:
    name: virtink-infra-cluster
    namespace: default

The annotations you need will differ depending on which KVM-capable cloud provider you are using.

Under this metadata field there is also a field called type, which defaults to NodePort; make sure it is set to LoadBalancer, as in the examples above.

There is one other field that needs to be changed, located towards the top of the YAML file in the kind: Cluster section. We need to change the Services CIDR block under the spec from its current value to 10.0.0.0/24. For example:

spec:
  clusterNetwork:
    services:
      cidrBlocks:
        - 10.0.0.0/24

The manifest can then be saved with the changes, and applied to the cluster using:

kubectl apply -f capi-virtink.yaml
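
Provisioning will take a few minutes. From the management cluster, you can watch progress with:

clusterctl describe cluster capi-quickstart
kubectl get cluster,machines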

Accessing the cluster

To access the workload cluster, we need to retrieve its kubeconfig, which we will use both to install Virtink and to manage Virtual Machines.

The command for getting the kubeconfig is below:

clusterctl get kubeconfig capi-quickstart > kubeconfig.yaml
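
The remaining commands in this post target the workload cluster, so point kubectl at the new kubeconfig (adjust the path if you saved it elsewhere) and check that the nodes are registering:

export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl get nodes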

Apply the Virtink YAML

After the workload cluster is confirmed as being up and online, we need to apply the Virtink YAML, which creates the CRDs, controllers, and other components required for creating Virtual Machines.

Apply the Virtink YAML with this command:

kubectl apply -f https://github.com/smartxworks/virtink/releases/download/v0.13.0/virtink.yaml
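
Before deploying a VM, it is worth waiting for Virtink’s components in the virtink-system namespace to become ready (the 300-second timeout is our own choice):

kubectl -n virtink-system wait pod --all --for=condition=Ready --timeout=300s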

Deploying a VM into the cluster

Once you have verified that the Virtink components are running, you can deploy a Virtual Machine using the manifest below:

cat <<EOF | kubectl apply -f -
apiVersion: virt.virtink.smartx.com/v1alpha1
kind: VirtualMachine
metadata:
  name: ubuntu-container-rootfs
spec:
  instance:
    memory:
      size: 1Gi
    kernel:
      image: smartxworks/virtink-kernel-5.15.12
      cmdline: "console=ttyS0 root=/dev/vda rw"
    disks:
      - name: ubuntu
      - name: cloud-init
    interfaces:
      - name: pod
  volumes:
    - name: ubuntu
      containerRootfs:
        image: smartxworks/virtink-container-rootfs-ubuntu
        size: 4Gi
    - name: cloud-init
      cloudInit:
        userData: |-
          #cloud-config
          password: password
          chpasswd: { expire: False }
          ssh_pwauth: True
  networks:
    - name: pod
      pod: {}
EOF

It may take some time for this Virtual Machine to come online because, like any pod, it first has to fetch the image it is deploying. You can wait for the Virtual Machine to reach the Running phase with the following command:

kubectl wait vm ubuntu-container-rootfs --for jsonpath='{.status.phase}'=Running --timeout -1s

Accessing the Virtual Machine via SSH

Use the commands below to access your newly created Virtual Machine via SSH:

export VM_NAME=ubuntu-container-rootfs

export VM_POD_NAME=$(kubectl get vm $VM_NAME -o jsonpath='{.status.vmPodName}')

export VM_IP=$(kubectl get pod $VM_POD_NAME -o jsonpath='{.status.podIP}')

kubectl run ssh-$VM_NAME --rm --image=alpine --restart=Never -it -- /bin/sh -c "apk add openssh-client && ssh ubuntu@$VM_IP"

If all goes well, you will be asked to confirm the VM’s host key, then prompted for the password we set in the cloud-init config (password), and finally dropped into a familiar Linux shell prompt:

ubuntu@ubuntu-container-rootfs:~$

You can use the command uname -a to verify that you are inside the VM.

Cleanup

Cluster cleanup

When you are finished with your workload cluster, run the following command against the management cluster (unset KUBECONFIG or switch contexts back first) to delete it:

kubectl delete cluster capi-quickstart

You can then delete the management cluster itself using whichever tooling you used to create it.

VM Cleanup

If you want to shut down the machine without deleting it, you can power it off with this command:

kubectl patch vm ubuntu-container-rootfs --subresource=status --type=merge -p '{"status":{"powerAction":"PowerOff"}}'
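
Assuming the same powerAction mechanism, the mirror-image patch should power it back on:

kubectl patch vm ubuntu-container-rootfs --subresource=status --type=merge -p '{"status":{"powerAction":"PowerOn"}}'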

Or, if you decide the VM is no longer needed, you can delete it entirely by deleting the VirtualMachine object:

kubectl delete vm ubuntu-container-rootfs

Conclusion

Virtink extends Kubernetes’ capabilities to seamlessly incorporate virtual machines, empowering organizations with the flexibility to run diverse workloads on a unified platform. By leveraging Kubernetes’ orchestration and management features, Virtink simplifies the deployment and operation of hybrid container-VM environments. As the demand for integrating containers and virtual machines continues to rise, Virtink emerges as a valuable add-on, bridging the gap between these two technologies and enabling a more versatile and efficient infrastructure for modern applications.

We can see from the demo that combining Virtink with Cluster API allows for seamless management and provisioning of virtual machines within Kubernetes clusters. We can leverage Cluster API to quickly and easily spin up a Virtink workload cluster, and then use Virtink to run virtual machines in that cluster.

Need help running Kubernetes?

Get in touch and see how we can help you.

Contact Us