System virtualisation: a myriad of possibilities
System virtualisation represents the ability to create multiple environments or (virtual) instances on the same hardware (physical) resource, increasing productivity and efficiency in both business and personal scenarios, as well as in cloud or on-premises environments.
It is a concept that can be applied to resources such as processing, storage, or network communications, as well as data, applications, and workstations.
My first memory of this concept dates back to the early years of this millennium (perhaps 2002), when the first versions of VMware Workstation were made publicly available. Since then, virtualisation has evolved significantly, creating possibilities in data centre infrastructure management and changing the paradigm in software and service development and maintenance.
System virtualisation: the clash of the titans
Despite several earlier projects (from IBM, Apple, Connectix, among others), the turn of the millennium brought immense innovations in the field of system virtualisation, making VMware one of the main players in the market.
Microsoft followed this trend with the acquisition of Connectix's Virtual PC and Virtual Server product lines, releasing Hyper-V (which shipped with Windows Server) in 2008. In parallel, open-source solutions such as Virtuozzo (in 2005), KVM (in 2007) and Docker (in 2013) appeared, each with an innovative and distinctive approach.
Many years later, and after a series of developments and acquisitions by tech giants, we have reached 2024 with several virtualisation solutions to meet all requirements, scenarios, and budgets.
Microsoft has since discontinued the standalone Hyper-V Server product, betting everything on Azure Stack HCI, a hybrid offering that integrates on-premises and Azure cloud resources for all types of services in the Microsoft ecosystem.
VMware continues to offer a wide range of products, whether for end-user computing (Horizon and Workspace ONE), desktops (Fusion for Mac, Workstation Player/Pro), cloud (vSphere, vSAN, Cloud Foundation) or the networking and security segment (Carbon Black and NSX).
AWS (Amazon Web Services, launched in 2006) and Google (Google Compute Engine, launched in 2012) built their cloud services on hypervisor technology (Google Compute Engine on KVM, and AWS initially on Xen before moving to its KVM-based Nitro system) and have become leaders, along with Microsoft, in IaaS (Infrastructure as a Service) and PaaS (Platform as a Service).
Other solutions such as Nutanix Cloud Platform, Citrix Hypervisor, Red Hat Virtualisation, Proxmox, Oracle VM Server, Virtuozzo Hybrid Server, Xen Project, and QEMU continue to be presented as alternatives to the main players, each with its own added value and benefits.
Among all these solutions, I have high expectations for the portfolio of Canonical, the company led by Mark Shuttleworth, which develops and supports Ubuntu (a server and workstation operating system), participates in the Ceph project (storage), and develops Multipass (virtualisation), LXD (unified management of virtual machines and containers), Juju (an orchestration engine), MAAS (Metal as a Service), Data Fabric (data integration and processing) and MicroK8s (Kubernetes).
Considering Canonical's 2016 partnership with Microsoft (which brought the Ubuntu Bash shell to Windows 10), and its presence at the Ubuntu Summit 2023 event (held in November 2023), we may be witnessing a historic turning point for the London-based company. Since 2018, Canonical has been considering going public with an initial public offering (IPO), and companies such as Netflix, eBay, Walmart, AT&T and Telekom, which have chosen Canonical platforms to develop their services, would be among the main interested parties.
From hypervisor to Kubernetes
Regardless of the companies that develop each of these technologies, the truth is that system virtualisation was a concept that completely changed the paradigm of data centre management, software development and maintenance.
The hypervisor creates an intermediate layer between the physical hardware (the hosts) and the virtual machines (usually called guests), managing the physical resources that are shared by the virtual workloads.
Hypervisors generally fall into two categories: type 1 (also known as bare metal), where the hypervisor interacts directly with the hardware resources and no additional operating system is required (e.g. Proxmox or KVM), and type 2, where the hypervisor runs as an application on top of an existing operating system (e.g. VMware Workstation Player, Oracle VM VirtualBox).
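On a Linux host, you can check whether the CPU exposes the hardware extensions that type 1 hypervisors such as KVM rely on. A minimal sketch, assuming a Linux machine with a readable /proc/cpuinfo:

```shell
# Count the CPU entries advertising hardware virtualisation support:
# vmx = Intel VT-x, svm = AMD-V. A result of 0 means KVM cannot use
# hardware acceleration on this host.
grep -cE 'vmx|svm' /proc/cpuinfo || true

# lsmod reports whether the kvm kernel modules are currently loaded
lsmod | grep -i kvm || echo "kvm modules not loaded"
```

On a machine without these flags, type 2 solutions can still run guests, but with a noticeable performance penalty.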
This intermediate layer between the host hardware and the guest operating system has made it possible to optimise many of the tasks performed by sysadmins (system administrators), particularly in the installation, validation and maintenance processes.
By taking advantage of templates (pre-configured models), it is now possible to deploy virtual servers in a matter of minutes. Using snapshots (point-in-time records) or cloning, it is now possible to test updates or restore older versions whenever necessary. Using live migration features (the transfer of running virtual workloads between hosts), it is now possible to take infrastructure components (e.g. physical servers) offline without any downtime for the virtual workloads. These are just a few examples of the advantages of operating system virtualisation.
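Using libvirt's virsh CLI as one concrete example, the snapshot and live-migration workflows described above look roughly like this (the guest name vm1, snapshot name pre-update and target host host2 are hypothetical placeholders):

```shell
# Take a point-in-time snapshot before applying updates
virsh snapshot-create-as vm1 pre-update "state before patching"

# If the update misbehaves, roll the guest back to the snapshot
virsh snapshot-revert vm1 pre-update

# Move the running guest to another host with no downtime,
# e.g. to free the current physical server for maintenance
virsh migrate --live vm1 qemu+ssh://host2/system
```

Other hypervisors expose the same operations through their own tooling (vSphere, Proxmox and Hyper-V all offer equivalent snapshot and live-migration features).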
On the other hand, the emergence of containers (e.g. Docker) allows the application component to be isolated, virtualising only this layer and other dependencies or configurations, creating much lighter and more portable images.
This containerisation platform allows applications to be packaged and isolated in containers, which are lightweight and share the host operating system kernel. It differs from a common hypervisor because it does not create virtual machines or differentiated operating systems. Instead, Docker containers use the host’s (physical machine) resources and provide an efficient way to install and run applications without worrying about dependencies or getting stuck with compatibility issues.
Although Docker and both types of hypervisors (type 1 and 2) provide a layer of isolation, the underlying technology and approach are different. Hypervisors create virtual machines that simulate hardware and run their own operating systems. Docker containers, on the other hand, share the host operating system kernel and provide a lighter and more efficient solution because they virtualise only the resources necessary for running applications.
This component has made it possible to optimise many of the tasks of application development teams, as they no longer have to worry about the dependencies of the various environments (development, testing or production), since the Docker container already includes all the resources necessary for its correct execution. As such, installing or migrating applications only requires the availability of a host with Docker active, capable of executing a previously created image (“docker run [OPTIONS] IMAGE [COMMAND] [ARG…]”). Instead of spending time preparing environments, installing dependencies and libraries, and changing configuration files, the development team only has to create an initial image containing all the resources necessary to run the application. It can even reuse images managed and made available by other teams or software producers (e.g. a web server or database engine).
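As a minimal sketch of that workflow (the image name, application script and dependency are hypothetical), packaging an application and running it on any Docker-enabled host could look like this:

```shell
# Describe the image: base interpreter, dependencies and the app itself
cat > Dockerfile <<'EOF'
# Base image provides the Python interpreter and system libraries
FROM python:3.12-slim
# Bake the dependency into the image instead of the host
RUN pip install requests
# Copy the application code into the image
COPY app.py /app/app.py
# Default command executed when the container starts
CMD ["python", "/app/app.py"]
EOF

# Build the image once...
docker build -t my-app:1.0 .
# ...then run it on any host where Docker is active
docker run --rm my-app:1.0
```

The same image runs unchanged in development, testing and production, which is precisely what removes the "works on my machine" class of problems.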
The success that Docker has had in the tech community has led to the emergence of new technologies such as Kubernetes, a container orchestration system that enables the deployment, automation, scaling, and management of this ecosystem.
Currently maintained by the Cloud Native Computing Foundation (the result of a partnership between Google and the Linux Foundation), this technology (commonly referred to as k8s) is based on the concept that the manager must define the parameters, requirements and limitations, and the Kubernetes cluster must ensure that these objectives are met in the most efficient way.
The main components are the worker machines, called ‘nodes,’ which run the containerised applications. The worker nodes host ‘pods,’ the smallest deployable units in Kubernetes, each grouping one or more containers that share storage and network resources.
As mentioned, the cluster manager defines the number of worker machines as well as the desired scaling parameters, and the control plane is responsible for distributing the load across the cluster's processing resources. This control plane includes components such as the kube-apiserver (which exposes the Kubernetes API as the cluster's front end), etcd (a key-value store that records the cluster's state), the scheduler and the controller-manager, among others.
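With kubectl and access to a cluster (which this sketch assumes), these components can be inspected directly; in most distributions the control-plane processes themselves run as pods:

```shell
# List the worker machines (nodes) and their status
kubectl get nodes -o wide

# Control-plane components (kube-apiserver, etcd, scheduler,
# controller-manager) typically run as pods in the kube-system namespace
kubectl get pods -n kube-system
```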
One of the most interesting features of Kubernetes is ‘namespaces,’ which allow you to create logical divisions between environments (such as separating production and testing environments). Other features or functionalities such as ‘services,’ ‘volumes,’ ‘secrets,’ and ‘helm charts’ ensure a comprehensive solution for technology and information system development and operation environments.
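A quick sketch of the namespace feature (the namespace names and the manifest file app-deployment.yaml are hypothetical): the same manifest can be applied to logically separate environments on one cluster:

```shell
# Create logical divisions for two environments
kubectl create namespace production
kubectl create namespace testing

# The same manifest deployed twice, isolated by namespace
kubectl apply -n production -f app-deployment.yaml
kubectl apply -n testing -f app-deployment.yaml

# Resources in one namespace do not appear in queries on the other
kubectl get pods -n testing
```

Combined with resource quotas and role-based access control, this is what lets a single cluster safely serve several teams or environments.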
What’s next?
Institutions, companies, foundations, and other players in these domains have been presenting the market (at a breakneck pace) with constant updates, new products, and services, requiring constant updating and specialisation from professionals in the sector.
With Broadcom’s recent acquisition of VMware (at the end of 2023, for $69 billion), reactions are expected from other players, but also from VMware’s major customers (some of whom are already looking for alternatives). Will VMware (now part of Broadcom) maintain its leading position in Gartner’s Magic Quadrant?
Will 2024 be a year of innovation in virtualisation technologies? Will there be new leaders in the sector in the short term? These are some of the questions that arise.