VMs vs. containers: Which one should you use in 2023?

Most companies are moving their applications to something called containers. If that term does not mean much to you yet, don’t worry: I will break it down for you. We will also look at the circumstances, as we head into 2023, in which virtual machines are still useful.

What is a virtual machine?

A virtual machine is a software emulation of a physical computer. It allows a single physical machine to run multiple operating systems and applications at the same time, each isolated from the others and kept secure. Running several applications on one physical server this way can be more cost-effective than dedicating a server to each application. Virtual machines are created and managed with virtualization software such as VMware or VirtualBox.

How does virtualization work?

Virtualization technology works by creating a software-based layer between the physical hardware of a computer and the operating system and applications that run on it. This layer, known as a hypervisor, is responsible for managing the allocation of hardware resources to the various virtual machines that are running on the physical computer.

The hypervisor acts as a sort of “traffic cop” that manages the flow of data between the virtual machines and the physical hardware. When a virtual machine needs to access the CPU, memory, or storage resources of the physical computer, it sends a request to the hypervisor, which then determines whether the request can be granted based on the availability of the requested resources and the allocation of those resources to other virtual machines.

[Figure: an example of how a hypervisor partitions resources between different virtual machines]

The hypervisor also provides each virtual machine with its own virtualized hardware, such as virtual CPUs, virtual memory, and virtual storage devices. This allows the virtual machines to run their own operating systems and applications as if they were running on a physical computer, without being aware of the presence of the other virtual machines on the same physical hardware.

In addition to managing the allocation of hardware resources, the hypervisor also provides other services to the virtual machines, such as virtual networking, virtual input/output devices, and the ability to migrate virtual machines between physical hosts. These services enable the virtual machines to operate in a flexible and scalable manner, allowing them to be easily moved and resized to meet the changing needs of the applications that run on them.
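
To make the admission decision described above concrete, here is a toy sketch in Go of that bookkeeping: a virtual machine’s request is granted only if enough unallocated CPU and memory remain on the host. The types, names, and numbers are invented for illustration; real hypervisors do far more (scheduling, overcommitment, memory ballooning, and so on).

```go
// Toy model of a hypervisor's admission decision: grant a VM's resource
// request only if enough unallocated capacity remains on the physical host.
// This is purely illustrative, not how any real hypervisor is implemented.
package main

import "fmt"

// Host tracks the physical capacity and what has already been handed out.
type Host struct {
	totalCPUs, usedCPUs   int
	totalMemGB, usedMemGB int
}

// Grant reserves resources for a VM if the host still has room for them.
func (h *Host) Grant(vm string, cpus, memGB int) bool {
	if h.usedCPUs+cpus > h.totalCPUs || h.usedMemGB+memGB > h.totalMemGB {
		fmt.Printf("denied  %s: not enough free capacity\n", vm)
		return false
	}
	h.usedCPUs += cpus
	h.usedMemGB += memGB
	fmt.Printf("granted %s: %d vCPUs, %d GB\n", vm, cpus, memGB)
	return true
}

func main() {
	host := &Host{totalCPUs: 16, totalMemGB: 64}
	host.Grant("vm-a", 8, 32)
	host.Grant("vm-b", 4, 16)
	host.Grant("vm-c", 8, 32) // denied: only 4 vCPUs and 16 GB remain
}
```

Running it prints two granted requests and one denial, mirroring the hypervisor’s role of refusing requests that would exceed the remaining physical capacity.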

History of virtual machines

The concept of a virtual machine can be traced back to the 1960s, when IBM developed the concept of a “logical machine” that could be used to simulate the behavior of a physical computer. This idea was later refined and expanded by other researchers and companies, leading to the development of the first virtual machine programs.

IBM developed the concept of a logical machine as a way to simulate the behavior of a physical computer in order to test and debug software. This allowed programmers to run and test their code in a controlled environment without having to use a physical computer, which was expensive and time-consuming. By creating a simulated environment, IBM was able to improve the efficiency and reliability of its software development process.

Interest in virtualization then waned during the 1980s and 1990s, as cheap UNIX and, later, Linux servers began to flood the market.

Then, in the late 1990s, VMware offered the first virtualization solutions for commodity personal computers, laying the groundwork for the widespread adoption of virtual machine technology in the 2000s. But virtual machines had their drawbacks:

  • They set the trend for companies to buy excessively large hosts to run the hypervisor, sized for peaks in demand (the holiday season, for example) but under-utilized the rest of the year.
  • Consequently, a lot of money is spent on resource capacity that is never used. On the other side of the coin, it is slow and costly to provision a newer, more powerful server when more capacity really is needed.
  • A steeper learning curve, as IT personnel now have to learn to manage hypervisors.
  • Increased disaster recovery and downtime costs, since all user data sits on a single piece of hardware.

Sooner or later, though, virtual machines had to contend with a newer, more efficient technology that solved most of these problems.

Enter containers

A container is a standardized unit of software that packages an application’s code, libraries, and dependencies into a single package. This package can then be run on any compatible host operating system, regardless of the underlying software and dependencies. This allows containers to be easily moved between different environments, such as between a developer’s local machine and a production server, without having to worry about compatibility issues.

Containers do not typically run their own operating system. Instead, they use the host operating system’s kernel to run the application and its dependencies. This allows multiple containers to run on a single host operating system, sharing its kernel and other resources. This makes containers more lightweight and efficient than virtual machines, which each require their own operating system.
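
One way to see this kernel sharing for yourself: the small Go program below (a sketch; any language would do) prints the kernel release reported by /proc. Run it directly on a Linux host and then inside a container on that same host, and both print the same kernel version, because the container is just a set of processes running on the host kernel. Inside a virtual machine, you would see the guest’s kernel instead.

```go
// Print the kernel release the current process is running on.  Because
// containers share the host kernel, this prints the same value on the host
// and inside any container on that host; a VM reports its own guest kernel.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /proc is provided by the kernel itself, so this is always the kernel
	// actually executing the process.
	release, err := os.ReadFile("/proc/sys/kernel/osrelease")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("kernel release:", strings.TrimSpace(string(release)))
}
```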

Also, containers are highly scalable. Since they do not require a separate operating system for each container, multiple containers can run on a single physical host, sharing its resources. This allows organizations to easily add more containers to a host to scale up their applications, or to move containers to other hosts to distribute the load and improve performance. Additionally, many container orchestration tools, such as Kubernetes, provide built-in support for scaling containers up or down based on changing resource requirements.

History of containers

Containers on Linux grew out of features already present in the kernel, such as chroot, namespaces, and control groups (cgroups), which let processes be isolated from one another without booting a separate kernel. Early projects that built container environments on top of these mechanisms include LXC/LXD. (KVM and Xen, which are also often mentioned in this context, are hypervisor technologies for running full virtual machines rather than containers.)

On Windows, Microsoft initially created proprietary virtualization technologies such as Windows Virtual Server and, later, Hyper-V.

In 2013, Docker was released, aiming to make containerization simple and portable across operating systems.

How do containers isolate resources from the host OS?

On Linux, containers use the isolation and resource-management mechanisms provided by the kernel to create environments in which applications can run without interfering with each other. These mechanisms include namespaces, which control what a container can see of the host operating system and of other containers, and cgroups, which let the kernel manage and limit the resources allocated to each container. The chroot mechanism, which changes the apparent root directory for a process, is also used to isolate each container’s file system.
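
As a rough illustration of how these kernel mechanisms are driven, here is a minimal Go sketch (Linux-only, and it needs root) that starts a shell inside new UTS, PID, and mount namespaces. It is not how Docker or LXC are implemented, and it leaves out cgroups and the file-system isolation step (chroot/pivot_root), but it shows that namespace isolation is an ordinary kernel feature any program can request.

```go
// Start /bin/sh in fresh UTS, PID, and mount namespaces (Linux-only).
// Each CLONE_NEW* flag asks the kernel for an isolated copy of one kind of
// global resource: hostname, process IDs, mount table.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}
```

Changing the hostname inside that shell, for example, does not affect the host, because the UTS namespace is private to the child process.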

Similar (though not interoperable) techniques are used in the Windows container infrastructure.

Performance comparison between VMs and containers

Containers are typically smaller than virtual machines (VMs) in storage size. This is because containers do not include a full operating system, as VMs do. Instead, containers use the host operating system’s kernel to run the application and its dependencies. This allows multiple containers to share the host operating system’s kernel, reducing the amount of storage space required. Additionally, many container images are designed to be as small as possible, using lightweight base images and optimizing the size of the application and its dependencies. This makes containers a more efficient and cost-effective solution for running applications in many cases.

As for speed, containers typically start up much faster than virtual machines (VMs), because there is no guest operating system to boot and initialize. Instead, containers use the host operating system’s kernel to run the application and its dependencies (as mentioned), so they can be started almost instantly. This makes containers a more efficient and agile solution for applications that need to be scaled up or down quickly, or that require a fast startup time.

Today, you can find containers everywhere: on every major cloud provider, on Kubernetes, and in stand-alone software such as Docker. But you might still have a reason to use virtual machines. Maybe you need to emulate a different instruction set architecture, something containerization cannot do. QEMU in particular lets you emulate all kinds of processors, and the virtual machine images it creates in the QCOW2 format can be moved between operating systems, as can VirtualBox VM images.

Disclaimer: This post has been written with AI assistance. We try to ensure the information is correct where possible, but we cannot guarantee the accuracy of AI-generated content.
