Protection: Virtual Everything
In the world of computing, ensuring security, privacy, and efficient management of resources is one of the most significant challenges facing modern systems. Virtual memory and virtual machines (VMs) play crucial roles in meeting these challenges, providing isolation, enhanced control, and resource optimization. Let's dive deeper into how these technologies make systems more secure and efficient.
Virtual Memory: An Introduction
Virtual memory is a foundational concept in modern operating systems that provides each process with an isolated view of memory. This means that programs can operate as if they have dedicated access to a complete memory system, even though physical memory is shared among multiple processes. The abstraction provided by virtual memory significantly enhances system protection by preventing processes from interfering with one another.
One of the core components enabling virtual memory is the page table, which maps virtual addresses used by processes to physical addresses in the main memory. This mapping process ensures that a program's memory access is isolated, thereby enhancing security and preventing accidental or malicious interference. The concept of memory being divided into pages, typically 4 KB or 8 KB in size, allows for efficient and fine-grained management of memory access permissions.
Page Tables and Page Table Entries (PTEs)
A page table is a data structure that maps virtual addresses to physical addresses. Each entry in the page table, known as a Page Table Entry (PTE), contains crucial information about the page, including the physical address, access permissions, and status bits (e.g., valid bit, dirty bit, and reference bit). This mapping is essential for providing the process with its own logical address space while physically sharing memory with other processes.
Valid Bit: Indicates whether the page is currently in physical memory. If the bit is not set, accessing this page will trigger a page fault.
Access Permissions: Include read, write, and execute permissions that help in enforcing security and isolating processes from one another.
Dirty Bit: Set when a page is modified, indicating that its contents must be written back to disk before being evicted.
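To make this concrete, here is a minimal sketch in C of what a PTE might hold. The layout and field widths are hypothetical (a toy 32-bit format), not those of any particular architecture:

```c
#include <stdint.h>

/* Illustrative 32-bit PTE layout -- field widths are hypothetical,
 * not taken from any specific architecture. */
typedef struct {
    uint32_t frame_number : 20; /* physical frame this page maps to     */
    uint32_t valid        : 1;  /* page is resident in physical memory  */
    uint32_t dirty        : 1;  /* page was modified since it was loaded */
    uint32_t referenced   : 1;  /* page was accessed recently           */
    uint32_t readable     : 1;  /* access permissions                   */
    uint32_t writable     : 1;
    uint32_t executable   : 1;
    uint32_t unused       : 6;  /* reserved bits                        */
} pte_t;
```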
The page table is maintained by the operating system and resides in main memory. Since accessing the page table for every memory reference can be slow, specialized hardware such as the Translation Lookaside Buffer (TLB) is used to speed up address translation.
Translation Lookaside Buffer (TLB)
The Translation Lookaside Buffer (TLB) is a specialized cache that stores a small number of recent virtual-to-physical address translations to accelerate memory access. When a process accesses a memory location, the virtual address is first checked in the TLB:
TLB Hit: If the translation is found in the TLB, the physical address is retrieved quickly.
TLB Miss: If the translation is not found in the TLB, the page table must be accessed, leading to additional latency. After retrieving the mapping, the TLB is updated.
A well-designed TLB reduces the number of page table accesses, thereby improving overall system performance. The effectiveness of the TLB depends on the locality of reference, as processes often access the same memory pages repeatedly.
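The following sketch shows the logic of a TLB lookup, assuming a toy direct-mapped TLB; real TLBs are typically set-associative or fully associative, and the sizes here are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64
#define PAGE_SHIFT  12           /* 4 KB pages */

typedef struct {
    uint64_t vpn;                /* virtual page number (the tag) */
    uint64_t pfn;                /* physical frame number         */
    bool     valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Translate a virtual address; returns true on a TLB hit. On a miss
 * the caller would walk the page table and then call tlb_insert(). */
bool tlb_lookup(uint64_t vaddr, uint64_t *paddr) {
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];   /* direct-mapped index */
    if (e->valid && e->vpn == vpn) {
        *paddr = (e->pfn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1));
        return true;                            /* TLB hit */
    }
    return false;                               /* TLB miss */
}

void tlb_insert(uint64_t vpn, uint64_t pfn) {
    tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];
    e->vpn = vpn; e->pfn = pfn; e->valid = true;
}
```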
Page Faults and Demand Paging
A page fault occurs when a process attempts to access a page that is not currently in physical memory. Page faults trigger the demand paging mechanism, where the operating system loads the required page from disk into memory.
The steps for handling a page fault include:
Page Fault Detection: The hardware raises a page-fault exception, transferring control to the operating system.
Page Retrieval: The operating system identifies the missing page, allocates a free frame in memory, and reads the page from secondary storage (e.g., hard disk).
Page Table and TLB Update: The page table entry is updated, and the TLB is loaded with the new translation so the faulting instruction can be restarted without faulting again.
Demand paging is an efficient way to utilize memory, as only the pages that are needed by a process are loaded, thereby reducing memory overhead.
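The sketch below simulates demand paging in miniature: pages start out non-resident, and the first touch of each page "faults" and allocates a frame. It is purely illustrative; a real handler runs inside the kernel in response to a hardware exception:

```c
/* Toy demand-paging simulation: pages are loaded only on first touch.
 * This toy never runs out of frames; eviction (what happens when it
 * does) is the subject of the next section. */
#include <stdio.h>

#define NUM_PAGES 8

static int page_to_frame[NUM_PAGES];  /* -1 means "not resident" */
static int next_free_frame = 0;
static int faults = 0;

int access_page(int page) {
    if (page_to_frame[page] == -1) {          /* page fault detected   */
        faults++;
        int frame = next_free_frame++;        /* allocate a free frame */
        /* (a real OS would read the page from disk here)              */
        page_to_frame[page] = frame;          /* update the page table */
    }
    return page_to_frame[page];
}

int main(void) {
    for (int i = 0; i < NUM_PAGES; i++) page_to_frame[i] = -1;
    int refs[] = {0, 1, 0, 2, 1, 3};
    for (int i = 0; i < 6; i++)
        printf("page %d -> frame %d\n", refs[i], access_page(refs[i]));
    printf("page faults: %d\n", faults);
    return 0;
}
```

Running it reports four faults for six references: only the first touch of each page pays the cost of loading it.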
Page Replacement Policies
When physical memory is full, the operating system needs to decide which page to evict to make space for a new one. Several page replacement policies are used to select a victim page:
Least Recently Used (LRU): Evicts the page that has not been used for the longest period of time. LRU aims to take advantage of the locality of reference.
First In, First Out (FIFO): Evicts the oldest page in memory. While simple, FIFO can behave counterintuitively: for some reference patterns, giving a process more frames actually increases the number of page faults, a phenomenon known as Belady's anomaly.
Optimal (OPT): Replaces the page that will not be used for the longest time in the future. This policy is theoretical as it requires future knowledge of page references.
The goal of these policies is to minimize the page-fault rate and improve performance by keeping frequently accessed pages in memory.
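To see LRU in action, here is a small simulation that counts page faults for a reference string with a fixed number of frames. Note that real kernels only approximate LRU (typically with reference bits), since exact per-access timestamps are too expensive:

```c
/* Counts page faults for a reference string under exact LRU. */
#include <stdio.h>

#define FRAMES 3

int lru_faults(const int *refs, int n) {
    int page[FRAMES], last_use[FRAMES], faults = 0;
    for (int f = 0; f < FRAMES; f++) page[f] = -1;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < FRAMES; f++)      /* look for the page */
            if (page[f] == refs[t]) hit = f;
        if (hit >= 0) { last_use[hit] = t; continue; }   /* hit */

        faults++;                              /* miss: pick a victim */
        int victim = 0;
        for (int f = 1; f < FRAMES; f++)       /* prefer an empty frame,
                                                  else least recently used */
            if (page[f] == -1 ||
                (page[victim] != -1 && last_use[f] < last_use[victim]))
                victim = f;
        page[victim] = refs[t];
        last_use[victim] = t;
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 1, 4, 2, 5, 1, 2, 3};
    printf("LRU faults: %d\n", lru_faults(refs, 10));
    return 0;
}
```

For this reference string and three frames, the program reports 8 faults; with a better-behaved (more local) reference string the count drops sharply.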
Demand Paging, Caching, and EMAT
How paging and caching are configured determines the average cost of a memory access, captured by the Effective Memory Access Time (EMAT, defined formally below). Different configurations have varying impacts on performance:
Paging Only: In a paging-only system, every memory reference requires an extra access to the page table, and any page fault adds a far slower disk access. This setup relies heavily on effective page replacement policies to minimize page faults.
Caching Only: In a caching-only system, frequently accessed memory blocks are stored in a fast-access cache. This configuration improves access times by reducing the number of accesses to slower main memory.
Paging and Caching Combined: The combined approach uses both paging and caching to leverage the benefits of each. The TLB acts as a cache for page table entries, while other caches (such as L1 and L2 caches) store frequently accessed data blocks, reducing both address translation and data access latencies.
EMAT Equations
The Effective Memory Access Time (EMAT) is a measure used to determine the average time it takes to access memory, accounting for paging, caching, and TLB effects. The EMAT can be calculated using different equations based on the system configuration:
Paging Only
In a paging-only system without a TLB, each memory reference requires accessing the page table, followed by accessing the actual data:
$$EMAT = (1 - p) \times (m + t) + p \times (m + t + p_{f} \times d)$$
Where:
\(p\): Page fault rate (probability that a reference causes a page fault).
\(m\): Time to access memory.
\(t\): Time to access the page table.
\(p_{f}\): Probability that a page fault requires a disk access.
\(d\): Time to access the disk.
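To make the formula concrete, here is a worked example with purely illustrative values: \(m = 100\) ns, \(t = 100\) ns, \(p = 0.001\), \(p_{f} = 1\), and \(d = 10^{7}\) ns (10 ms):

$$EMAT = 0.999 \times (100 + 100) + 0.001 \times (100 + 100 + 10^{7}) = 199.8 + 10000.2 = 10200 \text{ ns}$$

Even a fault rate of one in a thousand inflates the average access time from 200 ns to over 10 µs, which is why minimizing page faults matters so much.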
Paging with TLB
In a paging system with a TLB, there are two scenarios: a TLB hit, after which only the memory access remains, and a TLB miss, which adds a page-table access. Ignoring page faults:
$$EMAT = h \times (T_{TLB} + m) + (1 - h) \times (T_{TLB} + t + m) = T_{TLB} + m + (1 - h) \times t$$
Where:
\(T_{TLB}\): Time to access the TLB.
\(h\): TLB hit rate (probability that the TLB contains the required translation).
\(t\): Time to access the page table.
\(m\): Time to access memory.
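With illustrative values \(T_{TLB} = 10\) ns, \(m = 100\) ns, \(t = 100\) ns, and \(h = 0.98\) (page faults ignored):

$$EMAT = 10 + 100 + (1 - 0.98) \times 100 = 112 \text{ ns}$$

A high hit rate keeps the average close to the cost of a single memory access plus the TLB lookup.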
Paging and Caching Combined
In a combined paging and caching system, we consider the effects of both the TLB and data caching. Assuming the translation step (TLB, falling back to the page table on a miss) is followed by a data access that checks the cache first and falls back to main memory:
$$EMAT = T_{TLB} + (1 - h) \times t + T_{cache} + c_{miss} \times m$$
Where:
\(T_{cache}\): Time to access the cache.
\(c_{hit}\): Cache hit rate (probability that the data is found in the cache).
\(c_{miss}\): Cache miss rate, \(c_{miss} = 1 - c_{hit}\).
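The three configurations are easy to compare numerically. The short program below plugs illustrative timings into each formula; every number in it is an assumption chosen for the comparison, not a measurement:

```c
/* Quick EMAT calculator for the three configurations above, using
 * illustrative timings (ns) and rates. */
#include <stdio.h>

int main(void) {
    double m = 100.0, t = 100.0, d = 1e7;   /* memory, page table, disk */
    double T_tlb = 10.0, T_cache = 5.0;     /* TLB and cache latencies  */
    double p = 0.001, p_f = 1.0;            /* fault rate, disk probability */
    double h = 0.98;                        /* TLB hit rate             */
    double c_miss = 0.05;                   /* cache miss rate          */

    double paging_only = (1 - p) * (m + t) + p * (m + t + p_f * d);
    double with_tlb    = T_tlb + m + (1 - h) * t;
    double combined    = T_tlb + (1 - h) * t + T_cache + c_miss * m;

    printf("paging only : %.1f ns\n", paging_only);
    printf("with TLB    : %.1f ns\n", with_tlb);
    printf("TLB + cache : %.1f ns\n", combined);
    return 0;
}
```

With these inputs the combined configuration averages about 22 ns per access, versus 112 ns with a TLB alone and more than 10 µs for unassisted paging with occasional faults.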
Virtual and Physical Indexing in Caches
Modern CPUs use various indexing techniques for caching to enhance memory access performance. There are four main types:
Virtually Indexed, Physically Tagged (VIPT): In VIPT caches, the cache index is derived from the virtual address, but the tag is from the physical address. This approach allows for a fast lookup using the virtual address, but the physical tag ensures correctness. It combines some benefits of both virtual and physical indexing.
Physically Indexed, Physically Tagged (PIPT): PIPT caches are indexed and tagged using physical addresses, ensuring no aliasing issues. However, PIPT introduces higher latency compared to VIPT caches because the physical address translation must complete before accessing the cache.
Virtually Indexed, Virtually Tagged (VIVT): In VIVT caches, both indexing and tagging are done using virtual addresses. While this approach is fast, it can lead to synonym problems, where different virtual addresses map to the same physical address. VIVT caches require extra mechanisms to handle these issues, making them less practical.
Physically Indexed, Virtually Tagged (PIVT): This approach makes little sense in modern systems: indexing with the physical address means the cache lookup must wait for address translation (as in PIPT), while the virtual tags reintroduce the aliasing problems of VIVT. It combines the drawbacks of both without providing clear advantages.
In general, VIPT caches are preferred due to their ability to balance speed and correctness, allowing parallel TLB lookups and cache indexing. PIPT caches, while simpler and free of aliasing issues, have longer access latencies. VIVT and PIVT designs are largely avoided due to their inherent limitations and the complexities they introduce.
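The key trick behind VIPT is worth seeing in bits. With 4 KB pages, the low 12 bits of an address (the page offset) are unchanged by translation; if the cache index is drawn entirely from those bits, indexing with the virtual address selects exactly the same set as indexing with the physical one. The sizes below are illustrative:

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_SHIFT 6              /* 64-byte cache lines      */
#define NUM_SETS   64             /* 64 sets -> 6 index bits  */

/* Index bits are [11:6], entirely inside the 12-bit page offset. */
uint32_t cache_index(uint64_t addr) {
    return (addr >> LINE_SHIFT) & (NUM_SETS - 1);
}

int main(void) {
    uint64_t vaddr = 0x12345ABC;  /* made-up virtual address           */
    uint64_t paddr = 0x98765ABC;  /* same page offset, different frame */
    /* Because the index bits lie within the page offset, both agree: */
    printf("virtual index  = %u\n", cache_index(vaddr));
    printf("physical index = %u\n", cache_index(paddr));
    return 0;
}
```

This is also why VIPT designs constrain cache size: each way can be at most the page size times the associativity, or the index would spill into translated bits.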
The operating system and architecture join forces to allow processes to share the hardware yet not interfere with each other. To do this, the architecture must limit what a process can access when running a user process yet allow an operating system process to access more. At a minimum, the architecture must:
Provide at least two modes, indicating whether the running process is a user process or an operating system process. This latter process is sometimes called a kernel process or a supervisor process.
Provide a portion of the processor state that a user process can use but not write. This state includes a user/supervisor mode bit, an exception enable/disable bit, and memory protection information. Users are prevented from writing this state because the operating system cannot control user processes if users can give themselves supervisor privileges, disable exceptions, or change memory protection.
Provide mechanisms whereby the processor can go from user mode to supervisor mode and vice versa. The first direction is typically accomplished by a system call, implemented as a special instruction that transfers control to a dedicated location in supervisor code space. The PC is saved from the point of the system call, and the processor is placed in supervisor mode. The return to user mode is like a subroutine return that restores the previous user/supervisor mode (a minimal example follows this list).
Provide mechanisms to limit memory accesses to protect the memory state of a process without having to swap the process to disk on a context switch.
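On Linux, for instance, the user-to-supervisor transition in the third requirement is exercised every time a program makes a system call. A minimal, Linux-specific sketch using the C library's syscall(2) wrapper:

```c
/* A user process crossing into supervisor mode and back: syscall()
 * executes the architecture's system-call instruction, the kernel
 * services the request at its privilege level, and control returns
 * to user mode with the result. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* write(2) via the raw system-call interface: traps to the kernel */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```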
Paged virtual memory means that every memory access logically takes at least twice as long, with one memory access to obtain the physical address and a second access to get the data. This cost would be far too dear. The solution is to rely on the principle of locality: if the accesses have locality, then the address translations for the accesses must also have locality. By keeping these address translations in a special cache, a memory access rarely requires a second access to translate the address. This special address translation cache is referred to as a translation lookaside buffer (TLB).
A TLB entry is like a cache entry where the tag holds portions of the virtual address and the data portion holds a physical page address, protection field, valid bit, and usually a use bit and a dirty bit. The operating system changes these bits by changing the value in the page table and then invalidating the corresponding TLB entry. When the entry is reloaded from the page table, the TLB gets an accurate copy of the bits.
The Role of Virtual Machines in System Protection
Virtual machines are another crucial innovation that helps in protecting computing systems. A Virtual Machine (VM) is essentially a software-based simulation of a physical computer, providing an isolated execution environment. VMs offer three key benefits when it comes to security and system management:
Isolation and Containment: Virtual machines provide complete isolation between different instances running on the same physical hardware. This isolation ensures that even if one VM is compromised, the others remain unaffected. This characteristic is especially useful in cloud computing environments where multiple users share the same physical hardware.
Managing Hardware and Software Complexity: Virtual machines provide a unified interface for running different operating systems and applications. This abstraction simplifies the management of both hardware and software stacks, allowing systems to run legacy software alongside new versions and providing a controlled environment to test potentially risky applications without affecting the host system.
Security by Design: The Virtual Machine Monitor (VMM), also known as a hypervisor, plays a critical role in managing VMs. The VMM runs at the highest privilege level and controls how guest VMs access physical resources. This ensures that only authorized operations are executed, offering an additional layer of security over traditional operating systems. With a significantly smaller code base than a complete OS, the VMM is less prone to vulnerabilities, making it a robust security mechanism.
The software that supports VMs is called a virtual machine monitor (VMM) or hypervisor; the VMM is the heart of virtual machine technology. The underlying hardware platform is called the host, and its resources are shared among the guest VMs. The VMM determines how to map virtual resources to physical resources: A physical resource may be time-shared, partitioned, or even emulated in software. The VMM is much smaller than a traditional OS; the isolation portion of a VMM is perhaps only 10,000 lines of code.
How Virtual Memory and Virtual Machines Work Together
Both virtual memory and virtual machines aim to provide isolation and controlled access to resources, albeit in different contexts. Virtual memory isolates processes in a single operating system environment, while virtual machines isolate entire operating system instances. Together, they complement each other to provide a comprehensive solution for resource management and security.
For instance, when a guest operating system runs in a VM, it uses virtual memory to manage the processes it executes. The VMM, in turn, provides a layer of abstraction that manages how this guest OS itself accesses the underlying physical memory. This layered approach ensures that each virtual machine believes it has complete control over its memory while the VMM transparently manages the actual resource allocation.
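The sketch below models this layered translation as the composition of two tables: the guest's page table (guest-virtual to guest-physical) and the VMM's map (guest-physical to host-physical). The mappings are made up for illustration; shadow page tables and hardware nested paging exist precisely to precompute or accelerate this composition:

```c
#include <stdio.h>

#define PAGES 4

/* Guest OS's view: guest-virtual page -> guest-physical page */
static int guest_pt[PAGES] = {2, 0, 3, 1};
/* VMM's map: guest-physical page -> host-physical page (hypothetical) */
static int vmm_map[PAGES]  = {7, 4, 6, 5};

/* The translation the hardware must ultimately perform is the
 * composition of the two mappings. */
int guest_virtual_to_host_physical(int gv_page) {
    int gp_page = guest_pt[gv_page];   /* guest page table walk */
    return vmm_map[gp_page];           /* VMM's real mapping    */
}

int main(void) {
    for (int p = 0; p < PAGES; p++)
        printf("guest VP %d -> host PP %d\n",
               p, guest_virtual_to_host_physical(p));
    return 0;
}
```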
Challenges in Virtual Machine Implementation
The implementation of virtual machines poses unique challenges, especially in terms of performance. Executing privileged instructions that interact with hardware directly can be particularly problematic. When a guest operating system in a VM attempts to execute such instructions, they must be intercepted by the VMM to prevent unauthorized changes to the host system. This is known as trap-and-emulate, where the VMM catches these privileged operations and emulates them in a controlled manner.
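Here is a conceptual sketch of trap-and-emulate dispatch. The names and the two example operations are hypothetical; the point is that the VMM updates the guest's virtual CPU state rather than letting the guest touch the real hardware:

```c
#include <stdio.h>

typedef enum { OP_DISABLE_INTERRUPTS, OP_LOAD_PAGE_TABLE } priv_op_t;

typedef struct {
    int interrupts_enabled;        /* the guest's *virtual* CPU state */
    unsigned long page_table_base;
} vcpu_t;

/* Invoked when the hardware traps a privileged instruction executed
 * by the guest; the VMM decodes it and emulates its effect. */
void vmm_handle_trap(vcpu_t *vcpu, priv_op_t op, unsigned long operand) {
    switch (op) {
    case OP_DISABLE_INTERRUPTS:
        vcpu->interrupts_enabled = 0;    /* emulate; real CPU untouched */
        break;
    case OP_LOAD_PAGE_TABLE:
        vcpu->page_table_base = operand; /* VMM validates, then shadows */
        break;
    }
}

int main(void) {
    vcpu_t vcpu = { 1, 0 };
    vmm_handle_trap(&vcpu, OP_DISABLE_INTERRUPTS, 0);
    printf("guest interrupts enabled: %d\n", vcpu.interrupts_enabled);
    return 0;
}
```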
Modern virtual machine technologies, like paravirtualization used in the Xen VMM, attempt to mitigate these overheads. In paravirtualization, the guest operating system is modified slightly to be aware that it is running inside a virtualized environment. This allows the VMM and guest OS to cooperate more efficiently, reducing the performance penalty typically associated with full virtualization.
Early in the development of VMs, a number of inefficiencies became apparent. For example, a guest OS manages its virtual to real page mapping, but this mapping is ignored by the VMM, which performs the actual mapping to physical pages. In other words, a significant amount of wasted effort is expended just to keep the guest OS happy. To reduce such inefficiencies, VMM developers decided that it may be worthwhile to allow the guest OS to be aware that it is running on a VM. For example, a guest OS could assume a real memory as large as its virtual memory so that no memory management is required by the guest OS.
Allowing small modifications to the guest OS to simplify virtualization is referred to as paravirtualization, and the open source Xen VMM is a good example. The Xen VMM, which is used in Amazon Web Services' data centers, provides a guest OS with a virtual machine abstraction that is similar to the physical hardware, but it drops many of the troublesome pieces. For example, to avoid flushing the TLB, Xen maps itself into the upper 64 MB of the address space of each VM. It allows the guest OS to allocate pages, just checking to be sure it does not violate protection restrictions. To protect the guest OS from the user programs in the VM, Xen takes advantage of the four protection levels available in the 80x86. The Xen VMM runs at the highest privilege level (0), the guest OS runs at the next level (1), and the applications run at the lowest privilege level (3). Most OSes for the 80x86 keep everything at privilege levels 0 or 3.
Applications and Use Cases of Virtual Machines
Virtual machines have become a cornerstone of modern computing infrastructure, especially in cloud computing and data center environments. Major cloud service providers like Amazon Web Services (AWS) rely heavily on virtualization to provide scalable, secure, and flexible computing environments for their clients.
Managing Software Stacks: Virtual machines allow multiple versions of operating systems to run on a single physical server. This feature is particularly useful for software development and testing, where developers need to test their applications across different operating system versions.
Hardware Independence: Virtual machines can also be used to migrate applications from outdated hardware to newer systems without requiring significant changes to the software. This hardware abstraction layer provides an effective way to extend the lifecycle of legacy applications.
Managing Hardware: One reason for multiple servers is to have each application running with its own compatible version of the operating system on separate computers, as this separation can improve dependability. VMs allow these separate software stacks to run independently yet share hardware, thereby consolidating the number of servers. Another example is that some VMMs support migration of a running VM to a different computer, either to balance load or to evacuate from failing hardware.
The Future of Virtualization in Security
As threats to computing systems evolve, virtualization technologies like virtual memory and virtual machines will continue to play a critical role in securing information systems. Innovations in hardware-based virtualization support are already providing performance improvements by allowing VMs to execute privileged instructions more efficiently. Future trends include the integration of virtualization capabilities directly into the hardware to further minimize the overheads of virtualization.
The combination of virtual memory and virtual machines provides a robust mechanism to ensure that system resources are used efficiently and securely. As the demand for computing power continues to grow, these technologies will be indispensable for managing complexity, isolating processes, and safeguarding data against a wide range of threats.