Module 5: Memory Management Strategies I - Comprehensive Foundations | Operating Systems

Interactive Audio Lesson

Listen to a student-teacher conversation explaining the topic in a relatable way.

Address Binding and Translation

Teacher

Today, we'll discuss address binding, the method by which a logical address is converted into a physical address. Can anyone tell me what logical addresses are?

Student 1

Are they the addresses the CPU generates for a program?

Teacher

Exactly! Great job, Student 1. The CPU works with logical addresses, but they need to be converted into physical addresses in RAM. This is where address binding comes into play. There are three types of address binding: compile time, load time, and execution time. Can anyone explain why execution time binding is beneficial?

Student 2

Because it allows programs to be more flexible and can be loaded anywhere in memory, right?

Teacher

Correct! Execution time binding allows processes to be repositioned in memory while they run, which is essential for modern systems to utilize memory efficiently. We also need to consider memory-related hardware, like the MMU, which performs this translation.

Student 3

Can MMU also help with memory protection?

Teacher

Yes, Student 3! The MMU can enforce memory protection by using relocation and limit registers.

Teacher

To summarize, understanding address binding is crucial for process management, ensuring both flexibility and security in memory utilization.

Dynamic Loading and Linking

Teacher

Next, let's explore dynamic loading and linking. Can someone explain what dynamic loading is?

Student 4

Is it when parts of a program are loaded only when needed instead of loading the entire program at once?

Teacher

Exactly right, Student 4! This strategy saves memory, especially with large executables. Now, what do you think is a potential disadvantage of dynamic loading?

Student 2

There might be a slight overhead when the first call to the routine is made?

Teacher

That's spot on! Now, what about dynamic linking? How does it differ from static linking?

Student 1

Dynamic linking resolves function calls at runtime instead of embedding them at compile time, right?

Teacher

Correct! This allows programs to share memory for common libraries and makes updates easier. However, what’s a common issue related to dynamic linking?

Student 3

DLL Hell? Where new versions of libraries can break compatibility with older applications?

Teacher

Exactly, Student 3! To wrap up, dynamic loading and linking enhance efficiency but require careful management to avoid potential pitfalls.

Contiguous Memory Allocation

Teacher

Now, let’s discuss contiguous memory allocation. Can anyone define what this means?

Student 2

That's when each process is allocated a single, continuous block of memory, right?

Teacher

Exactly. This method has advantages like simplicity, but what is the major challenge it faces?

Student 3

Fragmentation! Both internal and external fragmentation can waste memory.

Teacher

Correct! Internal fragmentation occurs when allocated memory is larger than needed, while external fragmentation occurs when free memory is scattered in chunks. How can we possibly mitigate external fragmentation?

Student 1

Compaction! Moving all occupied blocks to one end to create larger free blocks.

Teacher

Great! Compaction can help, but it is also time-consuming. In summary, while contiguous allocation is simple, it requires strategies to handle fragmentation.

Paging vs. Segmentation

Teacher

Next up is paging. How does paging solve the problem of external fragmentation?

Student 4

By allowing a process's memory to be non-contiguous and dividing it into fixed-size pages?

Teacher

Exactly! This means we can fill any free frame with pages, which eliminates external fragmentation. Can anyone describe an aspect of paging that might lead to internal fragmentation?

Student 2

That would be when the last page allocated is not completely filled, right?

Teacher

Yes! Now, how does segmentation differ from paging in terms of addressing?

Student 3

Segmentation uses variable sizes for segments, matched to the logical structure of programs, while paging uses fixed sizes?

Teacher

Very good, Student 3! Segmentation aligns closely with how programmers structure their code. In conclusion, both paging and segmentation offer unique advantages, making them suitable for different scenarios.

Combination of Paging and Segmentation

Teacher

Finally, let’s look at segmented paging. What do you understand about this hybrid approach?

Student 1

It combines the strengths of both approaches, allowing segments to be paged, which avoids external fragmentation?

Teacher

Exactly! It keeps the logical organization of segments while enabling fixed-size pages to manage physical memory effectively. Can anyone think of a potential downside to this complexity?

Student 4

Well, it adds complexity to address translation, which could slow things down?

Teacher

Exactly right! Each address translation can involve multiple steps. To summarize, segmented paging represents a powerful method for managing memory efficiently while maintaining the logical structure expected by programmers.
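The translation steps the teacher describes can be sketched in code. This is a minimal simulation, not a hardware description: the page size, segment table, and frame numbers below are invented for illustration.

```python
# Sketch of segmented paging: a logical address (segment, page, offset)
# goes through a segment table to a per-segment page table, which maps
# the page to a physical frame. All sizes here are assumptions.

PAGE_SIZE = 256  # bytes per page (illustrative)

# segment_table[s] -> page table for segment s; page_table[p] -> frame number
segment_table = {
    0: {0: 5, 1: 9},   # code segment: pages 0 and 1 live in frames 5 and 9
    1: {0: 2},         # data segment: page 0 lives in frame 2
}

def translate(segment, page, offset):
    """Return the physical address, or raise on an out-of-range access."""
    if segment not in segment_table:
        raise MemoryError("trap: bad segment")
    page_table = segment_table[segment]
    if page not in page_table or not (0 <= offset < PAGE_SIZE):
        raise MemoryError("trap: bad page or offset")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

print(translate(0, 1, 10))  # frame 9 -> 9*256 + 10 = 2314
```

Each access walks two tables, which is exactly the extra translation cost Student 4 points out.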

Introduction & Overview

Read a summary of the section's main ideas. Choose from Basic, Medium, or Detailed.

Quick Overview

This section provides a comprehensive overview of memory management strategies, focusing on hardware mechanisms, address translation, and the challenges of memory allocation.

Standard

In this section, we explore the foundational concepts of memory management, including address translation techniques, contiguous and non-contiguous memory allocation strategies like paging and segmentation, and the implications of fragmentation. Critical mechanisms such as dynamic loading and linking, as well as the significance of memory management hardware, are also discussed.

Detailed

Memory Management Strategies I - Comprehensive Foundations

This module delves into key aspects of memory management in operating systems, focusing on hardware underpinnings and essential strategies for managing memory effectively. Starting with address translation, we discuss how logical addresses generated by the CPU are mapped onto physical addresses in RAM through different techniques of address binding:
- Compile Time Binding: The physical address is determined at compile-time, which is inflexible for modern multiprogramming environments.
- Load Time Binding: The physical address is assigned when the program is loaded, enhancing flexibility, but it is inefficient if the program must be moved after it starts executing.
- Execution Time Binding: The most flexible method, leveraging a Memory Management Unit (MMU) to translate addresses in real-time, enabling advanced techniques like virtual memory and providing memory protection.

We then examine contiguous memory allocation, characterized by fixed and variable partition allocation strategies. Each must grapple with internal and external fragmentation, which can severely impact efficient space utilization. Techniques to mitigate fragmentation, such as compaction, are essential for efficient operation.

The section transitions to non-contiguous memory management via paging and segmentation. Paging, with its elimination of external fragmentation, offers a flexible framework for physical memory allocation. Segmentation provides a logical structure corresponding more closely with programmer expectations. Lastly, we discuss combining paging with segmentation for optimized performance in modern systems, with an emphasis on shared code utilization and improved memory protection.

Audio Book

Dive deep into the subject with an immersive audiobook experience.

Background - The Essential Memory Landscape


Effective memory management is not just about allocating space; it's about translating
addresses, ensuring isolation between processes, and dynamically adapting to program
needs and memory availability. This section lays the groundwork by discussing the core
hardware mechanisms and fundamental software techniques that enable efficient memory
utilization.

Detailed Explanation

Memory management involves several key aspects that work together to help the computer system manage its resources effectively. It starts with translating addresses, which means converting the logical addresses that programs use (the addresses a program thinks it is using) into physical addresses (the actual locations in memory). This translation ensures that processes are kept isolated from one another, allowing them to run without interfering with each other. Finally, memory management has to adapt dynamically to the needs of the programs and the available memory, ensuring that each program has the space it needs to operate effectively.

Examples & Analogies

Think of memory management like a hotel management system. Each guest (program) has a room (memory space) that is uniquely assigned to them. The management system (memory management hardware) makes sure that guests have the right room (logical to physical address translation) and that guests do not enter each other's rooms (ensuring isolation). The staff (memory management techniques) also adapts to changes, like guests checking in or out, making sure every room is used efficiently.

Basic Hardware: The Bridge Between Logical and Physical Addresses


The CPU operates using logical addresses, which are abstract references within a program's
perceived memory space. However, the actual main memory (RAM) is accessed using
physical addresses, which pinpoint specific memory cells. The crucial role of memory
management hardware is to translate these logical addresses into their corresponding
physical counterparts, ensuring correct and protected memory access.

Detailed Explanation

The CPU generates logical addresses when a program runs, and these addresses need to be translated into physical addresses to access the actual data. The translation process is crucial because it allows the system to protect memory space for different processes and ensures that programs can access the required data without running into each other. The memory management unit (MMU) is the hardware component responsible for this translation.

Examples & Analogies

Imagine you are in a library that has both books (logical addresses) and actual shelves where those books are stored (physical addresses). You cannot go directly to the shelf without knowing exactly where the book is located. The cataloging system (MMU) helps you find the right shelf based on the book's title (logical address), ensuring you access the right book without getting lost or messing up the library's organization.

Address Binding: The Act of Translation


Address binding is the process by which a logical address generated by the CPU is
mapped to a physical address in main memory. This binding can occur at various
points in a program's lifecycle, each with implications for flexibility and performance:

  • Compile Time Binding:
  • Mechanism: If the starting physical memory location of a program is known definitively at the time the program is compiled, the compiler can generate absolute code. This means all memory references within the program (e.g., jump instructions, data access) are directly hardcoded with physical addresses.
  • Example: If a program is always loaded at physical address 0x10000, then an instruction JMP label_X where label_X is at logical offset 0x50 within the program, would be compiled as JMP 0x10050.
  • Advantages: Simple, no run-time overhead for address translation.
  • Disadvantages: Extremely inflexible. The program can only run if it is loaded at precisely that fixed physical address. If the starting address changes or if multiple programs need to run simultaneously, this method is impractical. Not used in modern multiprogramming operating systems.
  • Load Time Binding:
  • Mechanism: If the program's starting physical address is not known at compile time, but is determined when the program is loaded into memory, the compiler generates relocatable code. This code contains relative addresses (e.g., JMP +50 bytes from current instruction). A special program called a loader takes this relocatable code and, knowing the actual base physical address where the program is being loaded, modifies all relative addresses within the program to absolute physical addresses before execution begins.
  • Advantages: Allows the program to be loaded anywhere in memory, as long as it gets a contiguous block.
  • Disadvantages: If the program needs to be moved in memory after it has started executing (e.g., for swapping), it would need to be reloaded and all addresses re-bound, which is inefficient.
  • Execution Time (Run Time) Binding:
  • Mechanism: This is the most prevalent and flexible method used in modern operating systems that support true multiprogramming and virtual memory. The binding of logical addresses to physical addresses is deferred until the very last moment – during program execution. This means that an instruction's address is translated every time it is executed.
  • Hardware Requirement: This method necessitates dedicated hardware support, primarily the Memory Management Unit (MMU), to perform the address translation quickly and efficiently.
  • Advantages:
    • Flexibility: A process can be moved around in physical memory during its execution (e.g., swapped out and back in, compacted).
    • Dynamic Relocation: Enables advanced memory management techniques like virtual memory (paging, segmentation), which allow a process to use a logical address space much larger than its allocated physical memory.
    • Memory Protection: The MMU can enforce memory protection, preventing processes from accessing memory regions outside their allocated space.
  • Disadvantages: Adds a small overhead to every memory access due to the translation process (though minimized by MMU hardware).
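The load-time binding mechanism above can be sketched as a loader pass. This is a toy model under stated assumptions: the "instruction" tuples and operand layout are invented for illustration, and the addresses match the compile-time example in the text (base 0x10000, offset 0x50).

```python
# Sketch of load-time binding: a loader rewrites relocatable (relative)
# operands into absolute physical addresses once the load base is known.

relocatable_code = [
    ("JMP", 0x50),    # jump to logical offset 0x50 within the program
    ("LOAD", 0x80),   # load from logical offset 0x80
]

def load(program, base):
    """Patch every relative operand by adding the base load address."""
    return [(op, base + operand) for op, operand in program]

absolute_code = load(relocatable_code, 0x10000)
print(absolute_code)  # [('JMP', 0x10050), ('LOAD', 0x10080)]
```

Note the one-shot nature of the patch: once operands are absolute, moving the program requires re-running the loader, which is exactly the inefficiency cited above.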

Detailed Explanation

Address binding is essential for translating logical addresses (generated by the CPU) into physical addresses (actual locations in memory). This can happen at different times:
1. Compile Time Binding: If the program's memory location is known ahead of time, the code is hardcoded, which is efficient but inflexible, as it limits the program to a specific location.
2. Load Time Binding: This allows for some flexibility during program loading, as the actual memory location can be determined at that time, but it is inefficient if the program needs to be relocated later.
3. Execution Time Binding: This is implemented in modern systems, where translation occurs during execution, allowing greater flexibility and optimization but adding slight overhead due to the translation process each time code is run.

Examples & Analogies

Think of address binding like a student finding their way to different classrooms in a school. Compile time binding is like a student always sitting in the same spot in class, while load time binding allows them to move around at the beginning of the term, but if they need to change their class part way through, it's complicated. Execution time binding is like having a guide who tells them exactly where to go for every new class, allowing them the flexibility to change rooms without worry.

Logical vs. Physical Address Space


  • Logical Address (Virtual Address): This is the address generated by the CPU. It's the address that the program "sees" and refers to. Each process operates within its own logical address space, which typically starts from address 0 for that process. This provides an abstraction, allowing programs to be written independently of the actual physical memory layout.
  • Physical Address: This is the actual address presented to the memory hardware (RAM). It is the real address in main memory where data is stored. Only the MMU and the memory controller interact directly with physical addresses.
  • Relocation Register (Base Register) and Limit Register: These are crucial hardware components in simple run-time address binding systems:
  • Relocation Register (Base Register): This register holds the starting physical address (base address) where the current process is loaded in main memory. Every logical address generated by the CPU for this process has this relocation register's value added to it by the MMU to produce the physical address.
  • Limit Register: This register specifies the size (or length) of the logical address space allocated to the current process. When the MMU translates a logical address, it first checks if the logical address is less than the value in the limit register. If the logical address is greater than or equal to the limit register, it signifies an attempt to access memory outside the process's allocated range. In such cases, the MMU triggers a "trap" (a hardware interrupt), indicating an addressing error (e.g., "segmentation fault").
  • Combined Operation: For a given logical address L, the MMU calculates the physical address P = Relocation_Register + L. Simultaneously, it checks if L < Limit_Register. If L is within the bounds, the physical address is accessed; otherwise, an error occurs. This pair of registers provides effective dynamic relocation and basic memory protection for contiguous memory blocks.
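The combined operation described above is simple enough to state directly in code. This is a minimal sketch of the check-then-add sequence, with illustrative register values; it is not a model of any particular MMU.

```python
# Sketch of relocation/limit translation: P = relocation + L, valid only
# if L < limit; otherwise the MMU raises a trap (addressing error).

def mmu_translate(logical, relocation, limit):
    if not (0 <= logical < limit):
        raise MemoryError("trap: address outside process bounds")
    return relocation + logical

# Process loaded at 0x14000 with a 0x1000-byte logical address space:
print(hex(mmu_translate(0x200, relocation=0x14000, limit=0x1000)))  # 0x14200
```

An access at logical address 0x1000 or beyond would trap instead of returning an address, which is the basic protection the limit register provides.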

Detailed Explanation

Logical addresses are those generated by the CPU during program execution. They allow the program to run independently without needing to know where data is physically stored in RAM; that's the job of the memory management unit (MMU). The physical address is where this data is actually located in memory. The Relocation Register helps the MMU to convert logical addresses into physical addresses, while the Limit Register ensures that a program does not try to access beyond its allotted memory space, triggering an error if it does.

Examples & Analogies

Think of logical addresses as the chapter numbers in a book (what a program sees) while physical addresses are the actual pages in the book (what is stored in memory). The Relocation Register can be compared to the book's index, telling you where to find the chapters, while the Limit Register acts like a bookmark, ensuring you don't continue reading beyond the book's end.

Dynamic Loading and Linking: Optimizing Program Startup and Resource Use


Traditionally, an entire program, including all its libraries, had to be loaded into memory
before execution could begin. Dynamic loading and linking are techniques that improve
memory utilization and program flexibility by deferring parts of the loading and linking
process until they are actually needed.

  • Dynamic Loading:
  • Concept: Instead of loading the entire executable program into main memory at once, dynamic loading loads routines (functions or modules) only when they are actually called or referenced during program execution.
  • Mechanism: The main program contains a small, executable piece of code called a "stub" for each routine that might be dynamically loaded. When a call is made to a dynamically loaded routine:
    • The stub is executed first.
    • The stub checks if the actual routine is already loaded in memory.
    • If the routine is not in memory, the stub requests the operating system's dynamic loader to load it from disk into a free memory region.
    • Once loaded, the stub updates the call instruction to directly point to the newly loaded routine for all future calls, avoiding the stub overhead.
  • Advantages:
    • Efficient Memory Utilization: Only the necessary portions of a program are loaded into memory, which is highly beneficial for large programs with many rarely used features (e.g., error handling routines, specialized tools). This reduces the memory footprint.
    • Faster Program Startup: The program can begin execution without waiting for the entire code base to be loaded, leading to a quicker user experience.
    • Reduced I/O: Less data needs to be read from disk initially.
  • Disadvantages:
    • Increased complexity in the loader and program design.
    • A slight performance overhead for the first call to a dynamically loaded routine due to the loading process.
  • Dynamic Linking:
  • Concept: Linking is the process of resolving references between different parts of a program and external libraries. Dynamic linking postpones the linking of some external library routines (e.g., shared libraries, DLLs) until run time, rather than embedding a copy of the library directly into the executable file at compile time (static linking).
  • Mechanism: Instead of including the actual code for a library function (like printf()) in the executable, the compiler and linker create a small "stub" in the executable for each dynamically linked function. This stub effectively tells the operating system where to find the real function. When a program makes a call to a dynamically linked function:
    • The stub is executed.
    • The stub asks the dynamic linker (a part of the OS or a system library) to locate the required library routine in memory.
    • If the routine is not already in memory (it may be, if another program is already using the shared library), the dynamic linker loads the shared library containing that routine from disk into main memory.
    • The dynamic linker then modifies the program's jump table (or the stub itself) so that future calls to that function directly jump to the loaded library routine's address.
  • Advantages:
    • Reduced Executable File Size: Executable files are much smaller as they don't contain redundant copies of commonly used library routines. This saves disk space.
    • Reduced Memory Consumption: Multiple processes can share a single physical copy of a dynamically linked library in main memory (e.g., libc.so on Linux, kernel32.dll on Windows). This significantly conserves RAM, especially in systems running many instances of common applications.
    • Easier Software Updates: If a bug is fixed or an improvement is made in a shared library, only the library file needs to be updated. All programs using that library will automatically benefit from the update without needing to be recompiled, re-linked, or reinstalled.
  • Disadvantages:
    • Run-time Overhead: There's a slight overhead for resolving the first call to a dynamically linked function.
    • Dependency Issues ("DLL Hell"): If a new version of a shared library is incompatible with older applications, updating the library can break existing programs. This is a common problem in complex software environments.
    • Program might not run if a required shared library is missing or an incorrect version is present.
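The stub mechanism described for dynamic loading can be sketched as follows. This is a simulation under stated assumptions: the "disk load" is faked, and the routine name and table names (`routines`, `loaded`) are invented for illustration.

```python
# Sketch of dynamic loading via stubs: the first call loads the routine
# and rebinds the name, so later calls bypass the stub entirely.

loaded = {}  # routines currently "in memory"

def _load_from_disk(name):
    print(f"loading {name} from disk")   # simulated disk I/O
    return lambda x: x * 2               # stand-in for the routine's code

def make_stub(name):
    def stub(x):
        if name not in loaded:                # not yet in memory?
            loaded[name] = _load_from_disk(name)
        routines[name] = loaded[name]         # rebind: skip stub next time
        return loaded[name](x)
    return stub

routines = {"render": make_stub("render")}

print(routines["render"](21))  # first call: prints the load message, then 42
print(routines["render"](5))   # direct call to the loaded routine: 10
```

The rebinding step mirrors the stub "updating the call instruction" in the text: only the first call pays the loading overhead.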

Detailed Explanation

Dynamic loading and linking enhance the memory efficiency and startup speed of programs. Instead of loading everything into memory upfront, dynamic loading allows portions to be loaded only when needed, optimizing RAM usage and speeding up execution startup. Dynamic linking resolves the references to external libraries during runtime, reducing the program file size and allowing for shared use of libraries by multiple programs, which further conserves memory. However, these techniques introduce complexity into program design and may cause issues if library versions conflict.

Examples & Analogies

Consider dynamic loading and linking like a restaurant. The restaurant (the program) offers many dishes (routines) on its menu, but it only prepares a dish when it is ordered (dynamic loading). For common ingredients, it relies on a shared supplier rather than stocking everything itself (dynamic linking), just as many restaurants can draw on the same supplier. If the supplier changes an ingredient (a new library version), dishes that depended on the old one may no longer turn out right (the "DLL Hell" problem).

Swapping: A Basic Memory Extension Technique


Swapping is a fundamental memory management technique that allows the operating
system to temporarily move an entire process (or its address space) from main memory to
secondary storage (a backing store, usually a hard disk or SSD) and then bring it back
when needed. It is a precursor to more advanced virtual memory concepts.

  • Concept: The primary goal of swapping is to enable a higher degree of multiprogramming than would otherwise be possible given the limited amount of physical RAM. When memory resources are tight, or a process becomes inactive for a long period, the OS can "swap out" that process to free up memory for others.
  • Backing Store (Swap Space): This is a dedicated, fast secondary storage area (often a partition on a disk) used to hold copies of memory images for processes that have been swapped out. It must be large enough to accommodate multiple process images and fast enough for efficient transfer.
  • Mechanism:
  • Selection for Swap Out: The operating system's medium-term scheduler (or swapper daemon) decides which process to swap out. This might be a process that has been inactive for a while, has a low priority, or is currently blocked waiting for a slow I/O operation.
  • Swap Out Operation: The selected process's entire logical address space (all its code, data, stack, heap) is copied from main memory to a designated area on the backing store. The process's state in the process table is updated to "swapped out" or "suspended."
  • Memory Release: The physical memory pages (or contiguous block) previously occupied by the swapped-out process are marked as free and become available for other processes.
  • Selection for Swap In: When the swapped-out process becomes ready to run again, or when the scheduler decides it's its turn, the medium-term scheduler selects it for swap-in.
  • Swap In Operation: The process's memory image is copied back from the backing store into an available contiguous block of physical memory. The process's state is updated, and it is moved back to the ready queue.
  • Advantages:
  • Increased Degree of Multiprogramming: Allows the system to run more processes than the physical memory can hold simultaneously, as inactive ones can be moved out.
  • Memory Extension (Conceptual): Provides the illusion of more physical memory than is actually present, though with performance implications.
  • Disadvantages:
  • High Performance Overhead: Swapping involves significant disk I/O, which is orders of magnitude slower than CPU operations or RAM access. Frequent swapping can lead to "thrashing," where the system spends more time swapping than executing useful work, drastically reducing performance.
  • Increased Context Switch Time: The time required to perform a context switch for a swapped-out process includes the time to swap it back into memory, which is substantial.
  • Contiguous Allocation Dependency: In simple swapping systems with contiguous memory allocation, a swapped-in process needs to find a contiguous block of memory large enough to hold its entire image, which can exacerbate external fragmentation issues.
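The swap-out/swap-in bookkeeping above can be sketched as a toy model. This deliberately ignores the expensive part (disk I/O of the full memory image) and only tracks where each process image lives; the dictionaries stand in for RAM and the backing store.

```python
# Toy sketch of swapping state: an image is either in RAM or on the
# backing store, never both. Real swapping copies the image over disk I/O.

ram = {}            # pid -> memory image (stand-in for a contiguous block)
backing_store = {}  # pid -> swapped-out memory image

def swap_out(pid):
    """Copy the process image to the backing store and free its RAM."""
    backing_store[pid] = ram.pop(pid)

def swap_in(pid):
    """Bring the image back into RAM when the process is scheduled again."""
    ram[pid] = backing_store.pop(pid)

ram[1] = "image-of-process-1"
swap_out(1)
assert 1 not in ram and 1 in backing_store   # RAM freed for other processes
swap_in(1)
assert ram[1] == "image-of-process-1"        # image restored intact
```

Every `swap_out`/`swap_in` pair in a real system costs two full-image disk transfers, which is why frequent swapping degrades into thrashing.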

Detailed Explanation

Swapping is a method that allows the operating system to manage memory effectively by moving inactive processes to disk storage, thus freeing up physical RAM for active processes. When the OS decides to swap a process out, it moves the entire process image to a fast secondary storage (the backing store), making room in memory for other processes. Conversely, when the swapped-out process is needed again, it is brought back into RAM. While swapping enhances the degree of multiprogramming, frequent swapping can cause performance issues, like thrashing, which slows the system significantly.

Examples & Analogies

Think of swapping as a crowded parking lot. If all parking spaces (RAM) are full, the parking attendant (operating system) might ask some cars (inactive processes) to leave by parking them offsite (on a hard disk). When a car is needed again, it is fetched back to fill the next available space. While this makes room for more cars, if too many are going in and out quickly, it can traffic jam the parking lot, making it impossible for cars to move efficiently.

Definitions & Key Concepts

Learn essential terms and foundational ideas that form the basis of the topic.

Key Concepts

  • Address Binding: The critical process that maps logical addresses to physical memory.

  • Dynamic Loading: Loading only necessary parts of programs to save memory and speed up execution.

  • Paging: A method that eliminates external fragmentation by allowing non-contiguous memory allocation.

  • Segmentation: A logical view of memory that matches program structure and allows for variable-sized segments.

  • Fragmentation: Inefficient memory utilization due to the allocation and deallocation of memory blocks.

Examples & Real-Life Applications

See how the concepts apply in real-world scenarios to understand their practical implications.

Examples

  • In dynamic loading, a graphics program may load its rendering engine only when a specific function to render graphics is called, saving memory resources.

  • In paging, if a program uses 10KB and operates with 4KB pages, it will occupy three pages (12KB), resulting in 2KB of internal fragmentation in the partially filled last page.

  • In segmentation, a program can be divided into sections like code, data, stack, etc., reflecting how it operates logically, with each section having its own size.
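The paging example above is just arithmetic, and can be checked with a short helper (a sketch; the function name is ours):

```python
# A program of size S with page size P occupies ceil(S/P) pages and
# wastes (pages * P - S) bytes to internal fragmentation in the last page.
import math

def paging_stats(size, page_size):
    pages = math.ceil(size / page_size)
    internal_fragmentation = pages * page_size - size
    return pages, internal_fragmentation

print(paging_stats(10 * 1024, 4 * 1024))  # (3, 2048): 3 pages, 2KB wasted
```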

Memory Aids

Use mnemonics, acronyms, or visual cues to help remember key information more easily.

🎵 Rhymes Time

  • Memory binding is quite divine, logical to physical, all must align.

📖 Fascinating Stories

  • Imagine a library where books (programs) are only brought out when needed, saving space and effort until someone asks for a specific title (dynamic loading).

🧠 Other Memory Gems

  • Remember the acronym DAMP: Dynamic Loading, Address Binding, Memory Management, Paging to recall the main aspects of memory strategies.

🎯 Super Acronyms

  • Use **PALM**: Paging And Logical Memory, to remember the central role of paging in memory management.

Flash Cards

Review key concepts with flashcards.

Glossary of Terms

Review the Definitions for terms.

  • Term: Address Binding

    Definition:

    The process of mapping logical addresses to physical addresses in memory.

  • Term: Memory Management Unit (MMU)

    Definition:

    Hardware that facilitates the mapping of logical addresses to physical addresses and enforces memory protection.

  • Term: Dynamic Loading

    Definition:

    A technique where routines are loaded into memory only when they are called during execution.

  • Term: Dynamic Linking

    Definition:

    The process of linking library functions at runtime rather than at compile time.

  • Term: Contiguous Memory Allocation

    Definition:

    A memory management method where each process is allocated a single block of contiguous memory.

  • Term: Fragmentation

    Definition:

    The phenomenon where memory is inefficiently used due to small gaps in allocation.

  • Term: Paging

    Definition:

    A memory management technique that divides physical memory into fixed-size blocks called frames, allowing non-contiguous memory allocation.

  • Term: Segmentation

    Definition:

    A memory management strategy that divides memory into variable-sized blocks based on logical program units.