Processes, threads, scheduling, synchronization, memory management and deadlock handling.
| Concept | Description |
|---|---|
| Process | A program in execution. Has its own PCB, address space, stack, and heap. |
| Process Control Block (PCB) | Data structure containing process ID, state, PC, registers, memory limits, I/O status, scheduling info. |
| Process State | New → Ready → Running → Terminated; Running → Ready (preempted); Running → Waiting (I/O or event) → Ready |
| Context Switch | Saving the state of the running process and loading the state of the next. Pure overhead — no useful work done. |
| PC (Program Counter) | Points to the next instruction to execute. Saved/restored on context switch. |
| Heavyweight | Processes are heavyweight due to separate address space; creation/switching is expensive. |
| State | Description |
|---|---|
| New | Process being created. |
| Ready | In memory, waiting for CPU. |
| Running | Instructions being executed. |
| Waiting/Blocked | Waiting for I/O or event. |
| Terminated | Finished execution. |
| Cost Factor | Details |
|---|---|
| Time | Depends on hardware support; typically microseconds. |
| Registers | All CPU registers must be saved/restored. |
| TLB Flush | Address space change invalidates TLB entries. |
| Cache | New process data replaces old in CPU cache. |
| OS Involvement | Kernel must run scheduler, update PCBs, manage queues. |
| Algorithm | Preemptive? | Starvation? | Complexity | Key Idea |
|---|---|---|---|---|
| FCFS | No | No | O(1) | First come, first served. Simple but convoy effect. |
| SJF (SPN) | No | Yes | O(n log n) | Shortest job first. Optimal avg waiting time (non-preemptive). |
| SRTF | Yes | Yes | O(n log n) | Shortest remaining time first. Preemptive SJF. |
| Round Robin | Yes | No | O(1) | Time quantum based. Good fairness, quantum = critical parameter. |
| Priority | Both | Yes | O(1) | Based on priority. Aging solves starvation. |
| Multilevel Queue | Yes | Yes | O(1) | Separate queues for different classes (foreground/background). |
| MLFQ | Yes | No | O(1) | Processes move between queues based on behavior. |
FCFS:

| P1 | P2 | P3 | P4 | P5 |
|---|---|---|---|---|
| 0--6 | 6--8 | 8--9 | 9--16 | 16--21 |

SJF:

| P1 | P3 | P2 | P5 | P4 |
|---|---|---|---|---|
| 0--6 | 6--7 | 7--9 | 9--14 | 14--21 |

SRTF:

| P1 | P2 | P3 | P2 | P5 | P1 | P5 | P4 |
|---|---|---|---|---|---|---|---|
| 0--1 | 1--3 | 3--4 | 4--5 | 5--10 | 10--11 | 11--16 | 16--23 |

Round Robin (Q=2):

| P1 | P2 | P3 | P4 | P1 | P5 | P2 | P4 | P5 | P1 | P5 | P4 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0--2 | 2--4 | 4--5 | 5--7 | 7--9 | 9--11 | 11--12 | 12--14 | 14--16 | 16--18 | 18--20 | 20--23 |
| Algorithm | Avg Waiting | Avg Turnaround |
|---|---|---|
| FCFS | 5.80 | 10.00 |
| SJF | 5.00 | 10.00 |
| SRTF | 4.40 | 9.80 |
| Round Robin (Q=2) | 9.40 | 13.40 |
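The averages above come from a worked example whose arrival and burst times are not reproduced here. As a self-contained illustration, this Python sketch (hypothetical bursts, all jobs arriving at t=0) computes the same metrics for FCFS and non-preemptive SJF:

```python
# Hypothetical workload (illustrative only, not the data behind the tables above):
# all five jobs arrive at t=0.
bursts = {"P1": 6, "P2": 2, "P3": 1, "P4": 7, "P5": 5}

def schedule_metrics(order, bursts):
    """Average waiting and turnaround time for a non-preemptive run order."""
    t, wait, turn = 0, [], []
    for name in order:
        wait.append(t)           # time spent in the ready queue before first run
        t += bursts[name]
        turn.append(t)           # completion time minus arrival time (= 0)
    return sum(wait) / len(wait), sum(turn) / len(turn)

fcfs_wait, fcfs_turn = schedule_metrics(list(bursts), bursts)                 # submission order
sjf_wait, sjf_turn = schedule_metrics(sorted(bursts, key=bursts.get), bursts) # shortest first
print(fcfs_wait, sjf_wait)  # SJF never has a higher average wait than FCFS
```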
| Concept | Description |
|---|---|
| Thread | A lightweight unit of execution within a process. Threads share code, data, files, but have their own stack, PC, registers. |
| Thread Control Block (TCB) | Contains thread ID, program counter, register set, stack pointer, scheduling info, thread state. |
| User-Level Thread | Managed entirely by the user-level library. OS is unaware. Fast creation/switching but one blocking call blocks all threads. |
| Kernel-Level Thread | Managed by the OS. Slower creation/switching but blocking one thread does not block the entire process. |
| Multithreading | Multiple threads within a single process executing concurrently. Enables better CPU utilization. |
| Thread Pool | Pre-created set of threads waiting for tasks. Avoids overhead of creating/destroying threads per request. |
| Aspect | Process | Thread |
|---|---|---|
| Creation | Slow (copy address space) | Fast (share address space) |
| Context Switch | Expensive (flush TLB, cache) | Cheap (shared address space) |
| Memory | Separate address space | Shared address space |
| Communication | IPC required (pipes, sockets) | Shared memory directly |
| Failure | Independent | One crash can kill all threads |
| Overhead | Heavyweight | Lightweight |
| Model | User:Kernel | Pros | Cons |
|---|---|---|---|
| 1:1 | 1:1 | True parallelism, blocking is fine | Slow creation, limited threads |
| N:1 | N:1 | Fast creation, portable | No parallelism, one block = all blocked |
| M:N | M:N | Best of both, flexible | Complex to implement |
| Library | Language | Model | Key Features |
|---|---|---|---|
| pthreads | C/C++ | Kernel-level (POSIX) | pthread_create, pthread_join, pthread_mutex, widely used on Linux/Unix |
| Java Threads | Java | Kernel-level | extends Thread or implements Runnable, built-in synchronized, wait/notify |
| std::thread | C++11+ | Kernel-level | std::thread, std::mutex, std::condition_variable, std::async, futures |
| threading module | Python | Kernel-level (GIL) | Thread class, Lock, RLock, Semaphore, but limited by GIL |
| goroutines | Go | M:N (green threads) | Lightweight (2KB stack), multiplexed onto OS threads, channels for communication |
#include <pthread.h>
#include <stdio.h>
int shared_counter = 0;
pthread_mutex_t lock;
void* increment(void* arg) {
int id = *(int*)arg;
for (int i = 0; i < 100000; i++) {
pthread_mutex_lock(&lock);
shared_counter++; // Critical section
pthread_mutex_unlock(&lock);
}
printf("Thread %d done\n", id);
return NULL;
}
int main() {
pthread_t t1, t2;
int id1 = 1, id2 = 2;
pthread_mutex_init(&lock, NULL);
pthread_create(&t1, NULL, increment, &id1);
pthread_create(&t2, NULL, increment, &id2);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
printf("Counter = %d\n", shared_counter); // Expected: 200000
pthread_mutex_destroy(&lock);
return 0;
}
from concurrent.futures import ThreadPoolExecutor
import threading
# ── Thread Pool Pattern ──
def fetch_url(url: str) -> str:
"""Simulate fetching a URL."""
print(f"Thread {threading.current_thread().name} fetching {url}")
# ... actual HTTP request ...
return f"Response from {url}"
urls = ["http://a.com", "http://b.com", "http://c.com"] * 10
# Create a pool with max 4 worker threads
with ThreadPoolExecutor(max_workers=4) as pool:
results = list(pool.map(fetch_url, urls))
# ── Thread Pool Benefits ──
# 1. Avoids thread creation overhead per request
# 2. Limits max concurrent threads (resource control)
# 3. Reuses threads (no create/destroy per task)
# 4. Queue overflow handling (submit returns Future)
Because of Python's GIL (only one thread executes Python bytecode at a time), use multiprocessing for CPU-bound work and threading for I/O-bound work. Go's goroutines and Java's virtual threads (Loom) avoid this issue entirely.
| Concept | Description |
|---|---|
| Critical Section | Code segment where shared resources are accessed. Only one thread can execute in its critical section at a time. |
| Race Condition | Output depends on the order of thread execution. Occurs when multiple threads access shared data without synchronization. |
| Mutual Exclusion | If thread Ti is in critical section, no other thread can be in its critical section. |
| Progress | If no thread is in its critical section and some want to enter, only threads not in their remainder section take part in deciding who enters next, and the decision cannot be postponed indefinitely. |
| Bounded Waiting | There exists a bound on how many times other threads enter after a thread requests. No indefinite starvation. |
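A minimal Python sketch of the race-condition problem and its fix: two threads perform a read-modify-write on a shared counter, and a Lock around the critical section makes the result deterministic (without it, updates can be lost):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: read, add, write back
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # with the lock this is always 200000; without it, often less
```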
| Solution | Mutual Exclusion | Progress | Bounded Wait | Busy Wait? |
|---|---|---|---|---|
| Peterson's Algorithm | Yes | Yes | Yes | Yes (spinlock) |
| Test-and-Set (TSL) | Yes | Yes | No | Yes (hardware) |
| Compare-and-Swap (CAS) | Yes | Yes | No | Yes (hardware) |
| Mutex Lock | Yes | Yes | Yes | No (blocks) |
| Semaphore | Yes | Yes | Yes | No (blocks) |
| Monitor | Yes | Yes | Yes | No (blocks) |
// Peterson's Solution (Two Processes) — Software Solution
// Uses shared variables: flag[2] and turn
int flag[2] = {0, 0};
int turn;
// Process i (i = 0 or 1, j = 1 - i)
void process(int i) {
int j = 1 - i; // index of the other process
while (1) {
flag[i] = 1; // I want to enter
turn = j; // Give turn to other (courtesy)
while (flag[j] && turn == j)
; // Busy wait (spinlock)
// ── CRITICAL SECTION ──
// ... access shared resource ...
flag[i] = 0; // I'm done
// ── REMAINDER SECTION ──
}
}
// Works on modern hardware only if memory accesses are sequential.
// May fail on out-of-order CPUs without memory barriers.
| Feature | Mutex | Semaphore |
|---|---|---|
| Ownership | Lock owner must unlock | No ownership concept |
| Type | Binary (locked/unlocked) | Binary or Counting |
| Value Range | 0 or 1 | 0 to N (any integer) |
| Purpose | Mutual exclusion | Synchronization / resource counting |
| Release | Only owner can unlock | Any thread can signal |
| Priority Inversion | Possible (priority inheritance helps, since the owner is known) | Possible (no owner, so priority inheritance is not applicable) |
| Deadlock Risk | If lock not released | If used incorrectly (missed or extra signals) |
| Operation | Description | Effect |
|---|---|---|
| wait(S) / P(S) | Decrement semaphore; block if S < 0 | S = S - 1 |
| signal(S) / V(S) | Increment semaphore; wake a blocked thread | S = S + 1 |
| Binary Semaphore | S = 0 or 1. Used as mutex. | Same as mutex but no ownership |
| Counting Semaphore | S = 0 to N. Controls access to N resources. | Tracks available resources |
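A sketch of counting-semaphore semantics using Python's threading.Semaphore (the resource itself is simulated; the names are illustrative): at most N threads hold a slot at once.

```python
import threading

pool_slots = threading.Semaphore(3)   # counting semaphore: 3 identical resources
in_use = 0
peak = 0
guard = threading.Lock()              # protects the two counters below

def use_resource():
    global in_use, peak
    with pool_slots:                  # wait(S): blocks when all 3 slots are taken
        with guard:
            in_use += 1
            peak = max(peak, in_use)  # record the most slots ever held at once
        # ... use the resource ...
        with guard:
            in_use -= 1
    # signal(S) happens automatically when the with-block exits

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, no matter how the threads interleave
```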
// ── Producer-Consumer (Bounded Buffer) Using Semaphores ──
#define BUFFER_SIZE 5
int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t empty; // counts empty slots (initially BUFFER_SIZE)
sem_t full; // counts filled slots (initially 0)
pthread_mutex_t mutex;
void* producer(void* arg) {
int item;
while (1) {
item = produce_item(); // generate item
sem_wait(&empty); // wait for empty slot
pthread_mutex_lock(&mutex); // enter critical section
buffer[in] = item;
in = (in + 1) % BUFFER_SIZE;
pthread_mutex_unlock(&mutex); // exit critical section
sem_post(&full); // signal: one more filled
}
}
void* consumer(void* arg) {
int item;
while (1) {
sem_wait(&full); // wait for filled slot
pthread_mutex_lock(&mutex); // enter critical section
item = buffer[out];
out = (out + 1) % BUFFER_SIZE;
pthread_mutex_unlock(&mutex); // exit critical section
sem_post(&empty); // signal: one more empty
consume_item(item);
}
}
// Initialize: sem_init(&empty, 0, BUFFER_SIZE);
// sem_init(&full, 0, 0);
// ── Readers-Writers Problem (Readers Preference) ──
int read_count = 0;
sem_t mutex; // protects read_count
sem_t rw_mutex; // protects shared data (writer lock)
void* reader(void* arg) {
while (1) {
sem_wait(&mutex);
read_count++;
if (read_count == 1)
sem_wait(&rw_mutex); // First reader locks
sem_post(&mutex);
// ── READING ──
read_data();
sem_wait(&mutex);
read_count--;
if (read_count == 0)
sem_post(&rw_mutex); // Last reader unlocks
sem_post(&mutex);
}
}
void* writer(void* arg) {
while (1) {
sem_wait(&rw_mutex); // Exclusive access
// ── WRITING ──
write_data();
sem_post(&rw_mutex);
}
}
// Problem: Readers preference can starve writers.
// Solution: Writers preference or fair (read-write lock).
// ── Dining Philosophers (Semaphore Solution) ──
#define N 5
sem_t chopstick[N];
void init() {
for (int i = 0; i < N; i++)
sem_init(&chopstick[i], 0, 1); // each fork available
}
void* philosopher(void* arg) {
int i = *(int*)arg;
int left = i;
int right = (i + 1) % N;
while (1) {
think();
// Pick up chopsticks
sem_wait(&chopstick[left]);
sem_wait(&chopstick[right]);
eat();
// Put down chopsticks
sem_post(&chopstick[left]);
sem_post(&chopstick[right]);
}
}
// DEADLOCK possible if all pick left simultaneously!
// Solutions:
// 1. Allow at most 4 philosophers to sit (sem_t room = 4)
// 2. Odd philosopher picks left first, even picks right first
// 3. Pick both chopsticks atomically (break deadlock condition)
| Condition | Description | Break Strategy |
|---|---|---|
| Mutual Exclusion | At least one resource is non-sharable (only one process at a time). | Use sharable resources where possible (e.g., read-only files). |
| Hold and Wait | Process holds at least one resource and is waiting for another. | Acquire all resources at once before execution. |
| No Preemption | Resources cannot be forcibly taken from a process. | Allow preemption: if a process can't get a resource, release what it holds. |
| Circular Wait | Circular chain of processes, each waiting for a resource held by the next. | Assign a global ordering to resource types; acquire in ascending order only. |
| Condition Addressed | Method |
|---|---|
| Mutual Exclusion | Use spooling for printers. Not always possible (e.g., mutex locks). |
| Hold and Wait | Request all resources at once. Low utilization. |
| No Preemption | If a process can't get a resource, release all held resources. |
| Circular Wait | Number resources globally. Acquire in ascending order only. |
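The circular-wait rule can be sketched in a few lines (Python, hypothetical two-lock example): every thread acquires the locks in the same global order, so no cycle of waits can form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
# Global ordering: every thread acquires lock_a before lock_b.
# Deadlock would require one thread holding lock_b while waiting on
# lock_a, which this discipline makes impossible.

def ordered_work():
    with lock_a:          # always first
        with lock_b:      # always second
            pass          # critical section using both resources

t1 = threading.Thread(target=ordered_work)
t2 = threading.Thread(target=ordered_work)
t1.start(); t2.start()
t1.join(); t2.join()      # completes: no circular wait is possible
```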
| Aspect | Prevention | Avoidance |
|---|---|---|
| Approach | Ensure at least one condition never holds | Dynamically check if granting a request leads to deadlock |
| Knowledge Needed | None (conservative) | Max resource needs of each process in advance |
| Overhead | Low | High (runtime checks) |
| Utilization | Low (restrictive) | Higher (more flexible) |
| Algorithm | N/A | Banker's Algorithm |
| Feasibility | Always possible | Requires knowing maximum claims a priori |
Available: (3,3,2)
Allocation: P0=(0,1,0) P1=(2,0,0) P2=(3,0,2) P3=(2,1,1) P4=(0,0,2)
Maximum: P0=(7,5,3) P1=(3,2,2) P2=(9,0,2) P3=(2,2,2) P4=(4,3,3)
Need = Max - Allocation: P0=(7,4,3) P1=(1,2,2) P2=(6,0,0) P3=(0,1,1) P4=(4,3,1)
| Step | Process | Work (Available + Allocation) | Available After |
|---|---|---|---|
| 1 | P1 | (3,3,2) + (2,0,0) = (5,3,2) | (5,3,2) |
| 2 | P3 | (5,3,2) + (2,1,1) = (7,4,3) | (7,4,3) |
| 3 | P4 | (7,4,3) + (0,0,2) = (7,4,5) | (7,4,5) |
| 4 | P0 | (7,4,5) + (0,1,0) = (7,5,5) | (7,5,5) |
| 5 | P2 | (7,5,5) + (3,0,2) = (10,5,7) | (10,5,7) |
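A Python sketch of the safety algorithm on the data above. Safe sequences are not unique: the table shows P1, P3, P4, P0, P2, while a lowest-index-first scan finds P1, P3, P0, P2, P4; both are valid.

```python
# Data from the example above (3 resource types).
available = [3, 3, 2]
allocation = {"P0": [0,1,0], "P1": [2,0,0], "P2": [3,0,2], "P3": [2,1,1], "P4": [0,0,2]}
need       = {"P0": [7,4,3], "P1": [1,2,2], "P2": [6,0,0], "P3": [0,1,1], "P4": [4,3,1]}

def safe_sequence(available, allocation, need):
    """Return a safe sequence, or None if the state is unsafe."""
    work, seq, done = list(available), [], set()
    while len(done) < len(allocation):
        for p in allocation:
            if p not in done and all(n <= w for n, w in zip(need[p], work)):
                # p can run to completion, then releases its allocation
                work = [w + a for w, a in zip(work, allocation[p])]
                seq.append(p)
                done.add(p)
                break
        else:
            return None   # no runnable process remains: unsafe state
    return seq

seq = safe_sequence(available, allocation, need)
print(seq)
```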
| Component | Meaning |
|---|---|
| Process Node (circle) | Represents a process. |
| Resource Node (square) | Represents a resource type. Dots inside = instances. |
| Request Edge (P → R) | Process is waiting for that resource. |
| Assignment Edge (R → P) | Resource instance is allocated to that process. |
| Cycle | If each resource has only 1 instance: cycle = deadlock. Multiple instances: cycle is necessary but not sufficient. |
| Strategy | Description | Trade-off |
|---|---|---|
| Process Termination | Abort all deadlocked processes (drastic) or abort one at a time until cycle broken. | Data loss, rollback needed. |
| Resource Preemption | Forcibly take resources from processes and give to waiting processes. | Need to rollback process to safe state. |
| Checkpoint/Rollback | Periodically save process state. On deadlock, rollback to last checkpoint. | Overhead of checkpointing. |
| Kill the youngest | Terminate the process that has done the least work, minimizing wasted computation. | The same process may be chosen repeatedly (starvation). |
| Concept | Description |
|---|---|
| Logical Address | Address generated by the CPU (program view). Also called virtual address. |
| Physical Address | Actual address in RAM (hardware view). The MMU translates logical to physical. |
| MMU | Memory Management Unit. Hardware that translates logical addresses to physical at runtime. |
| Contiguous Allocation | Entire process loaded in one contiguous block of memory. |
| Fragmentation | Wasted memory space. Internal (within a block) vs External (between blocks). |
| Paging | Non-contiguous allocation. Divide physical memory into fixed-size frames, logical into same-size pages. |
| Segmentation | Divide memory into variable-size segments based on logical divisions (code, data, stack). |
| Algorithm | Description | Pros | Cons |
|---|---|---|---|
| First Fit | Allocate the first hole that is big enough. | Fast, simple. | Leaves small fragments at the start. |
| Best Fit | Allocate the smallest hole that is big enough. | Minimizes wasted space. | Slow, creates tiny unusable fragments. |
| Worst Fit | Allocate the largest available hole. | Leaves large remaining holes. | Slow, wastes large blocks. |
| Term | Description |
|---|---|
| Page | Fixed-size block of logical memory (typically 4 KB). |
| Frame | Fixed-size block of physical memory (same size as page). |
| Page Table | Maps page numbers to frame numbers. One per process. |
| Page Table Base Register (PTBR) | Points to the page table of the current process. |
| Page Table Length Register (PTLR) | Stores the size of the page table. |
| Valid/Invalid Bit | Valid = page is in memory. Invalid = not yet loaded or not in process address space. |
| Dirty Bit | Set when page is modified. Used to avoid writing clean pages back to disk. |
| Reference Bit | Set when page is accessed. Used in page replacement algorithms. |
Logical Address = (page_number, page_offset)
Given a logical address and page size:
page_number = logical_address / page_size
page_offset = logical_address % page_size
Physical Address = (frame_number, page_offset)
physical_address = frame_number * page_size + page_offset
Example: Logical address = 13200, Page size = 4 KB = 4096
Page number = 13200 / 4096 = 3, Offset = 13200 % 4096 = 912
If page table maps page 3 -> frame 7:
Physical = 7 * 4096 + 912 = 29664
| Aspect | Details |
|---|---|
| What | High-speed cache for page table entries. Small, fast, hardware-managed. |
| Hit Ratio | Typically 90-99%. TLB hit = no memory access for translation. |
| Miss Penalty | Extra memory access to walk page table. Can be reduced with multi-level page tables. |
| Effective Access Time | EAT = hit_ratio * (TLB access + memory access) + miss_ratio * (TLB access + memory access * (levels + 1)) |
| Associativity | Fully associative (any entry maps any page) vs Set-associative. |
| Flush | TLB flushed on context switch (or tagged with process ID to avoid flush). |
| Example EAT | TLB access = 10ns, Memory = 100ns, Hit = 80%: EAT = 0.8*(10+100) + 0.2*(10+200) = 130ns |
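A sketch of the EAT formula in Python (`levels` is the number of page-table levels walked on a TLB miss), reproducing the example row:

```python
def eat(hit_ratio, tlb_ns, mem_ns, levels=1):
    """Effective access time with a `levels`-deep page-table walk on TLB miss."""
    hit = tlb_ns + mem_ns                    # translation from TLB + one data access
    miss = tlb_ns + mem_ns * (levels + 1)    # walk `levels` tables, then the data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.80, 10, 100))  # the example: 0.8*(10+100) + 0.2*(10+200) = 130 ns
```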
Why? Single-level page table for 32-bit address space with 4 KB pages = 2^20 entries x 4 bytes = 4 MB per process. With 64-bit address space, this is impossibly large. Multi-level paging reduces the table size by only allocating entries for used regions.
| Level | Address Bits Used | Entries | Table Size |
|---|---|---|---|
| 2-Level (32-bit) | 10 + 10 + 12 (offset) | 1024 per level | 4 KB per inner table (4-byte entries) |
| 3-Level (39-bit VA) | 9 + 9 + 9 + 12 | 512 per level | 4 KB per table (8-byte entries) |
| 4-Level (x86-64, 48-bit VA) | 9 + 9 + 9 + 9 + 12 | 512 per level | 4 KB per table (8-byte PML4 entries) |
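A small sketch of how the MMU slices a 32-bit address under the 10 + 10 + 12 split (the sample address is arbitrary):

```python
def split_32bit(vaddr):
    """Split a 32-bit virtual address for 2-level paging (10 + 10 + 12 bits)."""
    offset = vaddr & 0xFFF           # low 12 bits: byte offset within the 4 KB page
    table = (vaddr >> 12) & 0x3FF    # next 10 bits: index into the page table
    directory = vaddr >> 22          # top 10 bits: index into the page directory
    return directory, table, offset

print(split_32bit(0x12345678))  # (72, 837, 1656)
```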
| Aspect | Paging | Segmentation |
|---|---|---|
| Size | Fixed (e.g., 4 KB) | Variable (based on logical division) |
| Fragmentation | Internal only | External only |
| User View | No relation to program structure | Matches program structure (code, data, stack) |
| Sharing | Difficult (arbitrary page boundaries) | Easy (share a whole segment) |
| Protection | At page level | At segment level (per-segment permissions) |
| Hardware | Simple (just page number + offset) | Complex (segment table + offset + limit check) |
| Algorithm | Description | Optimal? | Practical? | Belady's Anomaly? |
|---|---|---|---|---|
| Optimal (OPT/MIN) | Replace page not used for the longest time in future. | Yes (lowest faults) | No (needs future) | No |
| FIFO | Replace the oldest page in memory (first-in, first-out). | No | Yes | Yes! |
| LRU (Least Recently Used) | Replace page not used for the longest time in past. | Near-optimal | Yes (costly) | No |
| Clock (Second Chance) | Circular queue; use reference bit to give pages a second chance. | No | Yes | No |
| LFU (Least Frequently Used) | Replace page with the lowest access count. | No | Yes | No |
| MFU (Most Frequently Used) | Replace page with the highest access count. | No | Yes | No |
| FIFO (3 frames) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ref | 7 | 0 | 1 | 2 | 0 | 3 | 0 | 4 | 2 | 3 | 0 | 3 | 2 | 1 | 2 | 0 | 1 | 7 | 0 | 1 |
| F1 | 7 | 7 | 7 | 2 | 2 | 2 | 2 | 4 | 4 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 | 7 | 7 |
| F2 | | 0 | 0 | 0 | 0 | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
| F3 | | | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 3 | 3 | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 1 |
| Hit? | F | F | F | F | H | F | F | F | F | F | F | H | H | F | F | H | H | F | F | F |
| Optimal (3 frames) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ref | 7 | 0 | 1 | 2 | 0 | 3 | 0 | 4 | 2 | 3 | 0 | 3 | 2 | 1 | 2 | 0 | 1 | 7 | 0 | 1 |
| F1 | 7 | 7 | 7 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 7 | 7 | 7 |
| F2 | | 0 | 0 | 0 | 0 | 0 | 0 | 4 | 4 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F3 | | | 1 | 1 | 1 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Hit? | F | F | F | F | H | F | H | F | H | H | F | H | H | F | H | H | H | F | H | H |
| LRU (3 frames) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ref | 7 | 0 | 1 | 2 | 0 | 3 | 0 | 4 | 2 | 3 | 0 | 3 | 2 | 1 | 2 | 0 | 1 | 7 | 0 | 1 |
| F1 | 7 | 7 | 7 | 2 | 2 | 2 | 2 | 4 | 4 | 4 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 3 | 3 | 3 | 3 | 3 | 0 | 0 | 0 | 0 | 0 |
| F3 | | | 1 | 1 | 1 | 3 | 3 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 7 | 7 | 7 |
| Hit? | F | F | F | F | H | F | H | F | F | F | F | H | H | F | H | F | H | F | H | H |
| Algorithm | 3 Frames | 4 Frames | Belady's? |
|---|---|---|---|
| FIFO | 15 faults | 14 faults | Yes (more frames can increase faults!) |
| Optimal | 9 faults | 8 faults | No |
| LRU | 12 faults | 10 faults | No |
Belady's Anomaly: For FIFO, increasing the number of frames can sometimes increase page faults. This does not happen with Optimal or LRU (both are stack algorithms). Reference string: 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames = 9 faults, with 4 frames = 10 faults.
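The fault counts above can be reproduced with short simulators. A Python sketch of FIFO and LRU (OPT omitted for brevity), checked against both reference strings:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))  # evict the oldest arrival
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement (dict order = recency)."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)             # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)    # evict the least recently used
            mem[p] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 15 and 12, as in the summary

belady = [1,2,3,4,1,2,5,1,2,3,4,5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 vs 10: Belady's anomaly
```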
| Concept | Description |
|---|---|
| File | Named collection of related information stored on secondary storage. |
| Directory | A special file that maps file names to their metadata and data blocks. |
| File Allocation Table (FAT) | Table with one entry per disk block, linking blocks together (used in FAT16/32/exFAT). |
| Inode | Data structure storing file metadata (size, permissions, timestamps, block pointers). Used in Unix/Linux. |
| Mount Point | Directory where a file system is attached to the existing directory tree. |
| Journaling | Writing changes to a log (journal) before committing to disk. Prevents corruption on crash (ext4, NTFS). |
| Method | Description | Access Type | External Frag? | Pros | Cons |
|---|---|---|---|---|---|
| Contiguous | Each file occupies a set of contiguous blocks. | Sequential + Direct | Yes | Fast access, simple. | Fragmentation, hard to grow. |
| Linked | Each block has a pointer to the next block. Directory stores pointers to the first (and usually last) block. | Sequential only | No | No fragmentation, easy to grow. | No random access, pointer overhead. |
| Indexed | Index block contains pointers to all data blocks. Directory stores index block number. | Sequential + Direct | No | Direct access, no fragmentation. | Index block size limit. |
| Multilevel Indexed | Index block points to other index blocks (like multi-level page tables). | Direct | No | Supports large files. | Extra indirection overhead. |
| Method | Description | Best For |
|---|---|---|
| Bit Vector (Bitmap) | Each block is represented by 1 bit (0=free, 1=allocated). Simple, fast. | General purpose. Efficient for finding contiguous free blocks. |
| Linked List | Free blocks linked together. Each free block stores pointer to next free block. | Small disks. Slow to traverse. |
| Grouping | First free block stores addresses of n free blocks; last of those stores n more, etc. | Medium-sized disks. |
| Counting | Keep address of first free block + count of contiguous free blocks after it. | Disks with mostly contiguous free space. |
| Structure | Description | Pros | Cons |
|---|---|---|---|
| Single-Level | All files in one directory. No subdirectories. | Simple. | Naming conflicts, unscalable. |
| Two-Level | Separate directory per user (UFD) + master file directory (MFD). | Naming isolation between users. | No grouping within a user. |
| Tree-Structured | Hierarchical directories. Standard in modern OS (Unix, Windows). | Logical grouping, easy navigation. | Can lead to deep paths. |
| Acyclic Graph | Allows shared files/directories (hard links). No cycles. | Sharing without duplication. | Complex deletion (reference counting). |
| General Graph | Allows cycles (symbolic links). | True sharing. | Garbage collection needed, traversals can loop. |
| RAID Level | Strips | Mirrors | Parity | Min Disks | Fault Tolerance | Use Case |
|---|---|---|---|---|---|---|
| RAID 0 | Yes | No | No | 2 | None (faster, riskier) | Performance, scratch space |
| RAID 1 | No | Yes | No | 2 | 1 disk failure | Critical data, OS boot |
| RAID 5 | Yes | No | Distributed | 3 | 1 disk failure | Balanced performance + redundancy |
| RAID 6 | Yes | No | Double distributed | 4 | 2 disk failures | High-value data, enterprise |
| RAID 10 (1+0) | Yes | Yes | No | 4 | 1 disk per mirror | High performance + redundancy |
# ── File System Commands ──
df -hT # Disk free space (human-readable, with type)
du -sh /path # Disk usage of directory (summary, human-readable)
ls -li # List files with inode numbers
stat /path/to/file # Show inode information (permissions, timestamps, blocks)
fsck /dev/sda1 # Check and repair file system (unmount first!)
mkfs.ext4 /dev/sdb1 # Create ext4 file system
mount /dev/sdb1 /mnt # Mount a file system
umount /mnt # Unmount
tune2fs -l /dev/sda1 # Show ext4 superblock info
fdisk -l # List disk partitions
lsblk # List block devices (tree view)
dd if=/dev/zero of=/file bs=1M count=100 # Create 100MB test file
# ── Inode Structure (Linux) ──
# Each inode stores:
# - File type (regular, directory, symlink, device)
# - File permissions (rwxrwxrwx)
# - Owner UID and Group GID
# - File size (bytes)
# - Timestamps: atime (access), mtime (modify), ctime (change)
# - Number of hard links
# - Block pointers (15 total):
# 12 direct blocks
# 1 single indirect block
# 1 double indirect block
# 1 triple indirect block
| Concept | Description |
|---|---|
| Seek Time | Time for the disk arm to move to the correct track/cylinder. Dominant cost in disk access. |
| Rotational Latency | Time for the disk to rotate the desired sector under the read/write head. |
| Transfer Time | Time to actually read/write data once the head is positioned. Usually small. |
| Disk Access Time | Seek Time + Rotational Latency + Transfer Time. |
| Disk Bandwidth | Total bytes transferred divided by total time between first request and last completion. |
| Algorithm | Direction | Fair? | Starvation? | Overhead |
|---|---|---|---|---|
| FCFS | Request order | Yes | No | None |
| SSTF | Nearest track | No | Yes | None (greedy) |
| SCAN (Elevator) | One direction, then reverse | Fair | No (for interior) | None |
| C-SCAN | One direction, jump to start | Very fair | No | Small (jump back) |
| LOOK | Like SCAN but only goes to last request | Fair | No | None |
| C-LOOK | Like C-SCAN but only goes to last request | Very fair | No | None |
Request queue: 98, 183, 37, 122, 14, 124, 65, 67; head starts at cylinder 53 (cylinders 0--199).
FCFS order: 53 -> 98 -> 183 -> 37 -> 122 -> 14 -> 124 -> 65 -> 67
Movements: 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640
SSTF order: 53 -> 65 -> 67 -> 37 -> 14 -> 98 -> 122 -> 124 -> 183
Movements: 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236
Note: Greedy — picks nearest. Much better than FCFS but can starve distant requests.
Direction: initially toward higher tracks.
SCAN order: 53 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183 -> 199 (end) -> 37 -> 14
Movements: 12 + 2 + 31 + 24 + 2 + 59 + 16 + 162 + 23 = 331
Note: SCAN goes all the way to the end of the disk before reversing. LOOK would stop at 183 (the last request).
Moves in one direction only, then jumps to start.
C-SCAN order: 53 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183 -> 199 (end) -> 0 (start) -> 14 -> 37
Movements: 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 23 = 382
Note: More uniform wait times than SCAN. C-LOOK would stop at 183 instead of going to 199, and jump to 14 instead of 0.
Like SCAN but reverses at the last request, not the disk end.
LOOK order: 53 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183 -> 37 -> 14
Movements: 12 + 2 + 31 + 24 + 2 + 59 + 146 + 23 = 299
Like C-SCAN but reverses at the last request.
C-LOOK order: 53 -> 65 -> 67 -> 98 -> 122 -> 124 -> 183 -> 14 -> 37
Movements: 12 + 2 + 31 + 24 + 2 + 59 + 169 + 23 = 322
| Algorithm | Total Head Movement | Ranking |
|---|---|---|
| FCFS | 640 | Worst — random access pattern |
| SSTF | 236 | Best here but unfair (starvation) |
| SCAN | 331 | Fair, goes to disk end |
| C-SCAN | 382 | Fair, uniform wait times |
| LOOK | 299 | Better than SCAN (no unnecessary travel) |
| C-LOOK | 322 | Better than C-SCAN (no unnecessary travel) |
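The FCFS and SSTF totals can be verified with a few lines of Python; the queue and starting head position are the ones from the example above:

```python
def total_movement(start, order):
    """Sum of absolute head movements for a given service order."""
    pos, total = start, 0
    for track in order:
        total += abs(track - pos)
        pos = track
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # the request queue from the example
head = 53

fcfs = total_movement(head, queue)            # service in arrival order

def sstf(start, pending):
    """Greedy SSTF: always service the nearest pending request next."""
    pos, order, pending = start, [], list(pending)
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        order.append(nxt)
        pending.remove(nxt)
        pos = nxt
    return order

sstf_total = total_movement(head, sstf(head, queue))
print(fcfs, sstf_total)  # 640 and 236, matching the summary table
```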
A process is an independent program in execution with its own address space, PCB, and system resources. A thread is a lightweight sub-process that shares the process's address space (code, data, heap, files) but has its own stack, registers, and program counter. Process creation is expensive (copy-on-write, separate page tables), while thread creation is fast. Context switch between processes is slower (TLB flush, cache invalidation) vs. threads (same address space). Communication between processes requires IPC (pipes, shared memory, sockets); threads communicate via shared memory directly (but need synchronization). A thread crash can kill the entire process; a process crash is isolated.
Deadlock is a state where a set of processes are permanently blocked because each is waiting for a resource held by another in the set. All four necessary conditions (mutual exclusion, hold-and-wait, no preemption, circular wait) must hold simultaneously. Starvation is when a process is indefinitely prevented from making progress because other processes always get priority (e.g., SJF scheduling can starve long jobs). The key difference: deadlock requires a cycle and all processes are stuck; starvation is about unfair scheduling. Solutions: deadlock (prevention, avoidance, detection/recovery); starvation (aging, fair scheduling algorithms like Round Robin).
Virtual memory is a memory management technique that creates the illusion of a very large, contiguous address space for each process, even when physical RAM is limited. It allows execution of processes larger than available physical memory. Implementation: the OS divides logical memory into pages and physical memory into frames. Only the pages currently needed are loaded into frames; the rest stay on disk (swap space). The page table maps logical pages to physical frames. When a process accesses a page not in memory (page fault), the OS loads it from disk into a frame (potentially evicting another page first using a replacement algorithm like LRU). Demand paging loads pages only when they are accessed, never proactively. This decouples the program's view of memory from physical reality, enabling efficient memory utilization, process isolation, and memory overcommitment (total virtual memory can exceed physical RAM).
Paging divides memory into fixed-size blocks (pages/frames), while segmentation divides it into variable-size logical units (code, data, stack, heap). Paging eliminates external fragmentation (only internal, from the last page), while segmentation eliminates internal fragmentation but can cause external fragmentation. Paging is transparent to the programmer (compiler/linker handle it); segmentation reflects the program's logical structure. Paging address: page number + offset (simple arithmetic). Segmentation address: segment number + offset + limit check (bounds checking required). Modern systems (x86-64) combine both: segments are divided into pages. Linux and modern OSes effectively use flat paging with segmentation disabled for simplicity.
FCFS: Simple, fair (FIFO), but convoy effect (short jobs stuck behind long ones). Good for batch systems.
SJF: Optimal average waiting time (proven), but requires knowing burst time in advance (impractical). Can starve long jobs. Used when burst times are predictable.
SRTF: Preemptive SJF. Better average waiting time than SJF. High context switch overhead.
Round Robin: Fair, time-sharing. Quantum is critical: too small = excessive overhead, too large = degenerates to FCFS. Rule of thumb: pick the quantum so that roughly 80% of CPU bursts finish within it. Good for interactive systems.
Priority: Useful for real-time systems. Aging prevents starvation. Can cause indefinite blocking.
MLFQ: Best for general-purpose OS (Unix, Linux). Multiple feedback queues with decreasing priority and increasing quantum. I/O-bound processes stay in high-priority queues (they block often, releasing CPU quickly). CPU-bound processes sink to lower-priority queues with longer quanta.
A mutex (mutual exclusion lock) is a locking mechanism with ownership — the thread that acquires the lock must be the one to release it. It is always binary (locked/unlocked). Used to protect critical sections.
A semaphore is a signaling mechanism with no ownership — any thread can signal (increment). It can be binary (0 or 1, similar to mutex but without ownership) or counting (0 to N, to manage multiple instances of a resource). Used for both synchronization and resource counting.
Key differences: (1) Mutex has ownership, semaphore does not. (2) Mutex is always binary; semaphore can count. (3) Mutex is for mutual exclusion; semaphore is for synchronization. (4) A mutex can have priority inheritance to avoid priority inversion; semaphores typically don't. (5) Releasing an unowned semaphore is valid; releasing an unowned mutex is undefined behavior. (6) Semaphores can be used across processes (named semaphores); mutexes are typically intra-process (though shared mutexes exist).