Unit 3: Distributed Resource Management - Complete Notes
1. Distributed Shared Memory (DSM)
Definition:
Distributed Shared Memory (DSM) is a mechanism that allows multiple nodes in a distributed system to share a common memory space, even though they are physically separated.
How It Works:
Memory Abstraction:
DSM provides an abstraction of shared memory, making it appear as if all nodes are accessing the same memory.
Data Distribution:
Data is distributed across nodes, and DSM ensures consistency and coherence.
Advantages:
Ease of Programming:
Developers can write programs as if they are running on a single machine.
Example: Parallel programming using shared memory.
Scalability:
DSM allows adding more nodes to increase memory capacity.
Example: Distributed databases using DSM for caching.
Challenges:
Consistency:
Ensuring all nodes see the same data at the same time.
Example: Two nodes updating the same memory location simultaneously.
Performance Overhead:
Maintaining coherence across nodes can introduce latency.
Example: Frequent updates to shared memory can slow down the system.
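The consistency challenge above can be sketched in a few lines of Python. Two threads stand in for two DSM nodes updating the same shared location; the lock stands in for the coherence protocol that serializes conflicting writes (without it, concurrent increments can interleave and be lost). This is an illustrative sketch, not a real DSM implementation.

```python
import threading

# A toy "DSM page": one shared counter that two simulated nodes update.
# The lock plays the role of a coherence protocol, serializing writes so
# no update is lost to interleaving.
counter = 0
lock = threading.Lock()

def node_worker(increments):
    global counter
    for _ in range(increments):
        with lock:            # coherence stand-in: serialize the update
            counter += 1

threads = [threading.Thread(target=node_worker, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 20000 -- every update from both "nodes" preserved
```

Removing the `with lock:` line typically makes the final count fall short of 20000, which is exactly the simultaneous-update problem DSM coherence must solve.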
Mind Map for Distributed Shared Memory:
Distributed Shared Memory (DSM)
├── Definition
│   └── Shared memory across distributed nodes
├── Advantages
│   ├── Ease of programming (Example: Parallel programming)
│   └── Scalability (Example: Distributed databases)
└── Challenges
    ├── Consistency (Example: Simultaneous updates)
    └── Performance Overhead (Example: Latency in updates)
2. Data-Centric Consistency Models
Definition:
Data-centric consistency models define how updates to shared data are propagated and observed by nodes in a distributed system.
Types of Consistency Models:
Strong Consistency:
All nodes see the same data at the same time.
Example: Financial systems where account balances must be consistent.
Weak Consistency:
Nodes may see stale data for some time.
Example: Social media feeds where slight delays in updates are acceptable.
Eventual Consistency:
All nodes will eventually see the same data if no new updates are made.
Example: DNS (Domain Name System) updates.
Real-World Example:
Amazon DynamoDB: Uses eventual consistency to provide high availability and low latency.
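Eventual consistency can be illustrated with a small last-writer-wins sketch, assuming three replicas and a simple anti-entropy (gossip) pass; the class and function names here are illustrative, not any real database's API.

```python
# Toy eventual consistency: each replica holds a (value, timestamp) pair.
# A write lands on one replica first; an anti-entropy pass then spreads
# the newest value until all replicas agree.

class Replica:
    def __init__(self):
        self.value = None
        self.timestamp = 0

    def write(self, value, timestamp):
        if timestamp > self.timestamp:   # last-writer-wins rule
            self.value, self.timestamp = value, timestamp

def anti_entropy(replicas):
    """Gossip the newest write to every replica."""
    newest = max(replicas, key=lambda r: r.timestamp)
    for r in replicas:
        r.write(newest.value, newest.timestamp)

replicas = [Replica() for _ in range(3)]
replicas[0].write("v1", timestamp=1)     # update reaches one node first
stale = {r.value for r in replicas}      # divergent: some replicas still hold None
anti_entropy(replicas)
converged = {r.value for r in replicas}  # all replicas now hold "v1"
```

The window between the write and the anti-entropy pass is the period during which clients may read stale data; once updates stop, the replicas converge.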
Mind Map for Data-Centric Consistency Models:
Data-Centric Consistency Models
├── Strong Consistency
│   └── Example: Financial systems
├── Weak Consistency
│   └── Example: Social media feeds
└── Eventual Consistency
    └── Example: DNS updates
3. Client-Centric Consistency Models
Definition:
Client-centric consistency models ensure that a client sees a consistent view of the data, even if the data is distributed across multiple servers.
Types of Client-Centric Consistency Models:
Read-Your-Writes Consistency:
A client always sees the updates it has made.
Example: Editing a document and immediately seeing the changes.
Monotonic Reads:
A client never sees older data after seeing newer data.
Example: Reading a news article and always seeing the latest version.
Monotonic Writes:
Writes by a client are applied in the order they were made.
Example: Posting comments on a blog in the correct order.
Real-World Example:
Google Docs: Ensures that users always see their own edits and the latest version of the document.
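One common way to enforce read-your-writes is for the client to remember the version of its last write and reject replicas that have not caught up. The sketch below assumes that design; the `Server`/`Client` classes are hypothetical, not a real system's API.

```python
# Read-your-writes sketch: the client tracks the version number of its
# last write and refuses to read from a replica that is behind it.

class Server:
    def __init__(self):
        self.version = 0
        self.data = ""

class Client:
    def __init__(self):
        self.last_written = 0

    def write(self, server, data):
        server.version += 1
        server.data = data
        self.last_written = server.version   # remember what "my writes" require

    def read(self, server):
        if server.version < self.last_written:
            raise RuntimeError("replica too stale for this client")
        return server.data

primary, lagging = Server(), Server()
client = Client()
client.write(primary, "edit-1")
print(client.read(primary))  # "edit-1" -- the client sees its own write
# client.read(lagging) would raise: that replica has not seen the write yet
```

The same version-tracking idea extends to monotonic reads: the client also records the highest version it has ever read and rejects anything older.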
Mind Map for Client-Centric Consistency Models:
Client-Centric Consistency Models
├── Read-Your-Writes Consistency
│   └── Example: Editing a document
├── Monotonic Reads
│   └── Example: Reading a news article
└── Monotonic Writes
    └── Example: Posting blog comments
4. Distributed File Systems
Definition:
A distributed file system (DFS) allows files to be stored and accessed across multiple nodes in a distributed system.
Key Features:
Transparency:
Users should not be aware that files are distributed.
Example: Accessing files on Google Drive feels like accessing local files.
Scalability:
The system should handle growth in users and files.
Example: Adding more storage servers to a cloud file system.
Fault Tolerance:
The system should continue functioning even if some nodes fail.
Example: Replicating files across multiple servers in Hadoop HDFS.
Real-World Example:
Hadoop HDFS (Hadoop Distributed File System): Used for storing and processing large datasets in distributed environments.
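The fault-tolerance feature can be sketched as block replication in the spirit of HDFS (whose default replication factor is 3): each block is written to several nodes, so a read still succeeds when one node is down. The node names and functions below are illustrative, not the HDFS API.

```python
import random

# Replication sketch: store each block on REPLICATION of the nodes, then
# read it back while pretending one node has failed.
REPLICATION = 3
nodes = {"node-a": {}, "node-b": {}, "node-c": {}, "node-d": {}}

def put_block(block_id, data):
    """Write the block to REPLICATION randomly chosen nodes."""
    for name in random.sample(list(nodes), REPLICATION):
        nodes[name][block_id] = data

def get_block(block_id, failed=()):
    """Return the block from any live node that holds a replica."""
    for name, store in nodes.items():
        if name not in failed and block_id in store:
            return store[block_id]
    raise IOError("all replicas unreachable")

put_block("blk_1", b"payload")
# With 3 replicas on 4 nodes, losing any single node leaves at least
# two live copies, so the read still succeeds.
data = get_block("blk_1", failed=("node-a",))
```

Real systems add the pieces this sketch omits: detecting failed nodes, re-replicating under-replicated blocks, and placing replicas across racks.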
Mind Map for Distributed File Systems:
Distributed File Systems
├── Key Features
│   ├── Transparency (Example: Google Drive)
│   ├── Scalability (Example: Cloud file systems)
│   └── Fault Tolerance (Example: Hadoop HDFS)
└── Real-World Example
    └── Hadoop HDFS
5. Sun NFS (Network File System)
Definition:
Sun NFS is a distributed file system protocol that allows users to access files over a network as if they were local.
How It Works:
Remote File Access:
Files are stored on a remote server but appear as if they are on the local machine.
Mounting:
Remote directories are mounted on the local file system.
Example: Mounting a shared folder on a network drive.
Advantages:
Transparency:
Users can access remote files as if they were local.
Example: Accessing files on a shared network drive.
Scalability:
Multiple users can access files simultaneously.
Example: Shared project folders in an organization.
Challenges:
Performance:
Network latency can affect file access speed.
Example: Slow file access over a congested network.
Security:
Files transmitted over the network can be intercepted.
Mitigation: Encrypting NFS traffic (for example, with Kerberos security in NFSv4) to prevent unauthorized access.
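The performance challenge is why NFS clients cache file data locally: every uncached read pays a network round trip, while repeat reads are answered from memory. The sketch below simulates that trade-off; the 50 ms delay and file paths are arbitrary stand-ins, not NFS parameters.

```python
import time

# Toy model of client-side caching: the first read pays a simulated
# network delay, later reads of the same path are served locally.
NETWORK_DELAY = 0.05  # arbitrary stand-in for one round trip to the server
server_files = {"/export/report.txt": "quarterly numbers"}
cache = {}

def read_remote(path):
    time.sleep(NETWORK_DELAY)            # simulated round trip to the NFS server
    return server_files[path]

def read_cached(path):
    if path not in cache:
        cache[path] = read_remote(path)  # first read pays the network cost
    return cache[path]                   # repeat reads are local

read_cached("/export/report.txt")        # warm the cache (slow path)
start = time.perf_counter()
data = read_cached("/export/report.txt") # served from cache (fast path)
elapsed = time.perf_counter() - start
```

Caching trades latency for freshness: a cached copy can go stale if another client changes the file, which is itself a consistency problem NFS must manage.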
Mind Map for Sun NFS:
Sun NFS (Network File System)
├── How It Works
│   ├── Remote file access
│   └── Mounting (Example: Shared network drive)
├── Advantages
│   ├── Transparency (Example: Accessing remote files)
│   └── Scalability (Example: Shared project folders)
└── Challenges
    ├── Performance (Example: Network latency)
    └── Security (Example: Encrypting NFS traffic)
Summary of Unit 3: Distributed Resource Management
Distributed Shared Memory (DSM) provides a shared memory abstraction across distributed nodes but faces consistency and performance challenges.
Data-Centric Consistency Models (strong, weak, eventual) define how updates are propagated in distributed systems.
Client-Centric Consistency Models ensure that clients see a consistent view of data (e.g., read-your-writes, monotonic reads).
Distributed File Systems (e.g., Hadoop HDFS) enable transparent, scalable, and fault-tolerant file storage.
Sun NFS is a protocol for accessing remote files as if they were local, with advantages like transparency and scalability but challenges like performance and security.
Final Mind Map for Unit 3:
Unit 3: Distributed Resource Management
├── Distributed Shared Memory (DSM)
│   ├── Advantages (Ease of programming, Scalability)
│   └── Challenges (Consistency, Performance overhead)
├── Data-Centric Consistency Models
│   ├── Strong Consistency (Example: Financial systems)
│   ├── Weak Consistency (Example: Social media feeds)
│   └── Eventual Consistency (Example: DNS updates)
├── Client-Centric Consistency Models
│   ├── Read-Your-Writes (Example: Editing a document)
│   ├── Monotonic Reads (Example: Reading a news article)
│   └── Monotonic Writes (Example: Posting blog comments)
├── Distributed File Systems
│   ├── Key Features (Transparency, Scalability, Fault Tolerance)
│   └── Real-World Example (Hadoop HDFS)
└── Sun NFS
    ├── Advantages (Transparency, Scalability)
    └── Challenges (Performance, Security)