Distributed Computing Seminar Lecture 1: Introduction to Distributed  Computing & Systems Background Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet Summer 2007 Except where otherwise noted, the contents of this presentation are © Copyright 2007 University of Washington and are licensed under the Creative Commons Attribution 2.5 License.
Course Overview 5 lectures: 1 Introduction, 2 Technical Side: MapReduce & GFS, 2 Theoretical: Algorithms for distributed computing. Readings + questions nightly. Readings: http://code.google.com/edu/content/submissions/mapreduce-minilecture/listing.html Questions: http://code.google.com/edu/content/submissions/mapreduce-minilecture/MapReduceMiniSeriesReadingQuestions.doc
Outline Introduction to Distributed Computing Parallel vs. Distributed Computing History of Distributed Computing Parallelization and Synchronization Networking Basics
Computer Speedup Moore’s Law: “The density of transistors on a chip doubles every 18 months, for the same cost” (1965) Image: Tom’s Hardware, not subject to the Creative Commons license applicable to the rest of this work.
Scope of problems What can you do with 1 computer? What can you do with 100 computers? What can you do with an entire data center?
Distributed problems Rendering multiple frames of high-quality animation Image: DreamWorks Animation and not subject to the Creative Commons license applicable to the rest of this work.
Distributed problems Simulating several hundred or thousand characters. Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema; neither image is subject to the Creative Commons license applicable to the rest of this work.
Distributed problems Indexing the web (Google) Simulating an Internet-sized network for networking experiments (PlanetLab) Speeding up content delivery (Akamai) What is the key attribute that all these examples have in common?
Parallel vs. Distributed Parallel computing can mean: Vector processing of data Multiple CPUs in a single computer Distributed computing is multiple CPUs across many computers over the network
A Brief History… 1975-85 Parallel computing was favored in the early years Primarily vector-based at first Gradually more thread-based parallelism was introduced Image:  Computer Pictures Database and Cray Research Corp and is not subject to the Creative Commons license applicable to the rest of this work.
A Brief History… 1985-95 “Massively parallel architectures” start rising in prominence. Message Passing Interface (MPI) and other libraries developed. Bandwidth was a big problem.
A Brief History… 1995-Today Cluster/grid architecture increasingly dominant Special node machines eschewed in favor of COTS technologies Web-wide cluster software Companies like Google take this to the extreme
Parallelization & Synchronization
Parallelization Idea Parallelization is “easy” if processing can be cleanly split into n units:
Parallelization Idea (2) In a parallel computation, we would like to have as many threads as we have processors. e.g., a four-processor computer would be able to run four threads at the same time.
Parallelization Idea (3)
Parallelization Idea (4)
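The split-into-n-units idea can be sketched concretely. This is a minimal illustration, not from the slides: Python's `concurrent.futures` is assumed here as the thread pool, and the work is simply summing a list, split into one chunk per thread with the partial results combined at the end.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, n_threads=4):
    # One work unit per thread: slice the input into n_threads chunks
    chunk = (len(data) + n_threads - 1) // n_threads
    units = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Each thread processes its unit independently -- no shared state
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        partials = list(pool.map(sum, units))
    return sum(partials)  # aggregate the per-thread results

print(parallel_sum(list(range(1000))))  # 499500
```

This is the "easy" case precisely because the units never touch each other's data; the pitfalls on the next slide appear as soon as they do.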
Parallelization Pitfalls But this model is too simple!  How do we assign work units to worker threads? What if we have more work units than threads? How do we aggregate the results at the end? How do we know all the workers have finished? What if the work cannot be divided into completely separate tasks? What is the common theme of all of these problems?
Parallelization Pitfalls (2) Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource. Golden rule: any memory that can be used by multiple threads must have an associated synchronization system!
What is Wrong With This? Thread 1: void foo() { x++; y = x; } Thread 2: void bar() { y++; x+=3; } If the initial state is y = 0, x = 6, what happens after these threads finish running?
Multithreaded = Unpredictability When we run a multithreaded program, we don’t know what order threads run in, nor do we know when they will interrupt one another. Thread 1: void foo() { eax = mem[x]; inc eax; mem[x] = eax; ebx = mem[x]; mem[y] = ebx; } Thread 2: void bar() { eax = mem[y]; inc eax; mem[y] = eax; eax = mem[x]; add eax, 3; mem[x] = eax; } Many things that look like “one step” operations actually take several steps under the hood:
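One way to see just how unpredictable this is: enumerate every interleaving of the two instruction sequences above. This sketch (written for this note; the slides give only pseudo-assembly) replays the two threads' micro-operations in every possible order, starting from x = 6, y = 0, and collects the distinct final (x, y) pairs:

```python
from itertools import combinations

def run(schedule):
    mem = {"x": 6, "y": 0}           # shared memory, initial state from the slide
    r1, r2 = {}, {}                  # per-thread registers
    t1 = [                           # foo(): x++; y = x;  as micro-ops
        lambda: r1.update(eax=mem["x"]),
        lambda: r1.update(eax=r1["eax"] + 1),
        lambda: mem.update(x=r1["eax"]),
        lambda: r1.update(ebx=mem["x"]),
        lambda: mem.update(y=r1["ebx"]),
    ]
    t2 = [                           # bar(): y++; x += 3;  as micro-ops
        lambda: r2.update(eax=mem["y"]),
        lambda: r2.update(eax=r2["eax"] + 1),
        lambda: mem.update(y=r2["eax"]),
        lambda: r2.update(eax=mem["x"]),
        lambda: r2.update(eax=r2["eax"] + 3),
        lambda: mem.update(x=r2["eax"]),
    ]
    i = [0, 0]
    for who in schedule:             # 0 = thread 1's next op, 1 = thread 2's
        op = (t1 if who == 0 else t2)[i[who]]
        op()
        i[who] += 1
    return mem["x"], mem["y"]

outcomes = set()
for slots in combinations(range(11), 5):  # which 5 of 11 steps are thread 1's
    sched = [0 if j in slots else 1 for j in range(11)]
    outcomes.add(run(sched))
print(sorted(outcomes))
```

Running it shows several final states beyond the two "one thread finished first" results, including lost updates where one thread's write is overwritten entirely.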
Multithreaded = Unpredictability This applies to more than just integers: Pulling work units from a queue Reporting work back to master unit Telling another thread that it can begin the “next phase” of processing …  All require synchronization!
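For the first two bullets, the usual answer is a thread-safe queue. A small sketch, with Python's `queue.Queue` assumed as the synchronized queue (the slides don't name a library): workers pull units, square them, and report results back.

```python
import queue
import threading

work = queue.Queue()       # internally locked: many threads may get() safely
results = queue.Queue()

def worker():
    while True:
        item = work.get()         # blocks until a work unit is available
        if item is None:          # sentinel value: no more work, shut down
            break
        results.put(item * item)  # report the result back

for n in range(10):
    work.put(n)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for _ in threads:
    work.put(None)                # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results.queue))      # completion order varies; values do not
```

The queue is itself a synchronization system in the golden-rule sense: all the locking lives inside get() and put().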
Synchronization Primitives A synchronization primitive is a special shared variable that can only be accessed atomically. Hardware support guarantees that operations on synchronization primitives take exactly one step.
Semaphores A semaphore is a flag that can be raised or lowered in one step Semaphores were flags that railroad engineers would use when entering a shared track Only one side of the semaphore can ever be red! (Can both be green?)
Semaphores set() and reset() can be thought of as lock() and unlock(). Calls to lock() when the semaphore is already locked cause the thread to block. Pitfalls: must “bind” semaphores to particular objects; must remember to unlock correctly.
The “corrected” example Thread 1: void foo() { sem.lock(); x++; y = x; sem.unlock(); } Thread 2: void bar() { sem.lock(); y++; x+=3; sem.unlock(); } Global var “Semaphore sem = new Semaphore();” guards access to x & y
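The corrected example translates nearly line-for-line into runnable code. This sketch substitutes Python's `threading.Lock` for the slide's Semaphore class (same lock()/unlock() discipline; `with` handles the unlock even on error):

```python
import threading

x, y = 6, 0
sem = threading.Lock()        # plays the role of "Semaphore sem"

def foo():
    global x, y
    with sem:                 # sem.lock() ... sem.unlock()
        x += 1
        y = x

def bar():
    global x, y
    with sem:
        y += 1
        x += 3

t1 = threading.Thread(target=foo)
t2 = threading.Thread(target=bar)
t1.start(); t2.start()
t1.join(); t2.join()
# The lock makes each function effectively atomic, so only two outcomes
# remain: foo-then-bar gives (10, 8); bar-then-foo gives (10, 10).
print(x, y)
```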
Condition Variables A condition variable notifies threads that a particular condition has been met  Inform another thread that a queue now contains elements to pull from (or that it’s empty – request more elements!) Pitfall: What if nobody’s listening?
The final example Thread 1: void foo() { sem.lock(); x++; y = x; fooDone = true; sem.unlock(); fooFinishedCV.notify(); } Thread 2: void bar() { sem.lock(); if(!fooDone) fooFinishedCV.wait(sem); y++; x+=3; sem.unlock(); } Global vars: Semaphore sem = new Semaphore(); ConditionVar fooFinishedCV = new ConditionVar(); boolean fooDone = false;
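The final example also runs as written once translated. Two departures from the slide, both assumptions of this sketch: Python's `threading.Condition` bundles the lock and the condition variable into one object, and a `while` loop replaces the slide's `if` so the condition is re-checked after every wakeup.

```python
import threading

x, y = 6, 0
foo_done = False
cv = threading.Condition()    # lock + condition variable in one object

def foo():
    global x, y, foo_done
    with cv:
        x += 1
        y = x
        foo_done = True
        cv.notify()           # wake bar() if it is waiting

def bar():
    global x, y
    with cv:
        while not foo_done:   # loop, not 'if': re-check after every wakeup
            cv.wait()         # releases the lock while sleeping
        y += 1
        x += 3

t2 = threading.Thread(target=bar)
t1 = threading.Thread(target=foo)
t2.start(); t1.start()
t1.join(); t2.join()
print(x, y)                   # always 10 8: foo is guaranteed to run first
```

The flag answers the "what if nobody's listening?" pitfall: even if foo() notifies before bar() ever waits, bar() sees foo_done already set and skips the wait.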
Too Much Synchronization? Deadlock Synchronization becomes even more complicated when multiple locks can be used Can cause entire system to “get stuck” Thread A: semaphore1.lock(); semaphore2.lock(); /* use data guarded by  semaphores */ semaphore1.unlock();  semaphore2.unlock(); Thread B: semaphore2.lock(); semaphore1.lock(); /* use data guarded by  semaphores */ semaphore1.unlock();  semaphore2.unlock(); (Image: RPI CSCI.4210 Operating Systems notes)
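The standard cure for this deadlock is a global lock order: if every thread acquires locks in the same fixed order, the circular wait can never form. A sketch of the idea (the helper function and ordering-by-id are choices of this note, not from the slides):

```python
import threading

lock1 = threading.Lock()
lock2 = threading.Lock()
log = []

def with_locks_in_order(locks, fn):
    # Global rule: always acquire in one fixed order (here: by object id),
    # regardless of the order the caller happened to list them in.
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    try:
        fn()
    finally:
        for lk in reversed(ordered):
            lk.release()

def thread_a():
    with_locks_in_order([lock1, lock2], lambda: log.append("A"))

def thread_b():
    # Names the locks in the opposite order, like Thread B on the slide,
    # but the helper still takes them in the same global order: no cycle.
    with_locks_in_order([lock2, lock1], lambda: log.append("B"))

threads = [threading.Thread(target=thread_a), threading.Thread(target=thread_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))   # both threads finished: no deadlock
```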
The Moral: Be Careful! Synchronization is hard Need to consider all possible shared state Must keep locks organized and use them consistently and correctly Knowing there are bugs may be tricky; fixing them can be even worse! Keeping shared state to a minimum reduces total system complexity
Fundamentals of Networking
Sockets: The Internet = tubes? A socket is the basic network interface Provides a two-way “pipe” abstraction between two applications Client creates a socket, and connects to the server, who receives a socket representing the other side
Ports Within an IP address, a  port  is a sub-address identifying a listening program Allows multiple clients to connect to a server at once
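The socket and port ideas fit in a few lines. A sketch of a loopback echo exchange using Python's socket module (binding to port 0 asks the OS for any free port, an assumption of this sketch; a real server listens on a well-known port so clients can find it):

```python
import socket
import threading

def echo_once(server):
    conn, _addr = server.accept()       # server gets a socket for its side
    with conn:
        conn.sendall(conn.recv(1024))   # echo the bytes straight back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]          # the sub-address clients connect to

t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))     # two-way "pipe" to the server
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)                            # b'hello'
```

Note the asymmetry the slides describe: the client creates and connects a socket, while accept() hands the server a second socket representing its end of the pipe.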
What makes this work? Underneath the socket layer are several more protocols Most important are TCP and IP (which are used hand-in-hand so often, they’re often spoken of as one protocol: TCP/IP) Even more low-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using 802.11 wireless…
Why is This Necessary? The Internet is not actually tube-like “under the hood.” Unlike the circuit-switched phone system, the packet-switched Internet uses many routes at once.
Networking Issues If a party to a socket disconnects, how much data did they receive? …  Did they crash? Or did a machine in the middle? Can someone in the middle intercept/modify our data? Traffic congestion makes switch/router topology important for efficient throughput
Conclusions Processing more data means using more machines at the same time Cooperation between processes requires synchronization Designing real distributed systems requires consideration of networking topology Next time: How MapReduce works

Introduction to Cluster Computing and Map Reduce (from Google)

Editor's Notes

  • #21: There are multiple possible final states. y is definitely a problem, because we don’t know whether it will be 1 or 7… but at the instruction level, x can also end up as 7, 9, or 10!
  • #22: Inform students that the term we want here is “race condition”
  • #27: Ask: are there still any problems? (Yes – we still have two possible outcomes. We want some other system that allows us to serialize access on an event.)
  • #28: Go over wait() / notify() / broadcast() --- must be combined with a semaphore!