Parallel Programming in the Modern World
.NET Techniques
Parallelism vs Concurrency
 Concurrency
 the existence of multiple threads of execution
 the goal of concurrency is to prevent thread starvation
 concurrency is required operationally
 Parallelism
 concurrent threads execute at the same time on multiple cores
 parallelism is only about throughput
 it is an optimization, not a functional requirement
Limitations to linear speedup of
parallel code
 Serial code
 Overhead from parallelization
 Synchronization
 Sequential input/output
Parallel Speedup Calculation
 Amdahl’s Law:
 Speedup = 1 / ((1 − P) + P/N)
 where P is the fraction of the application that runs in parallel and N is the number of processor cores
 Gustafson’s Law:
 Speedup = (S + N(1 − S)) / (S + (1 − S)) = S + N(1 − S)
 where S is the fraction of the application that remains serial
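 Worked illustration (values chosen here for illustration, not taken from the slides): with P = 0.9 and N = 8, Amdahl’s Law gives 1 / (0.1 + 0.9/8) ≈ 4.7×, well short of linear 8× speedup, while Gustafson’s Law with S = 0.1 gives 0.1 + 8 × 0.9 = 7.3×.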
Phases of parallel development
 Finding Concurrency
 Task Decomposition Pattern
 Data Decomposition Pattern
 Group Tasks Pattern
 Order Tasks Pattern
 Data Sharing Pattern
 Algorithm Structures
 Support Structures
 Implementation Mechanisms
The Algorithm Structure Pattern
 Task Parallelism Pattern
 Divide and Conquer Pattern
 Geometric Decomposition Pattern
 Recursive Data Pattern
 Pipeline Pattern
The Supporting Structures Pattern
 SPMD (Single Program/Multiple Data)
 Master/Worker
 Loop Parallelism
 Fork/Join
Data Parallelism
 Search for Loops
 Unroll Sequential Loops
 Evaluating Performance Considerations:
 Conduct performance benchmarks to confirm the expected performance improvement
 When there is minimal or no performance gain, one option is to change the chunk size
 Parallel.For and Parallel.ForEach
 ParallelLoopState for early exit (Break/Stop); see the sketch below
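A minimal sketch of the loop APIs named above; this is not from the original slides, and the data and the early-exit condition are made up for illustration:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class DataParallelismDemo
{
    static void Main()
    {
        double[] input = Enumerable.Range(1, 1000000).Select(i => (double)i).ToArray();
        double[] output = new double[input.Length];

        // Parallel.For partitions the index range across the available cores.
        Parallel.For(0, input.Length, i =>
        {
            output[i] = Math.Sqrt(input[i]);
        });

        // Parallel.ForEach with ParallelLoopState: Break() stops scheduling
        // further iterations once the condition is hit.
        Parallel.ForEach(input, (value, state) =>
        {
            if (value > 999999) state.Break();
        });

        Console.WriteLine(output[99]); // 10
    }
}
```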
Reduction/Aggregation
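The code on this slide did not survive extraction; below is a minimal sketch of one common aggregation shape in .NET, the Parallel.ForEach overload with per-task partial results (the summation is an illustrative stand-in):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ReductionDemo
{
    static void Main()
    {
        int[] data = Enumerable.Range(1, 1000000).ToArray();
        long total = 0;

        Parallel.ForEach(
            data,
            () => 0L,                                        // localInit: a partial sum per task
            (item, state, partial) => partial + item,        // body: accumulate without locking
            partial => Interlocked.Add(ref total, partial)); // localFinally: merge partials once

        Console.WriteLine(total); // 500000500000
    }
}
```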
Variations of Reduce
 Scan pattern: each iteration of a loop depends on data computed in the previous iteration.
 Pack pattern: uses a parallel loop to select elements to retain or discard; the result is a subset of the original input.
 MapReduce
MapReduce Pattern
Elements:
 1) input: a collection of key/value pairs
 2) intermediate collection: a non-unique collection of key/value pairs
 3) third collection: a reduction of the non-unique keys from the intermediate collection
MapReduce Example
 Counting Words across multiple documents
Map/reduce via PLINQ
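The PLINQ code on this slide is not in the extracted text; a minimal sketch of the word-count example from the previous slide, with a placeholder documents collection:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PlinqWordCountDemo
{
    static void Main()
    {
        IEnumerable<string> documents = new[]
        {
            "the quick brown fox",
            "the lazy dog",
            "the fox"
        };

        var wordCounts = documents
            .AsParallel()
            // map: split each document into words (the intermediate, non-unique keys)
            .SelectMany(doc => doc.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
            // reduce: group the non-unique keys and count each group
            .GroupBy(word => word)
            .Select(g => new { Word = g.Key, Count = g.Count() });

        foreach (var wc in wordCounts)
            Console.WriteLine("{0}: {1}", wc.Word, wc.Count);
    }
}
```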
Futures
 A future is a stand-in for a computational result that is initially unknown but becomes available at a later time.
 A future in .NET is a Task<TResult> that returns a value.
 A .NET continuation task is a task that automatically starts when
other tasks, known as its antecedents, complete.
Futures example
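The code on this slide is not preserved in the extraction; a minimal sketch of the F1-F4 scenario described in the editor's notes at the end of this page (the function bodies are placeholders):

```csharp
using System;
using System.Threading.Tasks;

class FuturesDemo
{
    // Placeholder processor-intensive functions that communicate only
    // through arguments and return values.
    static int F1(int x) { return x + 1; }
    static int F2(int x) { return x * 2; }
    static int F3(int x) { return x - 3; }
    static int F4(int a, int b) { return a + b; }

    static void Main()
    {
        int a = 42;

        // The future starts computing F1(a) asynchronously...
        Task<int> futureB = Task.Factory.StartNew(() => F1(a));

        // ...while the current thread runs F2 and then F3.
        int c = F3(F2(a));

        // Result blocks until the future's value is available, then F4 combines both.
        int d = F4(futureB.Result, c);
        Console.WriteLine(d); // 124
    }
}
```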
Dynamic Task Parallelism
 Dynamic tasks (decomposition or “divide and conquer”): tasks that are dynamically added to the work queue as the computation proceeds.
 The best-known instance is recursion.
Dynamic Task example
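The slide's code is not in the extracted text; a minimal sketch of dynamic (recursive) task decomposition, summing a binary tree by spawning a child task per subtree (the tree type and values are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;

class TreeNode
{
    public int Value;
    public TreeNode Left, Right;
}

class DynamicTaskDemo
{
    // New tasks are added to the work queue as the recursion discovers more work.
    static int ParallelSum(TreeNode node)
    {
        if (node == null) return 0;

        Task<int> left = Task.Run(() => ParallelSum(node.Left)); // child task for one branch
        int right = ParallelSum(node.Right);                     // the other branch runs inline

        return node.Value + left.Result + right;
    }

    static void Main()
    {
        var root = new TreeNode
        {
            Value = 1,
            Left = new TreeNode { Value = 2 },
            Right = new TreeNode { Value = 3, Left = new TreeNode { Value = 4 } }
        };
        Console.WriteLine(ParallelSum(root)); // 10
    }
}
```

A real implementation would typically stop spawning child tasks below a size or depth threshold, so that scheduling overhead does not outweigh the parallel gain.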
Pipelines
 Each task implements a stage of the pipeline, and the queues act
as buffers that allow the stages of the pipeline to execute
concurrently, even though the values are processed in order.
 The buffers are BlockingCollection<T> instances
Pipeline example
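The code on this slide is not in the extraction; a minimal three-stage sketch using BlockingCollection<T> buffers, in the spirit of the “Read Strings” pipeline described in the editor's notes (the stage bodies are placeholders):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PipelineDemo
{
    static void Main()
    {
        var buffer1 = new BlockingCollection<string>(10); // bounded capacity of 10
        var buffer2 = new BlockingCollection<string>(10);

        // Stage 1: read (here, generate) strings and write them to buffer 1.
        var stage1 = Task.Run(() =>
        {
            for (int i = 0; i < 100; i++) buffer1.Add("line " + i);
            buffer1.CompleteAdding();            // signal "end of file" to the next stage
        });

        // Stage 2: transform each value and pass it on to buffer 2.
        var stage2 = Task.Run(() =>
        {
            foreach (var line in buffer1.GetConsumingEnumerable())
                buffer2.Add(line.ToUpperInvariant());
            buffer2.CompleteAdding();
        });

        // Stage 3: consume the final values, in order.
        var stage3 = Task.Run(() =>
        {
            foreach (var line in buffer2.GetConsumingEnumerable())
                Console.WriteLine(line);
        });

        Task.WaitAll(stage1, stage2, stage3);
    }
}
```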
C# 5: async and await
 the asynchronous pattern (APM)
 the event-based asynchronous pattern (EAP)
 the task-based asynchronous pattern (TAP): async & await
Asynchronous Pattern
Event-Based Asynchronous Pattern
Task-Based Asynchronous Pattern
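These three pattern slides are code slides whose text was not extracted; a minimal TAP sketch with async/await for comparison (the URL is a placeholder):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TapDemo
{
    // Task-based asynchronous pattern: the method returns a Task<TResult>
    // and callers compose it with await.
    static async Task<int> GetPageLengthAsync(string url)
    {
        using (var client = new HttpClient())
        {
            string body = await client.GetStringAsync(url); // no thread is blocked while waiting
            return body.Length;
        }
    }

    static void Main()
    {
        Console.WriteLine(GetPageLengthAsync("http://example.com/").Result);
    }
}
```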
Using Multiple Asynchronous Methods
 Awaiting multiple asynchronous methods one after another
vs
 Composing them with combinators such as Task.WhenAll (see the sketch below)
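A minimal sketch contrasting the two approaches, assuming a DownloadAsync helper like the TAP sketch above (the method names and URLs are illustrative):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class CombinatorDemo
{
    static readonly HttpClient Client = new HttpClient();

    static Task<string> DownloadAsync(string url)
    {
        return Client.GetStringAsync(url);
    }

    static async Task RunAsync()
    {
        // One after another: the second download does not start until the first finishes.
        string a = await DownloadAsync("http://example.com/a");
        string b = await DownloadAsync("http://example.com/b");
        Console.WriteLine(a.Length + b.Length);

        // Combinator: start both operations, then await them together with Task.WhenAll.
        Task<string> ta = DownloadAsync("http://example.com/a");
        Task<string> tb = DownloadAsync("http://example.com/b");
        string[] results = await Task.WhenAll(ta, tb);
        Console.WriteLine(results[0].Length + results[1].Length);
    }

    static void Main()
    {
        RunAsync().Wait();
    }
}
```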
Converting the Asynchronous Pattern
 The TaskFactory class defines the FromAsync method, which wraps methods that follow the asynchronous (Begin/End) pattern into TAP-style tasks.
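A minimal sketch of FromAsync wrapping the classic Begin/End pair on a stream (the file name is a placeholder):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class FromAsyncDemo
{
    static void Main()
    {
        byte[] buffer = new byte[1024];

        using (var stream = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
                                           FileShare.Read, 1024, useAsync: true))
        {
            // Wrap the asynchronous-pattern pair BeginRead/EndRead into a Task<int>.
            Task<int> readTask = Task<int>.Factory.FromAsync(
                stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);

            Console.WriteLine("Read {0} bytes", readTask.Result);
        }
    }
}
```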
ERROR HANDLING
Error handling with multiple tasks
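The error-handling slides are code slides not preserved in the extraction; a minimal sketch of how faults from several tasks surface as a single AggregateException (the exceptions are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class ErrorHandlingDemo
{
    static void Main()
    {
        Task t1 = Task.Run(() => { throw new InvalidOperationException("task 1 failed"); });
        Task t2 = Task.Run(() => { throw new ArgumentException("task 2 failed"); });

        try
        {
            // WaitAll observes both tasks; their faults are wrapped in one AggregateException.
            Task.WaitAll(t1, t2);
        }
        catch (AggregateException ex)
        {
            foreach (var inner in ex.Flatten().InnerExceptions)
                Console.WriteLine(inner.Message);
        }
    }
}
```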
CANCELLATION
Cancellation with Framework Features
CANCELLATION
Cancellation with custom tasks
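The cancellation slides are also code slides whose text is missing; a minimal sketch of cooperative cancellation of a custom task with CancellationTokenSource (the work loop is a placeholder):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        CancellationToken token = cts.Token;

        Task worker = Task.Run(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                token.ThrowIfCancellationRequested(); // cooperative check in the work loop
                Thread.Sleep(10);                     // stand-in for a unit of work
            }
        }, token);

        cts.CancelAfter(100); // request cancellation after roughly 100 ms

        try
        {
            worker.Wait();
        }
        catch (AggregateException)
        {
            Console.WriteLine("Worker status: " + worker.Status); // Canceled
        }
    }
}
```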
Literature
 Concurrent Programming on Windows, Joe Duffy
 “MapReduce: Simplified Data Processing on Large Clusters”, Jeffrey Dean and Sanjay Ghemawat, 2004
 Parallel Programming with Microsoft Visual Studio 2010 Step by Step
 Parallel Programming with Microsoft .NET: Design Patterns for Decomposition and Coordination on Multicore Architectures
 Pro .NET 4 Parallel Programming in C#, Adam Freeman
 Professional Parallel Programming with C#: Master Parallel Extensions with .NET 4, Gastón Hillar
 .NET 4.5 Parallel Extensions Cookbook, Packt Publishing
Questions?
 Parallel programming in the modern world
 .NET techniques
Thanks!
Editor's Notes
  • #5: Amdahl’s Law calculates the speedup of parallel code based on three variables: the duration of running the application on a single-core machine, the percentage of the application that is parallel, and the number of processor cores. The formula uses the duration on a single-core machine as the benchmark: the numerator represents that base duration and is always one, while the dynamic portion of the calculation is in the denominator, where P is the percentage of the application that runs in parallel and N is the number of processor cores. In this model the serial and parallel portions each remain a fixed share of the program. In the real world, however, as computing power increases more work gets completed, so the relative duration of the sequential portion shrinks; in addition, Amdahl’s Law does not account for the overhead required to schedule, manage, and execute parallel tasks. Gustafson’s Law takes both of these additional factors into account.
  • #16: Suppose that F1, F2, F3, and F4 are processor-intensive functions that communicate with one another using arguments and return values instead of reading and updating shared state variables. Suppose, also, that you want to distribute the work of these functions across the available cores, and you want your code to run correctly no matter how many cores are available. When you look at the inputs and outputs, you can see that F1 can run in parallel with F2 and F3, but that F3 can’t start until after F2 finishes. How do you know this? The possible orderings become apparent when you visualize the function calls as a graph (Figure 1): the nodes are the functions F1, F2, F3, and F4, the incoming arrows of each node are the inputs the function requires, and the outgoing arrows are the values it calculates. It’s easy to see that F1 and F2 can run at the same time but that F3 must follow F2. The example creates futures for this scenario; for simplicity, the code assumes that the values being calculated are integers and that the value of variable a has already been supplied, perhaps as an argument to the current method. The code creates a future that begins to asynchronously calculate the value of F1(a); on a multicore system, F1 can run in parallel with the current thread, which means F2 can begin executing without waiting for F1. The function F4 executes as soon as the data it needs becomes available: it doesn’t matter whether F1 or F3 finishes first, because the results of both functions are required before F4 can be invoked (recall that the Result property does not return until the future’s value is available). Note that the calls to F2, F3, and F4 do not need to be wrapped inside futures, because a single additional asynchronous operation is all that is needed to take advantage of the parallelism of this example.
  • #20: Each stage of the pipeline reads from a dedicated input and writes to a particular output. For example, the “Read Strings” task reads from a source and writes to buffer 1. All the stages of the pipeline can execute at the same time because concurrent queues buffer any shared inputs and outputs; if there are four available cores, the stages can run in parallel. As long as there is room in its output buffer, a stage of the pipeline can add the value it produces to its output queue; if the output buffer is full, the producer of the new value waits until space becomes available. Stages can also wait (that is, block) on inputs. An input wait is familiar from other programming contexts: if an enumeration or a stream does not have a value, the consumer of that enumeration or stream waits until a value is available or an “end of file” condition occurs. A blocking collection works the same way. Using buffers that hold more than one value at a time compensates for variability in the time it takes to process each value. The BlockingCollection<T> class lets you signal the “end of file” condition with the CompleteAdding method; this method tells the consumer that it can end its processing loop after all the data previously added to the collection has been removed or processed. The example code demonstrates how to implement a pipeline that uses the BlockingCollection class for the buffers and tasks for the stages of the pipeline.