Cloud Technologies and Their Applications
The Bioinformatics Open Source Conference (BOSC 2010), Boston, Massachusetts
Judy Qiu, http://salsahpc.indiana.edu
Assistant Director, Pervasive Technology Institute; Assistant Professor, School of Informatics and Computing, Indiana University
Data Explosion and Challenges: Data Deluge (Why?), Cloud Technologies (How?), Life Science Applications (What?), Parallel Computing
Data We're Looking At
Public Health Data (IU Medical School & IUPUI Polis Center): 65,535 patient/GIS records, 54 dimensions each
Biology DNA sequence alignments (IU Medical School & CGB): 10 million sequences, at least 300 to 400 base pairs each
NIH PubChem (IU Cheminformatics): 60 million chemical compounds, 166 fingerprints each
High volume and high dimension require new, efficient computing approaches!
Some Life Sciences Applications
EST (Expressed Sequence Tag) sequence assembly using the CAP3 DNA sequence assembly program.
Metagenomics and Alu repeat alignment using Smith-Waterman dissimilarity computations, followed by MPI applications for clustering and MDS (Multidimensional Scaling) for dimension reduction before visualization.
Mapping the 60 million entries in PubChem into two or three dimensions to aid selection of related chemicals, with a convenient Google Earth-like browser. This uses either hierarchical MDS (which cannot be applied directly, as it is O(N²)) or GTM (Generative Topographic Mapping).
Correlating childhood obesity with environmental factors by combining medical records with geographical information data with over 100 attributes, using correlation computation, MDS, and genetic algorithms for choosing optimal environmental factors.
DNA Sequencing Pipeline
Modern commercial gene sequencers (Illumina/Solexa, Roche/454 Life Sciences, Applied Biosystems/SOLiD) feed the pipeline: FASTA file (N sequences) and read alignment, then sequence alignment producing a dissimilarity matrix of N(N-1)/2 values as block pairings (MapReduce), then pairwise clustering and blocking MDS (MPI), then visualization with PlotViz.
This chart illustrates our research on a pipeline model to provide services on demand (Software as a Service, SaaS).
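To give a sense of the scale of that dissimilarity matrix (an illustrative calculation, not a figure from the slides), the 50,000-sequence run described later and the 10 million sequences mentioned above correspond to:

```latex
\left.\frac{N(N-1)}{2}\right|_{N=5\times 10^{4}} \approx 1.25\times 10^{9}
\qquad\text{and}\qquad
\left.\frac{N(N-1)}{2}\right|_{N=10^{7}} \approx 5\times 10^{13}
\quad\text{pairwise values.}
```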
Users submit their jobs to the pipeline over the Internet. The components are services, and so is the whole pipeline.
Cloud Services and MapReduce (Data Deluge, Cloud Technologies, Life Science Applications, Parallel Computing)
Clouds as Cost-Effective Data Centers
Vendors build giant data centers with hundreds of thousands of computers, roughly 200-1000 to a shipping container with Internet access.
"Microsoft will cram between 150 and 220 shipping containers filled with data center gear into a new 500,000 square foot Chicago facility. This move marks the most significant, public use of the shipping container systems popularized by the likes of Sun Microsystems and Rackable Systems to date." (news release)
Clouds Hide Complexity
Cyberinfrastructure is "Research as a Service."
SaaS: Software as a Service (e.g., clustering is a service)
PaaS: Platform as a Service; IaaS plus core software capabilities on which you build SaaS (e.g., Azure is a PaaS; MapReduce is a platform)
IaaS (HaaS): Infrastructure as a Service (get computer time with a credit card and a Web interface, like EC2)
Commercial Cloud Software
MapReduce
Map(Key, Value) and Reduce(Key, List<Value>): a parallel runtime coming from information retrieval.
The input is split into data partitions; a hash function maps the results of the map tasks to r reduce tasks, which produce the reduce outputs.
Implementations support: splitting of data; passing the output of map functions to reduce functions; sorting the inputs to the reduce function based on the intermediate keys; quality of service.
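As a concrete illustration of the Map(Key, Value) / Reduce(Key, List<Value>) model, here is a minimal word-count sketch against the standard Hadoop Java API. The job wiring (input/output paths, four reduce tasks) is hypothetical and only meant to show where data splitting, hash partitioning to the r reducers, and key-sorted reduction happen.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map(Key, Value): emit (word, 1) for every word in the input split.
  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        word.set(token);
        context.write(word, ONE);   // hash-partitioned to one of the r reduce tasks
      }
    }
  }

  // Reduce(Key, List<Value>): values arrive grouped and sorted by key; sum the counts.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setNumReduceTasks(4);                 // r reduce tasks (hypothetical choice)
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input split into partitions
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```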
Hadoop & DryadLINQ
Apache Hadoop: an Apache implementation of Google's MapReduce. The Hadoop Distributed File System (HDFS) manages data across the data/compute nodes (NameNode plus replicated data blocks); a JobTracker on the master node schedules map/reduce tasks based on data locality in HDFS.
Microsoft DryadLINQ: directed acyclic graph (DAG) based execution flows, where a vertex is an execution task and an edge is a communication path. The DryadLINQ compiler turns standard LINQ operations over structured data into a DAG; the Dryad execution engine processes the DAG, executing vertices on compute clusters, and handles job creation, resource management, fault tolerance, and re-execution of failed tasks/vertices. Hash, Range, and Round-Robin partition patterns are provided.
Applications Using Dryad & DryadLINQ
CAP3: Expressed Sequence Tag assembly to reconstruct full-length mRNA. Input files (FASTA) are processed by independent CAP3 instances, producing output files.
Performed using DryadLINQ and Apache Hadoop implementations: a single "Select" operation in DryadLINQ; a "map only" operation in Hadoop.
X. Huang, A. Madan, "CAP3: A DNA Sequence Assembly Program," Genome Research, vol. 9, no. 9, pp. 868-877, 1999.
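A hedged sketch of how the Hadoop "map only" CAP3 step could look: each map task receives the name of one FASTA file and shells out to the cap3 executable, and there is no reduce phase. The input format, the binary path, and the output handling here are assumptions for illustration, not the implementation used in the study.

```java
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map-only task: one FASTA file name in, CAP3 run as an external process, no reducer.
public class Cap3Mapper extends Mapper<Object, Text, Text, NullWritable> {
  @Override
  protected void map(Object key, Text fastaFile, Context context)
      throws IOException, InterruptedException {
    // Assumed local path to the CAP3 binary; in practice it would be staged to the
    // compute nodes (for example via the distributed cache).
    ProcessBuilder pb = new ProcessBuilder("/opt/cap3/cap3", fastaFile.toString());
    pb.redirectErrorStream(true);
    Process p = pb.start();
    int exitCode = p.waitFor();
    // Emit the file name and exit status so failures are visible in the job output.
    context.write(new Text(fastaFile + " exit=" + exitCode), NullWritable.get());
  }
}
// In the driver, job.setNumReduceTasks(0) makes this a "map only" job.
```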
Classic Cloud Architecture (Amazon EC2 and Microsoft Azure) vs. MapReduce Architecture (Apache Hadoop and Microsoft DryadLINQ). [Figure: the input data set is split into data files; each Map() task runs the executable; an optional Reduce phase produces the results; in the MapReduce case input and output live in HDFS.]
Usability and Performance of Different Cloud Approaches: CAP3 performance and CAP3 efficiency, where efficiency = absolute sequential run time / (number of cores × parallel run time). (A worked example with hypothetical numbers follows the list below.)
Hadoop, DryadLINQ: 32 nodes (256 cores, iDataPlex)
EC2: 16 High-CPU Extra Large instances (128 cores)
Azure: 128 small instances (128 cores)
Ease of use: Dryad/Hadoop are easier than EC2/Azure as higher-level models
Lines of code including file copy: Azure ~300, Hadoop ~400, Dryad ~450, EC2 ~700
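A worked example of the efficiency formula with hypothetical numbers (not measurements from this study): if the sequential run takes 10,000 s and 128 cores finish the parallel run in 100 s, then

```latex
\mathrm{Efficiency}=\frac{T_{\mathrm{seq}}}{p\cdot T_{\mathrm{par}}}
                   =\frac{10000\ \mathrm{s}}{128\times 100\ \mathrm{s}}\approx 0.78 .
```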
Table 1: Selected EC2 Instance Types
4096 CAP3 data files: 1.06 GB / 1,875,968 reads (458 reads × 4096). The following is the cost to process the 4096 CAP3 files. Amortized cost on Tempest (24 cores × 32 nodes, 48 GB per node) = $9.43 (assuming 70% utilization, write-off over 3 years, support included).
Data Intensive Applications (Data Deluge, Cloud Technologies, Life Science Applications, Parallel Computing)
Alu and Metagenomics Workflow: an "all pairs" problem. The data is a collection of N sequences, and we need to calculate N² dissimilarities (distances) between sequences (all pairs). These cannot be thought of as vectors because there are missing characters.
"Multiple Sequence Alignment" (creating vectors of characters) does not seem to work if N is larger than O(100) and the sequences are hundreds of characters long.
Step 1: Calculate the N² dissimilarities (distances) between sequences.
Step 2: Find families by clustering (using much better methods than K-means). As there are no vectors, use vector-free O(N²) methods.
Step 3: Map to 3D for visualization using Multidimensional Scaling (MDS), also O(N²).
Results: N = 50,000 runs in 10 hours (the complete pipeline above) on 768 cores.
Discussion: we need to address millions of sequences; we currently use a mix of MapReduce and MPI; Twister will do all steps, as MDS and clustering need only MPI Broadcast/Reduce.
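A hedged sketch of the coarse-grained "all pairs" decomposition used in Step 1: the N × N dissimilarity matrix is cut into blocks and only the upper-triangular block pairings are computed, each block being an independent coarse-grained task. In the actual pipeline those tasks are DryadLINQ vertices or Hadoop map tasks and the distance routine is Smith-Waterman-Gotoh; here local threads and a placeholder distance stand in.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Coarse-grained all-pairs: compute dissimilarities block by block so that each
// block is an independent task (local threads stand in for cluster tasks here).
public class AllPairs {
  static double distance(String a, String b) {
    // Placeholder: the real pipeline uses Smith-Waterman-Gotoh alignment scores.
    return Math.abs(a.length() - b.length());
  }

  public static double[][] pairwise(List<String> seqs, int blockSize) throws Exception {
    int n = seqs.size();
    double[][] d = new double[n][n];
    ExecutorService pool =
        Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
    List<Callable<Void>> tasks = new ArrayList<>();
    for (int bi = 0; bi < n; bi += blockSize) {
      for (int bj = bi; bj < n; bj += blockSize) {   // upper-triangular block pairings only
        final int i0 = bi, j0 = bj;
        tasks.add(() -> {
          for (int i = i0; i < Math.min(i0 + blockSize, n); i++)
            for (int j = Math.max(j0, i + 1); j < Math.min(j0 + blockSize, n); j++) {
              d[i][j] = distance(seqs.get(i), seqs.get(j));
              d[j][i] = d[i][j];                      // dissimilarity is symmetric
            }
          return null;
        });
      }
    }
    pool.invokeAll(tasks);   // each block is one coarse-grained task
    pool.shutdown();
    return d;
  }
}
```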
All-Pairs Using DryadLINQ: 125 million distances computed in 4 hours and 46 minutes. Calculate pairwise distances (Smith-Waterman-Gotoh) for a collection of genes (used for clustering and MDS); fine-grained tasks in MPI, coarse-grained tasks in DryadLINQ; performed on 768 cores (Tempest cluster).
Moretti, C., Bui, H., Hollingsworth, K., Rich, B., Flynn, P., & Thain, D. (2009). All-Pairs: An Abstraction for Data Intensive Computing on Campus Grids. IEEE Transactions on Parallel and Distributed Systems, 21, 21-36.
Biology MDS and Clustering Results
Alu families: this visualizes results for Alu repeats from the chimpanzee and human genomes. Young families (green, yellow) are seen as tight clusters. This is a projection by MDS dimension reduction to 3D of 35,399 repeats, each with about 400 base pairs.
Metagenomics: this visualizes results of dimension reduction to 3D of 30,000 gene sequences from an environmental sample. The many different genes are classified by a clustering algorithm and visualized by MDS dimension reduction.
Hadoop/Dryad Comparison, Inhomogeneous Data I: inhomogeneity of data does not have a significant effect when the sequence lengths are randomly distributed. Dryad with Windows HPCS compared to Hadoop with Linux RHEL on iDataPlex (32 nodes).
Hadoop/Dryad Comparison, Inhomogeneous Data II: this shows the natural load balancing of Hadoop MapReduce's dynamic task assignment using a global pipeline, in contrast to DryadLINQ's static assignment. Dryad with Windows HPCS compared to Hadoop with Linux RHEL on iDataPlex (32 nodes).
Hadoop VM Performance Degradation
Performance degradation = (Tvm - Tbaremetal) / Tbaremetal. 15.3% degradation at the largest data set size.
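Reading the degradation metric as a relative slowdown, with hypothetical times (chosen only to illustrate the arithmetic) of Tvm = 1153 s on the virtualized cluster and Tbaremetal = 1000 s on bare metal:

```latex
\mathrm{Degradation}=\frac{T_{vm}-T_{baremetal}}{T_{baremetal}}
                    =\frac{1153-1000}{1000}\approx 15.3\% .
```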
Parallel Computing and Software (Data Deluge, Cloud Technologies, Life Science Applications, Parallel Computing)
Twister (MapReduce++): a pub/sub broker network connects the driver to the map workers; streaming-based communication
Intermediate results are directly transferred from the map tasks to the reduce tasks – eliminates local files
Cacheable map/reduce tasks
Static data remains in memory
Combine phase to combine reductions

The user program is the composer of MapReduce computations
Twister extends the MapReduce model to iterative computations. [Figure: Twister programming model and architecture. The user program/driver iterates over Map(Key, Value), Reduce(Key, List<Value>), and Combine(Key, List<Value>), with Configure() loading static data onto the worker nodes and Close() ending the computation; MR daemons run on the worker nodes; data is read and written through the file system via data splits; the δ (dynamic) data flows over the communication layer. Different synchronization and intercommunication mechanisms are used by the parallel runtimes.]
Iterative Computations: K-means and matrix multiplication. [Charts: performance of K-means; parallel overhead of matrix multiplication.]
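K-means is the canonical iterative MapReduce example: the points are the static data cached across iterations, while the small set of centroids is rebroadcast each iteration. The sketch below shows that structure in plain local Java; it is illustrative only and does not use the Twister API.

```java
import java.util.Arrays;

// Illustrative K-means in the iterative map/reduce style described above:
// points = static (cacheable) data, centroids = small dynamic data broadcast
// each iteration. Plain local Java, only meant to show the per-iteration structure.
public class IterativeKMeans {
  // "Map": find the nearest centroid for a point (squared Euclidean distance).
  static int nearest(double[] p, double[][] centroids) {
    int best = 0;
    double bestD = Double.MAX_VALUE;
    for (int c = 0; c < centroids.length; c++) {
      double d = 0;
      for (int j = 0; j < p.length; j++) d += (p[j] - centroids[c][j]) * (p[j] - centroids[c][j]);
      if (d < bestD) { bestD = d; best = c; }
    }
    return best;
  }

  // One map + reduce/combine iteration: returns the new centroids.
  static double[][] iterate(double[][] points, double[][] centroids) {
    int k = centroids.length, dim = points[0].length;
    double[][] sums = new double[k][dim];
    int[] counts = new int[k];
    for (double[] p : points) {                 // map phase: data-parallel over points
      int c = nearest(p, centroids);
      for (int j = 0; j < dim; j++) sums[c][j] += p[j];
      counts[c]++;
    }
    double[][] next = new double[k][dim];       // reduce/combine: new centroid per cluster
    for (int c = 0; c < k; c++)
      for (int j = 0; j < dim; j++)
        next[c][j] = counts[c] == 0 ? centroids[c][j] : sums[c][j] / counts[c];
    return next;
  }

  public static void main(String[] args) {
    double[][] points = { {1, 1}, {1.2, 0.8}, {8, 8}, {8.2, 7.9} };   // toy static data
    double[][] centroids = { {0, 0}, {10, 10} };                      // initial broadcast data
    for (int iter = 0; iter < 10; iter++) centroids = iterate(points, centroids);
    System.out.println(Arrays.deepToString(centroids));
  }
}
```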
Dimension Reduction Algorithms
Multidimensional Scaling (MDS) [1]: given the proximity information among points, solve an optimization problem to find a mapping of the data in the target dimension, based on pairwise proximity information, while minimizing the objective function. Objective functions: STRESS (1) or SSTRESS (2), reconstructed after the references below. MDS only needs the pairwise distances δij between original points (typically not Euclidean); dij(X) is the Euclidean distance between mapped (3D) points.
Generative Topographic Mapping (GTM) [2]: find optimal K representations for the given data (in 3D), known as the K-cluster problem (NP-hard). The original algorithm uses the EM method for optimization; a Deterministic Annealing algorithm can be used to find a global solution. The objective function is to maximize the log-likelihood.
[1] I. Borg and P. J. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, New York, NY, U.S.A., 2005.
[2] C. Bishop, M. Svensén, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215-234, 1998.
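The objective functions labelled (1) and (2), and the GTM log-likelihood, appeared as images on the original slides. As a reconstruction, the standard forms from Borg and Groenen [1] and Bishop et al. [2] are given below; treating these as what the slides showed is an assumption.

```latex
% Reconstructed standard forms (assumed, not copied from the slides).
\mathrm{STRESS}(X)  = \sum_{i<j} w_{ij}\,\bigl(d_{ij}(X)-\delta_{ij}\bigr)^{2} \quad (1)
\qquad
\mathrm{SSTRESS}(X) = \sum_{i<j} w_{ij}\,\bigl(d_{ij}(X)^{2}-\delta_{ij}^{2}\bigr)^{2} \quad (2)

% GTM objective: maximize the log-likelihood of the data under K latent points y_k.
\mathcal{L} = \sum_{n=1}^{N}\ln\!\Bigl[\frac{1}{K}\sum_{k=1}^{K}
  \Bigl(\frac{\beta}{2\pi}\Bigr)^{D/2}
  \exp\!\Bigl(-\frac{\beta}{2}\,\lVert x_{n}-y_{k}\rVert^{2}\Bigr)\Bigr]
```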
Science Cloud (Dynamic Virtual Cluster) Architecture
Applications: Smith-Waterman dissimilarities, CAP3 gene assembly, PhyloD using DryadLINQ, high energy physics, clustering, multidimensional scaling, generative topographic mapping
Services and workflow
Runtimes: Microsoft DryadLINQ / MPI; Apache Hadoop / Twister / MPI
Infrastructure software: Linux bare-system, Windows Server 2008 HPC bare-system, Linux virtual machines, Windows Server 2008 HPC; Xen virtualization; XCAT infrastructure
Hardware: iDataPlex bare-metal nodes
Dynamic virtual cluster provisioning via XCAT supports both stateful and stateless OS images.
Dynamic Virtual Clusters
[Figure: monitoring and control infrastructure (monitoring interface, pub/sub broker network, summarizer, switcher) over a dynamic cluster architecture running SW-G using Hadoop and SW-G using DryadLINQ on virtual/physical clusters (Linux bare-system, Linux on Xen, Windows Server 2008 bare-system) with XCAT infrastructure on iDataPlex bare-metal nodes (32 nodes).]
Switchable clusters on the same hardware (~5 minutes between different OSes, such as Linux+Xen to Windows+HPCS); support for virtual clusters.
SW-G: Smith-Waterman-Gotoh dissimilarity computation, a pleasingly parallel problem suitable for MapReduce-style applications.
SALSA HPC Dynamic Virtual Clusters Demo. At the top, three clusters are switching applications on a fixed environment; this takes ~30 seconds.
At the bottom, one cluster is switching between environments (Linux; Linux + Xen; Windows + HPCS); this takes about 7 minutes.
The demo illustrates the concept of Science on Clouds using a FutureGrid cluster.
FutureGrid: a Grid testbed. IU Cray operational; IU IBM (iDataPlex) completed stability test May 6; UCSD IBM operational; UF IBM stability test completes ~May 12; network, NID, and PU HTC system operational; UC IBM stability test completes ~May 27; TACC Dell awaiting delivery of components. NID: Network Impairment Device. [Figure: FutureGrid network with private and public segments.]
Summary of Initial Results
Cloud technologies (Dryad/Hadoop/Azure/EC2) are promising for biology computations. Dynamic virtual clusters allow one to switch between different modes. The overhead of VMs on Hadoop (15%) is acceptable. Inhomogeneous problems currently favor Hadoop over Dryad. Twister allows iterative problems (classic linear algebra/data mining) to use the MapReduce model efficiently. A prototype of Twister has been released.
References
Twister, open source iterative MapReduce software: www.iterativemapreduce.org
SALSA Project: salsahpc.indiana.edu
FutureGrid Project: futuregrid.org
Sponsors: Microsoft, NIH, NSF, Pervasive Technology Institute
MapReduce and Clouds for Science (http://salsahpc.indiana.edu), Indiana University Bloomington. Judy Qiu, SALSA Group.
The SALSA project (salsahpc.indiana.edu) investigates new programming models of parallel multicore computing and Cloud/Grid computing. It aims at developing and applying parallel and distributed cyberinfrastructure to support large-scale data analysis. We illustrate this with a study of the usability and performance of different cloud approaches. We will develop MapReduce technology for Azure that matches that available on FutureGrid in three stages: AzureMapReduce (where we already have a prototype), AzureTwister, and TwisterMPIReduce. These offer basic MapReduce, iterative MapReduce, and a library mapping a subset of MPI to Twister. They are matched by a set of applications that test the increasing sophistication of the environment and run on Azure, FutureGrid, or in a workflow linking them.
Iterative MapReduce using Java Twister (http://www.iterativemapreduce.org/): Twister supports iterative MapReduce computations and allows MapReduce to achieve higher performance, perform faster data transfers, and reduce the time it takes to process vast sets of data for data mining and machine learning applications. The open source code supports streaming communication and long-running processes. MPI is not generally suitable for clouds, but the subclass of MPI-style operations supported by Twister (namely, the equivalent of MPI-Reduce, MPI-Broadcast (multicast), and MPI-Barrier) have large messages and offer the possibility of reasonable cloud performance. This hypothesis is supported by our comparison of JavaTwister with MPI and Hadoop. Many linear algebra and data mining algorithms need only this MPI subset, and we have used this in our initial choice of evaluating applications. We wish to compare Twister implementations on Azure with MPI implementations (running as a distributed workflow) on FutureGrid. Thus, we introduce a new runtime, TwisterMPIReduce, as a software library on top of Twister, which will map applications using the broadcast/reduce subset of MPI to Twister.
Architecture of TwisterMapReduce on Azure (AzureMapReduce): AzureMapReduce uses Azure Queues for map/reduce task scheduling, Azure Tables for metadata and monitoring data storage, Azure Blob Storage for input/output/intermediate data storage, and Azure Compute worker roles to perform the computations. The map/reduce tasks of the AzureMapReduce runtime are dynamically scheduled using a global queue.
Usability and Performance of Different Cloud and MapReduce Models: the cost effectiveness of cloud data centers, combined with the comparable performance reported here, suggests that loosely coupled science applications will increasingly be implemented on clouds and that using MapReduce will offer convenient user interfaces with little overhead. We present three typical results with two applications (PageRank and SW-G for biological local pairwise sequence alignment) to evaluate the performance and scalability of Twister and AzureMapReduce.
[Figures: architecture of AzureMapReduce; architecture of TwisterMPIReduce; parallel efficiency of the different parallel runtimes for the Smith-Waterman-Gotoh algorithm; total running time for 20 iterations of the PageRank algorithm on ClueWeb data with Twister and Hadoop on 256 cores; performance of AzureMapReduce on Smith-Waterman-Gotoh distance computation as a function of the number of instances used.]
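As an illustration of the queue-driven scheduling just described, here is a minimal sketch of a map worker role loop. TaskQueue, BlobStore, and MetadataTable are hypothetical stand-ins for Azure Queues, Blob Storage, and Tables; none of this is the real Azure SDK or the AzureMapReduce code.

```java
// Hypothetical interfaces standing in for the cloud services named above.
interface TaskQueue { MapTask poll(); void delete(MapTask t); }
interface BlobStore { byte[] download(String name); void upload(String name, byte[] data); }
interface MetadataTable { void markDone(String taskId); }

record MapTask(String taskId, String inputBlob, String outputBlob) {}

class MapWorkerRole {
  private final TaskQueue queue;
  private final BlobStore blobs;
  private final MetadataTable table;

  MapWorkerRole(TaskQueue q, BlobStore b, MetadataTable t) { queue = q; blobs = b; table = t; }

  // Dynamic scheduling: any idle worker pulls the next task from the global queue,
  // which gives the natural load balancing discussed for inhomogeneous data.
  void run() {
    MapTask task;
    while ((task = queue.poll()) != null) {
      byte[] input = blobs.download(task.inputBlob());   // input from blob storage
      byte[] output = mapFunction(input);                // user-supplied map computation
      blobs.upload(task.outputBlob(), output);           // intermediate data back to blobs
      table.markDone(task.taskId());                     // progress/monitoring metadata
      queue.delete(task);                                // task complete, remove from queue
    }
  }

  byte[] mapFunction(byte[] input) { return input; }     // placeholder map computation
}
```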

Editor's Notes (keyed to slide numbers in the original deck)

  • #15: These are emerging technologies, so we cannot draw too many conclusions yet, but all look promising at the moment. Ease of development: Dryad and Hadoop >> EC2 and Azure. Why is Azure worse than EC2 despite fewer lines of code? It is the simplest model.
  • #16: #cores × 1 GHz
  • #22: 10k data size
  • #23: 10k data size
  • #24: Overhead is independent of computation time. As the data size goes up, the overall overhead is reduced.
  • #29: MDS implemented in C#; GTM in R and C/C++
  • #33: Support development of new applications and new middleware using Cloud, Grid, and Parallel computing (Nimbus, Eucalyptus, Hadoop, Globus, Unicore, MPI, OpenMP, Linux, Windows, ...), looking at functionality, interoperability, and performance. Put the "science" back in the computer science of grid computing by enabling replicable experiments. Open source software built around Moab/xCAT to support dynamic provisioning from Cloud to HPC environments and Linux to Windows, with monitoring, benchmarks, and support of important existing middleware. June 2010: initial users; September 2010: all hardware (except the IU shared memory system) accepted and major use starts; October 2011: FutureGrid allocatable via the TeraGrid process.