SPIDAL Java: High Performance Data Analytics with Java on Large Multicore HPC Clusters
sekanaya@indiana.edu
https://github.com/DSC-SPIDAL | http://saliya.org
24th High Performance Computing Symposium (HPC 2016)
April 3-6, 2016, Pasadena, CA, USA
as part of the SCS Spring Simulation Multi-Conference (SpringSim'16)
Saliya Ekanayake | Supun Kamburugamuve | Geoffrey Fox
High Performance?
[Speedup chart: scaling from 48 to 128 nodes on an Intel Haswell HPC cluster with 40Gbps Infiniband. SPIDAL Java reaches ~40x speedup, versus typical Java with all MPI or with threads and MPI; the ideal would be 64x (if life was so fair!). How this is achieved is what we'll discuss today.]
Introduction
• Big Data and HPC
 Big data + cloud is the norm, but not always
 Some applications demand significant computation and communication
 HPC clusters are ideal
 However, it’s not easy
• Java
 Unprecedented big data ecosystem
 Apache has over 300 big data systems, mostly written in Java
 Performance and APIs have improved greatly with Java 1.7 and later
 Not much used in HPC, but can give comparable performance (e.g. SPIDAL Java)
 Comparable performance to C (more on this later)
 A Google query for "java faster than c" (https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=java%20faster%20than%20c) points to interesting discussions on why.
 Interoperable and productive
SPIDAL Java
• Scalable Parallel Interoperable Data Analytics Library (SPIDAL)
 Available at https://github.com/DSC-SPIDAL
• Includes Multidimensional Scaling (MDS) and Clustering Applications
 DA-MDS
 Y. Ruan and G. Fox, "A Robust and Scalable Solution for Interpolative Multidimensional Scaling with Weighting," 2013 IEEE 9th International Conference on eScience, Beijing, 2013, pp. 61-69. doi: 10.1109/eScience.2013.30
 DA-PWC
 G. C. Fox, "Deterministic Annealing and Robust Scalable Data Mining for the Data Deluge," Proceedings of the 2nd International Workshop on Petascale Data Analytics: Challenges and Opportunities (PDAC '11), ACM, New York, NY, USA, 2011, pp. 39-40.
 DA-VS
 G. Fox, D. Mani, and S. Pyne, "Parallel Deterministic Annealing Clustering and Its Application to LC-MS Data Analysis," 2013 IEEE International Conference on Big Data, Oct 2013, pp. 665-673.
 MDSasChisq
 General MDS implementation using the Levenberg-Marquardt algorithm
 K. Levenberg, "A Method for the Solution of Certain Non-Linear Problems in Least Squares," Quarterly of Applied Mathematics II, 2 (1944), pp. 164-168.
SPIDAL Java Applications
• Gene Sequence Clustering and Visualization
 Results at WebPlotViz
 https://spidal-gw.dsc.soic.indiana.edu/resultsets/991946447
 https://spidal-gw.dsc.soic.indiana.edu/resultsets/795366853
 A few snapshots
[Pipeline: Sequence File → Pairwise Alignment → DA-MDS / DA-PWC → 3D Plot]
[Snapshots: 100,000 fungi sequences, 3D phylogenetic tree, 3D plot of vector data]
SPIDAL Java Applications
• Stocks Data Analysis
 Time series view of stocks
 E.g. with a 1-year moving window: https://spidal-gw.dsc.soic.indiana.edu/public/timeseriesview/825496517
Performance Challenges
• Intra-node Communication
• Exploiting Fat Nodes
• Overhead of Garbage Collection
• Cost of Heap Allocated Objects
• Cache and Memory Access
Performance Challenges
• Intra-node Communication [1/3]
 Large core counts per node – 24 to 36
 Data analytics use global collective communication – Allreduce, Allgather, Broadcast, etc.
 HPC simulations, in contrast, typically use local communications for tasks like halo exchanges.
[Benchmark: allreduce of 3 million double values distributed uniformly over 48 nodes]
• Identical message size per node, yet 24 MPI ranks per node is ~10 times slower than 1 rank per node
• Suggests the number of ranks per node should be 1 for the best performance
• But then how to exploit all available cores? (A minimal allreduce benchmark sketch follows.)
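The benchmark above can be reproduced with a short allreduce timing loop. This is a minimal sketch assuming the Open MPI Java bindings (package mpi, with MPI.Init, getRank, allReduce, MPI.DOUBLE, MPI.SUM); the message split, iteration count, and class name are illustrative, not the exact harness behind these numbers.

```java
import mpi.MPI;
import mpi.MPIException;

// Times an in-place MPI allreduce on a slice of ~3 million doubles.
public class AllreduceBench {
    public static void main(String[] args) throws MPIException {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();

        // Roughly 3 million doubles split evenly across all ranks (illustrative split).
        int perRank = 3_000_000 / size;
        double[] values = new double[perRank];
        java.util.Arrays.fill(values, rank + 1.0);

        int iterations = 100;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            // In-place allreduce: every rank ends with the element-wise sum.
            MPI.COMM_WORLD.allReduce(values, perRank, MPI.DOUBLE, MPI.SUM);
        }
        long elapsed = System.nanoTime() - start;

        if (rank == 0) {
            System.out.printf("avg allreduce time: %.3f ms%n", elapsed / 1e6 / iterations);
        }
        MPI.Finalize();
    }
}
```

Running this with 1 rank per node versus 24 ranks per node (same total data) is what exposes the slowdown shown in the chart.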
Performance Challenges
• Intra-node Communication [2/3]
 Solution: Shared memory
 Use threads? → didn't work well (explained shortly)
 Processes with shared memory communication
 Custom implementation in SPIDAL Java outside of MPI framework
• Only 1 rank per node participates in the MPI collective call
• Others exchange data using shared memory maps
[Charts: communication times for 100K and 200K DA-MDS runs]
Performance Challenges
• Intra-node Communication [3/3]
 Heterogeneity support
 Nodes with 24 and 36 cores
 Automatically detects configuration and allocates memory maps
 Implementation
 Custom shared memory implementation using OpenHFT’s Bytes API
 Supports collective calls necessary within SPIDAL Java
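SPIDAL Java's shared-memory collectives are built on OpenHFT's Bytes API; the sketch below shows the same idea with plain java.nio memory maps instead, so the class name, slot layout, and methods are assumptions for illustration. The synchronization a real implementation needs between the write and read phases (barriers) is omitted.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Every process on a node maps the same file; each writes its partial vector
// into its own slot, and the node leader (local rank 0) reads all slots and
// performs the inter-node MPI collective on the combined data.
public class SharedMemoryExchange {
    private final MappedByteBuffer buffer;
    private final int localRank;
    private final int doublesPerRank;

    public SharedMemoryExchange(Path file, int localRank, int localSize,
                                int doublesPerRank) throws IOException {
        this.localRank = localRank;
        this.doublesPerRank = doublesPerRank;
        long bytes = (long) localSize * doublesPerRank * Double.BYTES;
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            this.buffer = ch.map(FileChannel.MapMode.READ_WRITE, 0, bytes);
        }
    }

    // Each rank writes its partial result into its own slot of the map.
    public void writePartial(double[] partial) {
        int offset = localRank * doublesPerRank * Double.BYTES;
        for (int i = 0; i < doublesPerRank; i++) {
            buffer.putDouble(offset + i * Double.BYTES, partial[i]);
        }
    }

    // The node leader sums all slots before calling the MPI collective.
    public double[] reduceOnLeader(int localSize) {
        double[] sum = new double[doublesPerRank];
        for (int r = 0; r < localSize; r++) {
            int offset = r * doublesPerRank * Double.BYTES;
            for (int i = 0; i < doublesPerRank; i++) {
                sum[i] += buffer.getDouble(offset + i * Double.BYTES);
            }
        }
        return sum;
    }

    public static void main(String[] args) throws IOException {
        // Toy usage: two "ranks" sharing one file (normally separate processes).
        Path file = Paths.get(System.getProperty("java.io.tmpdir"), "spidal-sm.bin");
        SharedMemoryExchange r0 = new SharedMemoryExchange(file, 0, 2, 4);
        SharedMemoryExchange r1 = new SharedMemoryExchange(file, 1, 2, 4);
        r0.writePartial(new double[]{1, 2, 3, 4});
        r1.writePartial(new double[]{10, 20, 30, 40});
        System.out.println(java.util.Arrays.toString(r0.reduceOnLeader(2)));
    }
}
```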
Performance Challenges
• Exploiting Fat Nodes [1/2]
 Large #Cores per Node
 E.g. 1 Node in Juliet HPC cluster
 2 Sockets
 12 Cores each
 2 Hardware threads per core
 L1 and L2 per core
 L3 shared per socket
 Two approaches
 All processes → 1 process per core
 1 process with multiple threads
 Which is better?
[Diagram: a node with Socket 0 and Socket 1; each core has 2 hardware threads]
Performance Challenges
• Exploiting Fat Nodes [2/2]
 Suggested thread model in literature → fork-join regions within a process
[Diagram: fork-join thread regions within a process, repeated over iterations; contrast with the process-based approach from the previous optimization]
1. Thread creation and scheduling overhead accumulates over iterations and is significant (~5%)
• True for Java threads as well as OpenMP in C/C++ (see https://github.com/esaliya/JavaThreads and https://github.com/esaliya/CppStack/tree/master/omp2/letomppar)
2. Long-running threads (sketched below) do better than this model, but still have non-negligible overhead
3. The solution is to use processes with shared memory communications, as in SPIDAL Java
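For contrast with per-iteration fork-join, here is a minimal sketch of the long-running-thread model using plain java.util.concurrent (the threaded experiments in SPIDAL Java actually used Habanero Java); the per-iteration work and thread count are placeholders.

```java
import java.util.concurrent.CyclicBarrier;

// Workers are created once and synchronized with a barrier each iteration,
// so thread creation/scheduling cost is not paid per iteration.
public class LongRunningWorkers {
    public static void main(String[] args) throws InterruptedException {
        int numThreads = 4;          // illustrative; one per core in practice
        int iterations = 1000;
        double[] partialSums = new double[numThreads];

        // Barrier releases when every worker finishes its slice of the iteration.
        CyclicBarrier barrier = new CyclicBarrier(numThreads);

        Thread[] workers = new Thread[numThreads];
        for (int t = 0; t < numThreads; t++) {
            final int tid = t;
            workers[t] = new Thread(() -> {
                try {
                    for (int iter = 0; iter < iterations; iter++) {
                        // Per-iteration compute on this thread's slice (placeholder work).
                        partialSums[tid] += tid + iter * 0.001;
                        barrier.await();   // iteration boundary, no thread re-spawn
                    }
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        System.out.println(java.util.Arrays.toString(partialSums));
    }
}
```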
Performance Challenges
• Garbage Collection
 “Stop the world” events are expensive
 Especially for parallel processes with collective communications
 Typical OOP → allocate, use, forget
 Original SPIDAL code produced frequent garbage of small arrays
 Solution: Zero-GC using
 Static allocation and reuse
 Off-heap static buffers (more on next slide)
 Advantage
 No GC – obvious
 Scale to larger problem sizes
 E.g. the original SPIDAL code required 5GB of heap per process (x 24 = 120GB per node) to handle 200K MDS. The optimized code uses < 1GB of heap to finish within the same timing.
 Note: physical memory is 128GB, so the optimized SPIDAL can now do a 1-million-point MDS within hardware limits.
[Heap charts: before optimizing, heap size per process reaches -Xmx (2.5GB) early in the computation and GC is frequent; after optimizing, heap size per process stays well below -Xmx at ~1.1GB and there is virtually no GC activity]
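A minimal sketch of the static-allocation-and-reuse pattern behind the zero-GC behavior; the class name, sizes, and the work inside the loop are illustrative, not the actual DA-MDS buffers.

```java
// Working arrays are allocated once up front and reused every iteration,
// so steady-state execution creates no garbage.
public class ZeroGcLoop {
    private static final int N = 1_000;
    // Allocated once; never replaced, only overwritten.
    private static final double[] partial = new double[N];
    private static final double[] result = new double[N];

    // Anti-pattern (shown for contrast): a fresh array per call becomes
    // garbage on every iteration.
    static double[] computeFresh(double scale) {
        double[] out = new double[N];
        for (int i = 0; i < N; i++) out[i] = i * scale;
        return out;
    }

    // Zero-GC version: writes into the preallocated buffer in place.
    static void computeInPlace(double scale) {
        for (int i = 0; i < N; i++) partial[i] = i * scale;
    }

    public static void main(String[] args) {
        for (int iter = 0; iter < 10_000; iter++) {
            computeInPlace(iter * 1e-4);            // no allocation inside the loop
            System.arraycopy(partial, 0, result, 0, N);
        }
        System.out.println(result[N - 1]);
    }
}
```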
Performance Challenges
• I/O with Heap Allocated Objects
 Java-to-native I/O creates copies of objects in the heap
 Otherwise the object's memory location can't be guaranteed due to GC
 Too expensive
 Solution: Off-heap buffers (memory maps), used in three places (see the sketch after this list)
 Initial data loading → significantly faster than Java stream API calls
 Intra-node messaging → gives the best performance
 MPI inter-node communications
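A minimal sketch of the off-heap loading idea using java.nio memory maps; SPIDAL Java's actual buffers go through OpenHFT's Bytes API and are also handed to MPI as direct buffers, which is not shown here. The file layout (row-major little-endian doubles), class, and method names are assumptions for illustration.

```java
import java.io.IOException;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Memory-maps a block of a binary file of doubles and exposes it as an
// off-heap DoubleBuffer view, avoiding per-value stream reads and heap copies.
public final class OffHeapLoader {
    private OffHeapLoader() { }

    // Maps [offsetDoubles, offsetDoubles + countDoubles) of the file.
    public static DoubleBuffer mapDoubles(Path file, long offsetDoubles, int countDoubles)
            throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_ONLY,
                    offsetDoubles * Double.BYTES,
                    (long) countDoubles * Double.BYTES);
            mapped.order(ByteOrder.LITTLE_ENDIAN);
            return mapped.asDoubleBuffer();   // off-heap view, no copy into heap arrays
        }
    }

    // Example: each rank maps only its own block of rows of an n x n distance matrix.
    public static DoubleBuffer mapRowBlock(Path file, int n, int firstRow, int rowCount)
            throws IOException {
        return mapDoubles(file, (long) firstRow * n, rowCount * n);
    }
}
```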
• Cache and Memory Access
 Nested data structures are neat, but expensive
 Solution: Contiguous memory with 1D arrays instead of 2D structures
 Indirect memory references are costly
 Also adopted from HPC: blocked loops and careful loop ordering (see the sketch below)
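A minimal sketch of the flat 1D layout and loop blocking; the block size and the transpose kernel are illustrative, not the DA-MDS inner loop itself.

```java
// A 2D matrix is stored as one contiguous 1D row-major array, and a transpose
// is done tile by tile so both source and destination tiles stay in cache.
public class BlockedTranspose {
    // Row-major element access on a flat array: one indirection, contiguous rows.
    static double at(double[] m, int n, int row, int col) {
        return m[row * n + col];
    }

    static void transpose(double[] src, double[] dst, int n, int block) {
        for (int ib = 0; ib < n; ib += block) {
            for (int jb = 0; jb < n; jb += block) {
                int iMax = Math.min(ib + block, n);
                int jMax = Math.min(jb + block, n);
                for (int i = ib; i < iMax; i++) {
                    for (int j = jb; j < jMax; j++) {
                        // Writes stay within the current tile of dst instead of
                        // striding by n on every iteration.
                        dst[j * n + i] = src[i * n + j];
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        int n = 1024;
        double[] a = new double[n * n];      // instead of double[n][n]
        double[] b = new double[n * n];
        for (int i = 0; i < a.length; i++) a[i] = i;
        transpose(a, b, n, 64);              // 64 is an illustrative block size
        System.out.println(at(b, n, 3, 5) == at(a, n, 5, 3));  // true
    }
}
```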
Evaluation
• HPC Cluster
 128 Intel Haswell nodes with 2.3GHz nominal frequency
 96 nodes with 24 cores on 2 sockets (12 cores each)
 32 nodes with 36 cores on 2 sockets (18 cores each)
 128GB memory per node
 40Gbps Infiniband
• Software
 Java 1.8
 OpenHFT JavaLang 6.7.2
 Habanero Java 0.1.4
 OpenMPI 1.10.1
Application: DA-MDS
• Computation grows as O(N²)
• Communication is global and grows as O(N)
[Charts: 100K, 200K, and 400K DA-MDS runs]
• 1152-way total parallelism across 48 nodes
• All combinations of 24-way parallelism per node
• LHS is all processes; RHS is all internal threads with MPI across nodes
1. With shared-memory communications in SPIDAL Java, processes outperform threads (blue line)
2. The other optimizations further improve performance (green line)
• Speedup for varying data sizes, all processes
• LHS is 1 process per node across 48 nodes; RHS is 24 processes per node across 128 nodes (ideal 64x speedup)
• Larger data sizes show better speedup (400K – 45x, 200K – 40x, 100K – 38x)

• Speedup on 36-core nodes, all processes
• LHS is 1 process per node across 32 nodes; RHS is 36 processes per node across 32 nodes (ideal 36x speedup)
• Speedup plateaus around 23x after 24-way parallelism per node
The effect of different optimizations on speedup
Java, is it worth it? – YES!
Also, with JIT, some cases in MDS are better than C
  • 18. 4/4/2016 HPC 2016 18 The effect of different optimizations on speedup Java, is it worth it? – YES! Also, with JIT some cases in MDS are better than C