Low level Java programming 
With examples from OpenHFT 
Peter Lawrey 
CEO and Principal Consultant 
Higher Frequency Trading. 
Presentation to Joker 2014 St Petersburg, October 2014.
About Us… 
Higher Frequency Trading is a small 
consulting and software development house 
specialising in: 
• Low latency, high throughput software 
• 6 developers + 3 staff in Europe and USA. 
• Sponsor HFT related open source projects 
• Core Java engineering
About Me… 
• CEO and Principal Consultant 
• Third on Stack Overflow for Java, 
most Java performance answers. 
• Vanilla Java blog with 3 million views 
• Founder of the Performance Java User's 
Group 
• An Australian, based in the U.K.
What Is The Problem We Solve? 
"I want to be able to read and write my 
data to a persisted, distributed system, 
with the speed of in memory data 
structures"
Agenda 
What are our Open Source products used for? 
Where to start with low latency? 
When you know you have a problem, what can you do?
Chronicle scaling Vertically and Horizontally 
 Shares data structures between processes 
 Replication between machines 
 Built on a low level library, Java-Lang. 
 Millions of operations per second. 
 Micro-second latency. No TCP locally. 
 Synchronous logging to the OS. 
 Apache 2.0 available on GitHub 
 Persisted via the OS.
What is Chronicle Map/Set? 
 Low latency persisted key-value store. 
 Latency between processes around 200 ns. 
 In specialized cases, latencies < 25 ns. 
 Throughputs up to 30 million/second.
What is Chronicle Map/Set? 
 ConcurrentMap or Set interface 
 Designed for reactive programming 
 Replication via TCP and UDP. 
 Apache 2.0 open source library. 
 Pure Java, supported on Windows, Linux, Mac OSX.
What is Chronicle Queue? 
 Low latency journaling and logging. 
 Low latency cross JVM communication. 
 Designed for reactive programming 
 Throughputs up to 40 million/second.
What is Chronicle Queue? 
 Latencies between processes of 200 nano-seconds. 
 Sustain rates of 400 MB/s, peaks much higher. 
 Replication via TCP. 
 Apache 2.0 open source library. 
 Pure Java, supported on Windows, Linux, Mac OSX.
Chronicle monitoring a legacy application
Chronicle journalling multiple applications
Chronicle for low latency trading
Short demo using OSResizesMain 
Note: The “VIRT” virtual memory size is 125t for 125 TB, actual usage 97M 
System memory: 7.7 GB, Extents of map: 137439.0 GB, disk used: 97MB, 
addressRange: 233a777f000-7f33a8000000 
$ ls -lh /tmp/oversized* 
-rw-rw-r-- 1 peter peter 126T Oct 20 17:03 /tmp/over-sized... 
$ du -h /tmp/oversized* 
97M /tmp/over-sized....
Where to start with low latency?
You need to measure first. 
 What is the end to end use case you need to improve? 
 Is it throughput or latency you need to improve? 
 Throughput or average latency hides poor latencies. 
 Avoid co-ordinated omission. See Gil Tene's talks.
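Averages hide the tail, so report percentiles instead. A minimal sketch of a nearest-rank percentile over recorded latencies (the class and method names are illustrative, not from an OpenHFT library):

```java
import java.util.Arrays;

public class LatencyPercentiles {
    // Nearest-rank percentile over a sorted copy of the samples.
    public static long percentile(long[] latencies, double pct) {
        long[] sorted = latencies.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        // One 5000 ns outlier among ~120 ns samples: the mean barely
        // moves, but the 99th percentile exposes it.
        long[] samples = {100, 105, 110, 115, 120, 125, 130, 135, 140, 5000};
        System.out.println("50th: " + percentile(samples, 50)); // 120
        System.out.println("99th: " + percentile(samples, 99)); // 5000
    }
}
```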
Looking at the latency percentiles
Tools to help you measure 
 A commercial profiler. e.g. YourKit. 
 Instrument timings in production. 
 Record and replay production loads. 
 Avoid co-ordinated omission. See Gil Tene's talks. 
 If you can't change the code, Censum can help you 
tune your GC pause times. 
 Azul's Zing “solves” GC pause times, but has many 
other tools to reduce jitter.
What to look for when profiling 
 Reducing the allocation rate is often a quick win. 
 Memory profile to reduce garbage produced. 
 When CPU profiling, leave the memory profiler on. 
 If the profiler is no longer helpful, application 
instrumentation can take it to the next level.
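Application instrumentation can be as simple as timing the critical section with System.nanoTime() and keeping the worst observation, not just an average. A minimal sketch (the names are illustrative):

```java
public class Instrumented {
    private static long worstNanos = 0;

    // Time a task and track the worst latency seen so far.
    static long timed(Runnable task) {
        long start = System.nanoTime();
        task.run();
        long elapsed = System.nanoTime() - start;
        if (elapsed > worstNanos)
            worstNanos = elapsed;
        return elapsed;
    }

    public static void main(String[] args) {
        long elapsed = timed(() -> { /* critical section */ });
        System.out.println("elapsed ns: " + elapsed
                + ", worst ns: " + worstNanos);
    }
}
```

In production you would publish the worst (and percentile) figures rather than printing them, so slow outliers are visible without a profiler attached.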
When you know you have a problem, what 
can you do about it?
Is garbage unavoidable? 
 You can always reduce it further and further, but at 
some point it's not worth it. 
 For a web service, 500 MB/s might be ok. 
 For a trading system, 500 KB/s might be ok. 
 If you produce 250 KB/s it will take you 24 hours 
to fill a 24 GB Eden space.
Common things to tune. 
A common source of garbage is Iterators. 
for (String s : arrayList) { } 
Creates an Iterator, however 
for (int i = 0, len = arrayList.size(); i < len; i++) { 
String s = arrayList.get(i); 
} 
Doesn't create an Iterator.
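The two loop styles above can be compared directly; note the indexed form only makes sense for RandomAccess lists such as ArrayList (for a LinkedList it would be O(n^2)):

```java
import java.util.ArrayList;
import java.util.List;

public class LoopStyles {
    // for-each: allocates an Iterator per loop
    // (unless the JIT's escape analysis removes it)
    static String joinForEach(List<String> list) {
        StringBuilder sb = new StringBuilder();
        for (String s : list)
            sb.append(s);
        return sb.toString();
    }

    // indexed loop: no Iterator allocated
    static String joinIndexed(List<String> list) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0, len = list.size(); i < len; i++)
            sb.append(list.get(i));
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> arrayList = new ArrayList<>(List.of("a", "b", "c"));
        System.out.println(joinForEach(arrayList)); // abc
        System.out.println(joinIndexed(arrayList)); // abc
    }
}
```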
Common things to tune. 
BigDecimal can be a massive source of garbage. 
BigDecimal a = b.add(c) 
.divide(BigDecimal.valueOf(2), 2, ROUND_HALF_UP); 
The same as double produces no garbage. 
double a = round(b + c, 2); 
You have to have a library to support rounding. Without 
it you will get rounding and representation errors.
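One possible shape of such a rounding helper (a hypothetical sketch, not the OpenHFT implementation; it is only valid while d times the scale factor fits comfortably in a long):

```java
public class Rounding {
    // Round half up to a fixed number of decimal places,
    // without allocating any objects.
    public static double round(double d, int precision) {
        double factor = Math.pow(10, precision);
        return Math.round(d * factor) / factor;
    }

    public static void main(String[] args) {
        double b = 0.1, c = 0.2;
        double a = round(b + c, 2);
        System.out.println(a); // 0.3, not 0.30000000000000004
    }
}
```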
Be aware of your memory speeds. 
Access         Concurrency   Clock cycles   Scaled time 
L1 cache       multi-core    4              1 second 
L2 cache       multi-core    10             3 seconds 
L3 cache       socket wide   40-75          15 seconds 
Main memory    system wide   200            50 seconds 
SSD access     system wide   50,000         14 hours 
Local network  network       180,000        2 days 
HDD            system wide   30,000,000     1 year 
To maximise performance you want to spend as much 
time in L3, or ideally L1/L2 caches as possible.
Memory access is faster with less garbage 
Reducing garbage minimises filling your caches with 
garbage. 
If you are producing 300 MB/s of garbage, your L1 cache 
will be filled with garbage in about 100 micro-seconds, 
and your L2 cache in under 1 milli-second. 
The L3 cache and main memory are shared, and the more you 
use them, the less scalability you will get from your multiple cores.
Faster memory access 
 Reduce garbage 
 Reduce pointer chasing 
 Use primitives instead of objects. 
 Avoid false sharing for highly contended mutated values
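A sketch of avoiding false sharing by padding a hot counter out to its own 64-byte cache line, so a neighbouring hot field cannot share the line. (Java 8's @sun.misc.Contended does this for you, but needs -XX:-RestrictContended; field layout here is a JVM-dependent assumption.)

```java
public class PaddedCounter {
    // padding before the hot field
    long p1, p2, p3, p4, p5, p6, p7;
    volatile long value;
    // padding after the hot field
    long q1, q2, q3, q4, q5, q6, q7;

    long increment() {
        return ++value; // note: not atomic; use a CAS loop if contended
    }

    long get() {
        return value;
    }
}
```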
Lock free coding 
 AtomicLong.addAndGet(n) 
 Unsafe.compareAndSwapLong 
 Unsafe.getAndAddLong
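The public equivalent of the Unsafe calls above is a CAS retry loop, which is essentially what AtomicLong.addAndGet(n) does internally:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LockFree {
    // Lock-free add: read, compute, and retry until the
    // compare-and-set wins over any concurrent writers.
    static long addAndGet(AtomicLong counter, long n) {
        long prev, next;
        do {
            prev = counter.get();
            next = prev + n;
        } while (!counter.compareAndSet(prev, next));
        return next;
    }

    public static void main(String[] args) {
        AtomicLong counter = new AtomicLong();
        System.out.println(addAndGet(counter, 5)); // 5
        System.out.println(addAndGet(counter, 3)); // 8
    }
}
```

No thread ever blocks holding a lock, so a descheduled thread cannot stall the others; a failed CAS simply retries.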
Using off heap memory 
 Reduce GC work load. 
 More control over layout 
 Data can be persisted 
 Data can be embedded into multiple processes. 
 Can exploit virtual memory allocation instead of 
main memory allocation.
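A minimal sketch of off-heap storage using a memory-mapped file: the data lives in OS pages rather than the Java heap, so the GC never copies it, the OS persists it, and other processes mapping the same file can share it. (This illustrates the idea with plain NIO, not the Chronicle API.)

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class OffHeapDemo {
    // Write a long into a memory-mapped file and read it back.
    static long roundTrip(long value) throws Exception {
        File file = File.createTempFile("off-heap", ".dat");
        file.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel channel = raf.getChannel()) {
            MappedByteBuffer map =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            map.putLong(0, value);
            return map.getLong(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Long.toHexString(roundTrip(0xCAFEBABEL)));
    }
}
```

Mapping reserves virtual address space, not main memory: pages are only backed by RAM once touched, which is why the earlier demo could map a 125 TB extent while using 97 MB of disk.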
Low latency network connections 
 Kernel bypass network cards, e.g. Solarflare 
 Single hop latencies around 5 micro-seconds 
 Make scaling to more machines practical when tens of 
micro-seconds matter.
Reducing micro-jitter 
 Unless you isolate a CPU, it will get interrupted by the 
scheduler a lot. 
 You can see delays of 1 – 5 ms every hour on an 
otherwise idle machine. 
 On a virtualised machine, you can see delays of 50 ms. 
 The Java Thread Affinity library lets you declaratively 
lay out your critical threads.
Reducing micro-jitter 
Number of interrupts per hour by length.
Q & A 
http://openhft.net/ 
Performance Java User's Group. 
@PeterLawrey 
peter.lawrey@higherfrequencytrading.com