© 2020 OctoML and University of Washington
Introduction to the TVM Open Source Deep Learning Compiler Stack
Luis Ceze
with Tianqi Chen, Thierry Moreau, Jared Roesch, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan Cowan, Chien-Yu Lin, Haichen Shen, Leyuan Wang, Yuwei Hu, Carlos Guestrin, Arvind Krishnamurthy, Zach Tatlock, and many in the Apache TVM community!
A perfect storm
• Growing set of requirements: cost, latency, power, security & privacy
• Cambrian explosion of models, workloads, and use cases (CNN, GAN, RNN, MLP, DQNN)
• Rapidly evolving ML software ecosystem
• Silicon scaling limitations (Dennard and Moore)
• Cambrian explosion of HW backends; heterogeneous HW
Current Dominant Deep Learning Systems Landscape
• Orchestrators: Azure ML, GCP Datalab
• Frameworks and inference engines
• DL compilers: open source, automated end-to-end optimization frameworks for deep learning
• Kernel libraries (hand optimized): cuDNN, NNPack, MKL-DNN
• Hardware
TVM Stack
End-to-end, framework-to-metal open stack, for research and deployment.
Stack layers: High-Level Differentiable IR, Tensor Expression IR, and LLVM, CUDA, Metal, and VTA backends targeting edge FPGA, cloud FPGA, and ASIC.
VTA is an open source synthesizable deep learning accelerator design.
Automated by Machine Learning
The stack layers (High-Level Differentiable IR, Tensor Expression IR, and the LLVM/CUDA/Metal/VTA backends for edge FPGA, cloud FPGA, and ASIC) are tuned by ML-based optimization: AutoTVM and AutoVTA, driven by a hardware fleet.
Reference: TVM: An Automated End-to-End Optimizing Compiler for Deep Learning. Chen et al., OSDI 2018.
End-user perspective: Compile & deploy
import tvm
from tvm import relay

# Import the Keras model into Relay's high-level IR, then compile it for the target.
mod, params = relay.frontend.from_keras(keras_resnet50)
graph, lib, params = relay.build(mod, target, params=params)
Compile → Deploy
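The slide shows only the compile step; a minimal deploy sketch using TVM's graph runtime (API as of TVM ~0.6/0.7; the Keras input name "input_1" and the 1x3x224x224 shape are assumptions for ResNet-50):

import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Load the graph, library, and parameters produced by relay.build above and run inference.
ctx = tvm.cpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.set_input(**params)
module.set_input("input_1", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.run()
top1 = module.get_output(0).asnumpy().argmax()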
Open Source Community and Impact
Open source: 420+ contributors from UW, Berkeley, Cornell, UCLA, Amazon, Huawei, NTT, Facebook, Microsoft, Qualcomm, Alibaba, Intel, …
Incubated as Apache TVM, with independent governance that allows competitors to collaborate.
Used in production at leading companies, e.g. as a deep learning compiler service, for DSP/tensor engines for mobile, for mobile and server optimizations, and for cloud-side model optimization.
Existing Deep Learning Frameworks
Frameworks express a model as a high-level data flow graph of primitive tensor operators such as Conv2D, and offload each operator to a heavily optimized DNN operator library (e.g. cuDNN) for the target hardware.
Engineering costs limit progress
Hand-written kernel libraries such as cuDNN are engineering intensive: a new operator introduced by an operator-fusion optimization (potential benefit: 1.5x speedup) cannot be used by frameworks until the library implements it.
Our approach: Learning-based Learning System
Take the high-level data flow graph and graph-level optimizations from the frameworks, then use a machine learning based program optimizer to directly generate optimized programs for new operator workloads and hardware.
Tensor Compilation/Optimization as a Search Problem
Tensor Expression (Specification):
C = tvm.compute((m, n), lambda y, x: tvm.sum(A[k, y] * B[k, x], axis=k))
This single specification defines a search space of possible program optimizations; each point in that space is a low-level program variant.
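For reference, a self-contained sketch of this expression with a default schedule (using the tvm.te spelling of newer releases; the slide's tvm.compute/tvm.sum names are the older API, and the 1024 shapes are illustrative):

import tvm
from tvm import te

m = n = l = 1024  # illustrative static shapes
A = te.placeholder((l, m), name="A")
B = te.placeholder((l, n), name="B")
k = te.reduce_axis((0, l), name="k")
C = te.compute((m, n), lambda y, x: te.sum(A[k, y] * B[k, x], axis=k), name="C")

# A schedule selects one point in the search space; the default is the naive loop nest.
s = te.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))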
Search Space Example (1/3): Vanilla Code
The same tensor expression lowered naively as a straightforward loop nest, one point in the search space of possible program optimizations.
Search Space Example (2/3): Loop Tiling for Locality
The same tensor expression with its loops tiled so that each block's working set fits in cache.
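A sketch of what the tiled variant looks like as a schedule, continuing the matmul expression above (tile and split factors are illustrative):

# Tile the two output loops into 32x32 blocks, split the reduction loop,
# and reorder so each block's working set stays resident in cache.
s = te.create_schedule(C.op)
yo, xo, yi, xi = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=32, y_factor=32)
ko, ki = s[C].split(C.op.reduce_axis[0], factor=8)
s[C].reorder(yo, xo, ko, ki, yi, xi)
print(tvm.lower(s, [A, B, C], simple_mode=True))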
Search Space Example (3/3): Map to Accelerators
The same tensor expression mapped onto hardware accelerator intrinsics.
Optimization space is really large…
Loop transformations, thread bindings, cache locality, thread cooperation, tensorization, latency hiding: billions of possible optimization choices for a single tensor expression.
Typically this space is explored via human intuition. How can we automate it? Brute-force auto-tuning is too slow.
Problem Formalization
Given a tensor expression, AutoOpt searches the expression's space of optimization configurations. Each configuration is handed to the code generator, which emits a program, and the objective is to minimize that program's cost: its measured execution time.
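In symbols (following the formalization in the Learning to Optimize Tensor Programs paper): for an expression e with search space S_e, a code generator g that turns a configuration s into a program g(e, s), and a cost function f that measures execution time, the goal is to find

s* = argmin_{s ∈ S_e} f(g(e, s))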
Black-box Optimization
Try each configuration (generate the code, run it on the device) until a good one is found.
Challenge: this takes lots of experimental trials, and each trial costs ~1 second.
Cost-model Driven Approach
Use a cost model to pick which configurations to hand to the code generator.
Challenge: this needs a reliable cost model for each hardware target.
Statistical Cost Model
Our approach: use machine learning to learn a statistical cost model, trained on data gathered by running candidate programs on the target.
Benefit: it adapts automatically to the hardware type. The important question is how to design the cost model.
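A minimal sketch of this loop using AutoTVM's XGBoost-based cost model (API roughly as in TVM ~0.7 tuning tutorials; `mod`, `params`, and `target` are the ones from the compile example earlier, and trial counts and log names are illustrative):

from tvm import autotvm, relay

# Extract tunable tasks (one per operator workload) from the Relay module.
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params,
    ops=(relay.op.get("nn.conv2d"),))

# Real measurements on the device provide the training data for the cost model.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.LocalRunner(number=20, repeat=3, timeout=4))

for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)  # learned statistical cost model
    tuner.tune(n_trial=min(1000, len(task.config_space)),
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("tuning.log")])

# Compile using the best configurations found during tuning.
with autotvm.apply_history_best("tuning.log"):
    graph, lib, params = relay.build(mod, target, params=params)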
AutoTVM Overview
AutoTVM explores an expression's search space, sending candidate configurations to the code generator and using the learned statistical cost model to decide what to try next; model inference takes O(microseconds) versus O(seconds) for actually executing a candidate. The cost model is trained on measured runs and combines high-level configurations with statistical features of the low-level abstract syntax tree (AST), fed into your favourite model.
Benefit: the low-level AST is a common, task-invariant representation, so a shared cost model works across operators (Conv2D, Matmul, ...) and transfers to new tasks using historical data from related operators.
Reference: Learning to Optimize Tensor Programs. Chen et al., NeurIPS 2018.
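A sketch of how transfer learning plugs in (same API assumptions as the tuning sketch above; `task` and `measure_option` are defined there, and "history.log" is an assumed log of records from related tasks):

import os
from tvm import autotvm

# "itervar" features are statistical features of the low-level loop AST,
# the task-invariant representation that makes the cost model transferable.
tuner = autotvm.tuner.XGBTuner(task, feature_type="itervar")

# Warm-start the shared cost model with records from previously tuned operators.
if os.path.isfile("history.log"):
    tuner.load_history(autotvm.record.load_from_file("history.log"))

tuner.tune(n_trial=1000,
           measure_option=measure_option,
           callbacks=[autotvm.callback.log_to_file("history.log")])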
Does it work?
AutoTVM with a transferred model beats hand-tuned code after a few minutes of tuning and is about 1.5x faster than hand-tuned code in steady state. Transfer learning makes tuning 3x to 10x faster.
Device Fleet: Distributed Test Bed for AutoTVM
A resource manager (tracker) hands out resource tokens to allocate devices across AutoTVM experiments. Each device, for example an Nvidia GPU server (CUDA), an Android phone (OpenCL), or a Zynq FPGA board (bitstream), runs an RPC runtime, and each experiment holds a persistent remote session to its devices. This scales up optimization and enables resource sharing.
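A minimal sketch of how an experiment plugs into the fleet using TVM's standard RPC tooling (the port 9190 and the device key "android" are illustrative; <tracker-ip> is a placeholder):

# Host: start the resource manager (tracker).
#   python -m tvm.exec.rpc_tracker --host=0.0.0.0 --port=9190
# Each device: start an RPC server that registers with the tracker under a key.
#   python -m tvm.exec.rpc_server --tracker=<tracker-ip>:9190 --key=android

from tvm import autotvm

# AutoTVM then requests matching devices from the tracker instead of running locally.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(build_func="ndk"),  # cross-compile with the Android NDK
    runner=autotvm.RPCRunner("android", host="<tracker-ip>", port=9190,
                             number=10, timeout=10))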
State-of-the-art performance
(Benchmark charts: Nvidia Titan X, ARM GPU (Mali), ARM CPU (Cortex-A53).)
Key point: TVM offers good performance with low manual effort.
TVM Stack, revisited: an end-to-end, framework-to-metal open stack for research and deployment (High-Level Differentiable IR, Tensor Expression IR, and LLVM/CUDA/Metal/VTA backends for edge FPGA, cloud FPGA, and ASIC). The remaining piece is VTA, the open source synthesizable deep learning accelerator design.
DL Accelerator Design Challenges
Models keep changing (CNN, GAN, RNN, MLP, DQNN), which creates three design challenges:
• Keeping up with algorithmic changes (VTA: two-level ISA, templatized design)
• Finding the right generality/efficiency trade-off (VTA: templatized design + HW parameter search)
• Enabling a “day-0” software stack on top (VTA: tight coupling with TVM)
VTA: Open & Flexible Deep Learning Accelerator
The VTA stack sits below the current TVM stack: the VTA runtime & JIT compiler, the VTA hardware/software interface (ISA), and the VTA microarchitecture alongside a VTA simulator.
• Move hardware complexity into software via a two-level ISA
• The runtime JIT-compiles accelerator microcode
• Native support in TVM
• Supports heterogeneous devices (split graph)
• Support for secure execution (soon)
VTA Open Source Deep Learning Accelerator Template
• Decoupled access-execute with explicit software control
• Two-level ISA: the JIT breaks multi-cycle “CISC” instructions into micro-ops
• Enables model retargeting without hardware changes
• Focused on FPGA deployments so far; exploring custom silicon possibilities
Reference: A Hardware-Software Blueprint for Flexible Deep Learning Specialization. Moreau et al., IEEE Micro 2019.
µTVM: Bare-metal model deployment for edge devices
Optimize, compile, and package a model for standalone bare-metal deployment: µTVM takes an ML model, produces an optimized model with optimized operators plus a standalone runtime, and the result is flashed onto an edge device board (ARM, MIPS, RISC-V, ...).
See the recent demo of TVM deployment on Azure Sphere.
Coming Soon: Ultra low bit-width quantization
Automatic quantization yields 5-20x performance gains with reasonable accuracy loss (benchmark: SqueezeNet on a Raspberry Pi 3). TVM supports flexible code generation for a variety of data types.
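As one concrete entry point, Relay ships an automatic quantization pass that rewrites the float model before compilation (a sketch of the int8 API circa TVM 0.6/0.7; the ultra low bit-width kernels this slide refers to go further and are separate research; qconfig values are illustrative):

from tvm import relay

# Quantize the Relay module (mod, params from the earlier compile example);
# qconfig controls bit widths and the calibration scale.
with relay.quantize.qconfig(nbit_weight=8, nbit_activation=32, global_scale=8.0):
    qmod = relay.quantize.quantize(mod, params=params)

# The quantized module then compiles through relay.build like any other model.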
What about training?
• Direct support for training in Apache TVM coming soon!
• Automatic generation of gradient programs via automatic differentiation on the High-Level Differentiable IR (see the sketch below)
• Support for customized data types and training on FPGAs
The same stack (Tensor Expression IR and the LLVM/CUDA/Metal/VTA backends for edge FPGA, cloud FPGA, and ASIC) then produces standalone training deployments alongside standalone inference deployments.
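A toy sketch of the automatic differentiation already available in Relay (relay.transform.gradient; the shape and the x * x function are purely illustrative):

import tvm
from tvm import relay

# A tiny Relay function: f(x) = x * x (elementwise).
x = relay.var("x", shape=(10,), dtype="float32")
func = relay.Function([x], x * x)
func = relay.transform.InferType()(tvm.IRModule.from_expr(func))["main"]

# 'gradient' returns a new function computing (f(x), (df/dx,)).
back_func = relay.transform.gradient(func)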
Other Ongoing TVM Efforts
• Autoscheduling (Zheng et al., OSDI ’20, UC Berkeley)
• Automatic synthesis of operator implementations (Cowan et al., CGO ’20, UW)
• Sparse support (NLP, graph convolutional neural networks, etc…)
• Secure enclaves
• …
• Join the community!
https://tvm.ai
• Video tutorials
• iPython notebook tutorials
The 2nd TVM conference (Dec 5, 2019) drew 200+ people; the 3rd TVM conference is on Dec 3-4, 2020: https://tvmconf.org
https://octoml.ai
What I would like you to remember…
TVM is an emerging open source standard for ML compilation and optimization. TVM offers:
• Improved time to market for ML
• Performance
• Unified support for CPUs, GPUs, and accelerators
• On the framework of your choice
OctoML is here to help you succeed in your ML deployment needs.
TVM is an end-to-end, framework-to-metal open stack for research and deployment: High-Level Differentiable IR, Tensor Expression IR, and LLVM/CUDA/Metal/VTA backends for edge FPGA, cloud FPGA, and ASIC.
