Lecture Notes on Quantization
for
Open Educational Resource
on
Data Compression (CA209)
by
Dr. Piyush Charan
Assistant Professor
Department of Electronics and Communication Engg.
Integral University, Lucknow
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Unit 5-Syllabus
• Quantization
– Vector Quantization,
– Advantages of Vector Quantization over Scalar
Quantization,
– The Linde-Buzo-Gray Algorithm,
– Tree-structured Vector Quantizers,
– Structured Vector Quantizers
Introduction
• Quantization is one of the most effective tools for lossy compression.
• It reduces the number of bits required to represent the source.
• In lossy compression applications, we represent each source output using one of a small number of codewords.
• The number of distinct source output values is generally much larger than the number of codewords available to represent them.
• The process of mapping this large set of distinct source output values onto a much smaller set is called quantization.
Introduction contd…
• The inputs and outputs of a quantizer can be either scalars or vectors.
Types of Quantization
• Scalar Quantization: The most common type of quantization is scalar quantization. Scalar quantization, typically denoted y = Q(x), is the process of using a quantization function Q(·) to map an input value x to a scalar output value y.
• Vector Quantization: A vector quantizer maps k-dimensional vectors in the vector space R^k into a finite set of vectors Y = {Yi : i = 1, 2, …, N}. Each vector Yi is called a code vector or codeword, and the set of all codewords is called a codebook.
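To make the notation concrete, here is a minimal sketch (not from the slides) of a uniform scalar quantizer y = Q(x); the step size of 0.5 is an arbitrary choice for illustration.

```python
import numpy as np

def scalar_quantize(x, step=0.5):
    """Uniform scalar quantizer y = Q(x): round each sample to the nearest
    multiple of `step`."""
    return step * np.round(np.asarray(x, dtype=float) / step)

print(scalar_quantize([0.23, 1.71, -0.4]))   # [ 0.   1.5 -0.5]
```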
Vector Quantization
• VQ is a lossy data compression method based on the principle of block coding: it quantizes blocks of data instead of individual signal samples.
• VQ exploits the correlation between neighboring signal samples by quantizing them together.
• VQ is one of the most widely used and efficient techniques for image compression.
• Over the last few decades, VQ has received great attention in multimedia data compression because it has a simple decoding structure and can provide high compression ratios.
Vector Quantization contd…
• VQ-based image compression has three major steps, namely:
1. Codebook design
2. VQ encoding
3. VQ decoding
• In VQ-based image compression, the image is first decomposed into non-overlapping sub-blocks, and each sub-block is converted into a one-dimensional vector called a training vector.
• From the training vectors, a set of representative vectors is selected to represent the entire training set.
• The set of representative vectors is called the codebook, and each representative vector is called a codeword or code vector.
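As an illustration of the decomposition step, the following sketch splits a grayscale image into non-overlapping sub-blocks and flattens each into a training vector; the image size and the 4×4 block size are assumptions made for the example.

```python
import numpy as np

def image_to_training_vectors(img, block=4):
    """Split a grayscale image into non-overlapping block x block sub-blocks
    and flatten each sub-block into a one-dimensional training vector."""
    h, w = img.shape
    h, w = h - h % block, w - w % block              # crop so dimensions divide evenly
    img = img[:h, :w]
    blocks = img.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    return blocks.reshape(-1, block * block)          # one row per training vector

# Example with a random 8-bit "image" (illustrative only)
img = np.random.randint(0, 256, size=(64, 64))
vectors = image_to_training_vectors(img, block=4)
print(vectors.shape)   # (256, 16): 256 training vectors of dimension L = 16
```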
Vector Quantization contd…
• The goal of VQ codebook generation is to find an optimal codebook that yields the lowest possible distortion compared with all other codebooks of the same size.
• The performance of a VQ-based image compression technique depends on the constructed codebook.
• The search complexity increases with the number of vectors in the codebook; to reduce the search complexity, tree-search vector quantization schemes were introduced.
• The number of code vectors N depends on two parameters: the rate R and the dimension L.
• The number of code vectors is calculated using the following formula:
Number of code vectors (N) = 2^(R×L)
where
R → rate in bits/pixel,
L → dimension (number of samples per vector).
• For example, quantizing 4×4 image blocks (L = 16) at a rate of R = 0.5 bits/pixel gives N = 2^8 = 256 code vectors.
Vector Quantization Process
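A minimal sketch of the encode/decode process: the encoder transmits only the index of the nearest codeword, and the decoder reconstructs the block by a simple table lookup in an identical copy of the codebook. The small 2-D codebook and input vectors below are illustrative values, not from the slides.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Return, for each input vector, the index of the nearest codeword
    (minimum squared Euclidean distance)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct vectors by looking up the transmitted indices."""
    return codebook[indices]

# Illustrative 2-D codebook with N = 4 codewords
codebook = np.array([[0., 0.], [1., 1.], [1., -1.], [-1., 1.]])
data = np.array([[0.9, 1.2], [-0.2, 0.1], [0.8, -1.1]])
idx = vq_encode(data, codebook)           # [1 0 2]
print(idx, vq_decode(idx, codebook))
```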
Difference between Vector and Scalar
Quantization
• ⇢ 1: Vector Quantization can lower the average distortion while keeping the number of reconstruction levels constant, whereas Scalar Quantization cannot.
• ⇢ 2: Vector Quantization can reduce the number of reconstruction levels while holding the distortion constant, whereas Scalar Quantization cannot.
• ⇢ 3: The most significant way Vector Quantization improves performance over Scalar Quantization is by exploiting the statistical dependence among the scalars in a block.
• ⇢ 4: Vector Quantization is also more effective than Scalar Quantization even when the source output values are not correlated.
Difference between Vector and Scalar
Quantization contd…
• ⇢ 5: In Scalar Quantization (one dimension), the quantization regions are restricted to be intervals (so the output points are restricted to a rectangular grid), and the only parameter we can manipulate is the size of the interval. In Vector Quantization, when we divide the input into vectors of some length n, the quantization regions are no longer restricted to be rectangles or squares; we have the freedom to divide the range of the inputs in an infinite number of ways.
• ⇢ 6: In Scalar Quantization, the granular error is affected only by the size of the quantization interval, while in Vector Quantization the granular error is affected by both the shape and the size of the quantization region.
• ⇢ 7: Vector Quantization provides more flexibility towards modifications than Scalar Quantization, and this flexibility increases with increasing dimension.
Difference between Vector and Scalar
Quantization contd…
• ⇢ 8: Vector Quantization gives improved performance when there is sample-to-sample dependence in the input, while Scalar Quantization does not.
• ⇢ 9: Vector Quantization gives improved performance even when there is no sample-to-sample dependence in the input, while Scalar Quantization does not.
• ⇢ 10: Describing the decision boundaries between reconstruction levels is easier in Scalar Quantization than in Vector Quantization.
Advantages of Vector Quantization
over Scalar Quantization
• Vector Quantization provides flexibility in choosing multidimensional quantizer cell shapes and in choosing a desired codebook size.
• An advantage of VQ over SQ is that fractional values of resolution (bits per sample) are achievable, which is very important for low-bit-rate applications where low resolution is sufficient.
• For a given rate, VQ results in a lower distortion than SQ.
• VQ can exploit the memory of the source better than SQ.
Linde-Buzo-Gray Algorithm
• The need for multi-dimensional integration in the design of a vector quantizer was a challenging problem in the early days.
• The main idea is to divide the training vectors into groups, find the most representative vector of each group, and then gather these representative vectors to form the codebook. The inputs are no longer scalars in the LBG algorithm.
LBG Algorithm
1. Divide the image into blocks; each block is then viewed as a k-dimensional vector.
2. Arbitrarily choose an initial codebook and treat these initial code vectors as centroids. The remaining vectors are then grouped: vectors belong to the same group when they share the same nearest centroid.
3. Find the new centroid of every group to obtain a new codebook. Repeat steps 2 and 3 until the centroids of all groups converge.
• Thus, at every iteration the codebook becomes progressively better. This process is continued until there is no change in the overall distortion.
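A minimal sketch of these steps in the generalized-Lloyd (k-means style) form of the LBG iteration; the random initial codebook, the synthetic training data, and the convergence tolerance are assumptions made for the example.

```python
import numpy as np

def lbg(training, N, iters=50, tol=1e-6, seed=0):
    """LBG codebook design: assign each training vector to its nearest codeword,
    move each codeword to the centroid of its group, and repeat until the
    overall distortion stops changing."""
    rng = np.random.default_rng(seed)
    # Step 2: arbitrary initial codebook (here: N random training vectors).
    codebook = training[rng.choice(len(training), N, replace=False)].astype(float)
    prev_distortion = np.inf
    for _ in range(iters):
        # Group vectors by nearest centroid (squared Euclidean distance).
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        distortion = d[np.arange(len(training)), nearest].mean()
        # Step 3: the new centroid of every group becomes the new codeword.
        for i in range(N):
            members = training[nearest == i]
            if len(members):                 # (empty-cell handling is discussed later)
                codebook[i] = members.mean(axis=0)
        if abs(prev_distortion - distortion) < tol:
            break                            # overall distortion has converged
        prev_distortion = distortion
    return codebook

# Illustrative usage: 16-dimensional training vectors (e.g. 4x4 blocks), 16 codewords
training = np.random.default_rng(1).normal(size=(1000, 16))
print(lbg(training, N=16).shape)             # (16, 16)
```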
Initializing the LBG Algorithm
• An important consideration is a good set of initial quantization points: the LBG algorithm only guarantees that the distortion from one iteration to the next will not increase, so the quality of the final codebook depends on where we start.
• The performance of the LBG algorithm therefore depends heavily on the initial codebook.
• Common ways to obtain an initial codebook are:
1. Random selection (Hilbert technique)
2. Pairwise Nearest Neighbor (PNN) method
• Here, we will use the splitting technique to design the initial codebook (sketched below).
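A minimal sketch of the splitting technique, assuming an additive perturbation and a fixed number of refinement passes per split; both are arbitrary choices made for illustration.

```python
import numpy as np

def lbg_refine(training, codebook, iters=20):
    """Refine a given codebook with a fixed number of LBG (Lloyd) passes."""
    codebook = codebook.astype(float).copy()
    for _ in range(iters):
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        for i in range(len(codebook)):
            members = training[nearest == i]
            if len(members):
                codebook[i] = members.mean(axis=0)
    return codebook

def splitting_init(training, N, eps=0.01):
    """Splitting technique: grow the codebook 1 -> 2 -> 4 -> ... -> N codewords,
    splitting every codeword into a perturbed pair and refining with LBG."""
    codebook = training.mean(axis=0, keepdims=True)          # start from the global centroid
    while len(codebook) < N:
        codebook = np.vstack([codebook + eps, codebook - eps])   # split each codeword
        codebook = lbg_refine(training, codebook)                # refine after each split
    return codebook[:N]

# Illustrative usage
training = np.random.default_rng(2).normal(size=(500, 16))
print(splitting_init(training, N=8).shape)                   # (8, 16)
```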
Empty Cell Problem
• What do we do if one of the reconstruction or quantization regions in some iteration is empty?
• There might be no input points that are closer to a given reconstruction point than to any other reconstruction point.
• This is a problem because, in order to update an output point, we need to take the average of the input vectors assigned to it.
• In this case we would end up with an output point that is never used.
• A common solution to the empty cell problem is to remove the output point that has no inputs associated with it and replace it with a point from the quantization region with the most training points.
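A minimal sketch of this fix, assuming the replacement point is a randomly chosen training vector from the most populated region (the slides do not specify exactly how the point is picked).

```python
import numpy as np

def fix_empty_cells(training, codebook, seed=0):
    """Replace every codeword whose quantization region is empty with a training
    vector drawn from the region that currently has the most training points.
    The codebook (a float array) is modified in place and returned."""
    rng = np.random.default_rng(seed)
    d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d.argmin(axis=1)
    counts = np.bincount(nearest, minlength=len(codebook))
    busiest = counts.argmax()                   # region with the most training points
    donors = training[nearest == busiest]
    for i in np.where(counts == 0)[0]:          # every empty cell
        codebook[i] = donors[rng.integers(len(donors))]
    return codebook
```

After this replacement, the LBG iteration simply continues as before.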
Tree-Structured Vector Quantization
• Another fast codebook design technique, tree-structured VQ, was presented by Buzo.
• The number of operations can be reduced by enforcing a certain structure on the codebook.
• One such possibility is a tree structure, which turns the codebook into a tree codebook; the method is then called binary search clustering.
Tree-Structured Vector Quantization contd…
• The disadvantage of a tree search is that we might not end up with the reconstruction point closest to the input, so the distortion will be somewhat higher than for a full-search quantizer.
• The storage requirement will also be larger, since we have to store all the test vectors as well.
How to design TSVQ
1. Obtain the average of all the training vectors, perturb it to obtain a second vector, and use these two vectors to form a two-level VQ.
2. Call these vectors v0 and v1, and call the groups of training vectors that would be quantized to each of them g0 and g1.
3. Perturb v0 and v1 to get the initial vectors for a four-level VQ.
4. Use g0 to design one two-level VQ and g1 to design another two-level VQ.
5. Label the resulting vectors v00, v01, v10, v11.
6. Split g0 using v00 and v01 into two groups g00 and g01.
7. Split g1 using v10 and v11 into two groups g10 and g11.
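A minimal sketch of this recursive design together with the corresponding tree-search encoder; the additive perturbation, the fixed number of refinement passes, and the dictionary-based tree representation are simplifications assumed for the example.

```python
import numpy as np

def design_tsvq(training, depth, eps=0.01, iters=15):
    """Recursively design a binary TSVQ: at each node, perturb the group mean
    into a pair of test vectors (v0, v1), refine them with a 2-level LBG pass,
    then recurse on the two resulting groups (g0, g1)."""
    node = {"centroid": training.mean(axis=0)}
    if depth == 0 or len(training) < 2:
        return node                                        # leaf node
    pair = np.vstack([node["centroid"] + eps, node["centroid"] - eps])
    for _ in range(iters):                                 # 2-level LBG refinement
        d = ((training[:, None, :] - pair[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        for i in range(2):
            if np.any(nearest == i):
                pair[i] = training[nearest == i].mean(axis=0)
    if not (np.any(nearest == 0) and np.any(nearest == 1)):
        return node                                        # degenerate split: stop here
    node["tests"] = pair                                   # test vectors at this node
    node["children"] = [design_tsvq(training[nearest == i], depth - 1, eps, iters)
                        for i in range(2)]
    return node

def tsvq_encode(x, node):
    """Tree search: at each node pick the closer test vector, descend, and
    return the path bits plus the leaf centroid used for reconstruction."""
    bits = []
    while "children" in node:
        b = int(((x - node["tests"][1]) ** 2).sum() <
                ((x - node["tests"][0]) ** 2).sum())
        bits.append(b)
        node = node["children"][b]
    return bits, node["centroid"]

# Illustrative usage: depth 3 gives at most 2^3 = 8 leaf codewords
training = np.random.default_rng(3).normal(size=(2000, 4))
tree = design_tsvq(training, depth=3)
bits, recon = tsvq_encode(training[0], tree)
print(bits, recon)
```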
Pruned Tree-Structured Vector Quantizer
• Once we have developed a tree-structured codebook, we can improve its rate-distortion performance by pruning: removing carefully selected subgroups reduces the size of the codebook and thus the rate.
• However, pruning may increase the distortion, so the main objective of pruning is to remove those groups that give the best trade-off between rate and distortion.
• We prune the tree by finding the subtree T that minimizes λ_T, where

λ_T = (change in distortion if subtree T is pruned) / (change in rate if subtree T is pruned)
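A minimal sketch of the selection rule; the candidate list and its delta_distortion / delta_rate fields are hypothetical names standing in for the distortion increase and rate decrease that pruning each subtree would cause.

```python
def best_subtree_to_prune(candidates):
    """Choose the subtree T that minimizes
    lambda_T = (increase in distortion if T is pruned) / (decrease in rate if T is pruned)."""
    return min(candidates, key=lambda t: t["delta_distortion"] / t["delta_rate"])

# Hypothetical candidate subtrees with made-up numbers, just to show the selection rule
candidates = [
    {"name": "T1", "delta_distortion": 0.8, "delta_rate": 0.10},
    {"name": "T2", "delta_distortion": 0.3, "delta_rate": 0.08},
    {"name": "T3", "delta_distortion": 0.5, "delta_rate": 0.25},
]
print(best_subtree_to_prune(candidates)["name"])    # T3: lowest distortion cost per bit saved
```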
Structured Vector Quantization
• Structured codes impose a structure on the codebook that reduces implementation complexity by constraining the codewords and/or the codeword search.
• Let L be the dimension of the VQ. If R is the bit rate in bits per sample, then L·2^(RL) scalars need to be stored, and L·2^(RL) scalar distortion calculations are required for each input vector.
• The solution is to introduce some form of structure into the codebook and into the quantization process.
• The disadvantage of structured VQ is an inevitable loss in rate-distortion performance.
• The different types of structured vector quantizers are:
1. Lattice quantizers
2. Tree-structured codes
3. Multistage codes
4. Product codes: gain/shape codes
Lattice Vector Quantizer
• A VQ codebook designed using the LBG algorithm complicates the quantization process and has no visible structure.
• An alternative is lattice point quantization, since it admits a fast encoding algorithm.
• For a bit rate of n bits/sample and spatial dimension v, the number of codebook vectors, or equivalently of lattice points used, is 2^(nv).
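A minimal sketch of lattice quantization using the simplest case, the scaled integer lattice Z^v; the step size is an arbitrary choice, and practical designs typically use better lattices (e.g. D4 or E8), but the point is the same: encoding is a componentwise rounding with no codebook search or codebook storage.

```python
import numpy as np

def lattice_quantize(x, step=0.5):
    """Quantize each v-dimensional vector to the nearest point of the scaled
    integer lattice step * Z^v; no codebook search or storage is needed."""
    x = np.asarray(x, dtype=float)
    return step * np.round(x / step)

print(lattice_quantize([[0.23, 1.71, -0.4, 0.9]]))   # [[ 0.   1.5 -0.5  1. ]]
```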
How are tree-structured vector quantizers better?
• Tree-structured vector quantization (TSVQ) reduces encoding complexity by imposing a hierarchical structure on the partitioning of the input space.
• Optimal tree-structured vector quantizers are designed to minimize the expected distortion subject to cost functions related to storage cost, encoding rate, or quantization time.
Thanks!!
Dr. Piyush Charan
Assistant Professor,
Department of ECE,
Integral University, Lucknow
Email: er.piyush.charan@gmail.com, piyush@iul.ac.in