Physical layer abstraction 
for 
LTE downlink 
PRESENTED BY 
RAJ PATEL
Introduction 
A link-level simulator simulates a single radio link.
A system-level simulator takes into account a complete cell and is therefore time consuming.
Physical layer abstraction: the process of modeling the performance of the physical layer
from the current channel state and the physical layer parameters.
Introduction
AWGN reference curves
CQI -> MCS mapping
Target SNR: the SNR at 10% BLER
Plots: Target SNR vs CQI / MCS - linear
Introduction
Extrapolation of the reference curve to get the effective SNR:
• Choose MCS values belonging to the same constellation.
• Get the target SNR value for each MCS from simulation.
• Calculate the difference between the target SNR values.
• Note down the effective code rate of each MCS used.
• Use the reference curves to get the SNR values at the effective code rate of each MCS.
• Calculate the difference between these SNR values (a sketch of the lookup follows below).
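A minimal sketch of the reference-curve lookup (MATLAB/Octave), assuming a lookup table of target SNR (snr_tab_db) versus effective code rate (code_rate_tab) for the constellation in question; the table and variable names are placeholders, not the OAI code:

  % target SNR read off the reference curve at the two effective code rates r1, r2
  snr1_db = interp1(code_rate_tab, snr_tab_db, r1, 'linear', 'extrap');
  snr2_db = interp1(code_rate_tab, snr_tab_db, r2, 'linear', 'extrap');
  delta_snr_db = snr2_db - snr1_db;   % compare against the difference of simulated target SNRs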
Observations
• The theoretical difference and the difference calculated using interpolation are not the same.
• Possible reason: C* = (TBS + CRC) / G, where G is the number of transmitted bits and C is the code rate.
• 40 <= code block size (= TBS + CRC) <= 6144; CRC = 24 bits
• E.g. a TBS of 6126 bits: 6120 + 24 // 6 + 24 + 10, where 10 is padding
Delta SNR from lookup table, C = TBS / G:           4.237    4.3203   1.4398   2.8805   4.7258   6.6409   2.6672   3.9737
Delta SNR from lookup table, C* = (TBS + CRC) / G:  4.1689   4.3423   1.4366   2.9057   4.7415   6.684    2.6877   3.9963
Delta SNR from log BLER curve:                      2.86     3.446    0.788    2.668    4.2      3.742    2.412    2.33
Frequency Selective Fading
Coherence bandwidth vs. signal bandwidth
Flat fading: just attenuation, no distortion.
Frequency selective fading (much more realistic): distortion.
If the attenuation is different for different parts of the signal spectrum, it is a distortion.
Condition: coherence bandwidth < signal bandwidth
Frequency selective fading channel model, e.g. EPA
EPA: Extended Pedestrian A model
• Multiple paths: copies of the same signal arrive at the receiver with different delays and attenuations.
• Command line: -g E -M1 -R1 -N 100 -n 10000
• -g E: fading model
• -M1: abstraction flag; keeps the channel coefficients constant over the SNR range
• -R1: to reduce simulation time
• -N: number of channel realizations
• -n: number of packets
• Output format: SNR, 50 channel coefficients, BLER1
Abstraction Techniques 
EESM 
MIESM
EESM: Exponential Effective SINR Mapping

\gamma_{\mathrm{eff}} = \beta_1 \, I^{-1}\!\left( \frac{1}{N} \sum_{n=1}^{N} I\!\left( \frac{\gamma_n}{\beta_2} \right) \right),
\qquad I(\gamma_n) = 1 - \exp(-\gamma_n),

where \gamma_n is the instantaneous SNR.
Aim: to calculate the effective SINR.
Noise_var = 1 / SNR_linear;  inst_snr = 10*log10(h^2 / Noise_var);
1. Calculate the instantaneous SNR corresponding to each channel realization.
2. Apply the I function to each instantaneous SNR and average over the N realizations.
3. Apply the inverse of I to the average to get the effective SNR (see the sketch below).
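A minimal EESM sketch (MATLAB/Octave) under the definitions above; h is assumed to be the vector of channel coefficients for one SNR point and snr_db the nominal SNR in dB (variable names are illustrative, not the actual OAI/analysis code):

  snr_lin   = 10^(snr_db/10);
  noise_var = 1 / snr_lin;
  inst_snr  = (abs(h).^2) ./ noise_var;           % instantaneous SNR per channel realization (linear)

  beta1 = 1;  beta2 = 1;                          % uncalibrated case
  I_fun = @(g) 1 - exp(-g);                       % EESM information measure I
  I_inv = @(y) -log(1 - y);                       % inverse of I
  gamma_eff    = beta1 * I_inv(mean(I_fun(inst_snr ./ beta2)));
  gamma_eff_db = 10*log10(gamma_eff);             % effective SNR in dB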
PLOTS - EESM
MIESM
Mutual Information Effective SINR Mapping
No closed-form expression for the I function.
1. Calculate the instantaneous SNR.
2. Using lookup tables, calculate the normalized capacity for each instantaneous SNR.
3. Calculate the average normalized capacity per SNR point.
4. Calculate the effective SNR from the average normalized capacity, using the lookup table in the reverse direction (see the sketch below).
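A minimal MIESM sketch (MATLAB/Octave), assuming a mutual-information lookup table (snr_tab_db, mi_tab) of normalized capacity versus SNR for the modulation in use, and reusing h and noise_var from the EESM sketch; the table and names are placeholders:

  inst_snr_db = 10*log10((abs(h).^2) ./ noise_var);                            % instantaneous SNR in dB
  mi_per_coef = interp1(snr_tab_db, mi_tab, inst_snr_db, 'linear', 'extrap');  % normalized capacity per coefficient
  mi_avg      = mean(mi_per_coef);                                             % average normalized capacity
  % inverse lookup: the effective SNR whose normalized capacity equals the average
  gamma_eff_db = interp1(mi_tab, snr_tab_db, mi_avg, 'linear', 'extrap');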
PLOTS MIESM
MSE calculation

\gamma_{\mathrm{eff}} = I^{-1}\!\left( \frac{1}{N} \sum_{n=1}^{N} I(\gamma_n) \right)

*Here N is the number of channel coefficients per SNR point.
SNR_interp: the image of the effective SNR on the AWGN curve.

\mathrm{MSE} = \frac{1}{N} \sum_{n=1}^{N} \left( \frac{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) - \gamma_{\mathrm{eff}} }{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) } \right)^{2}

*Here N is the number of SNR values (see the sketch below).
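A minimal sketch of the MSE computation (MATLAB/Octave) over the SNR points of one MCS, assuming vectors gamma_eff_db (effective SNR per SNR point), bler_ch (measured fading-channel BLER) and an AWGN reference curve (snr_awgn_db, bler_awgn); all names are placeholders:

  % AWGN SNR that gives the same BLER as the fading channel (gamma_interp);
  % assumes the reference BLER samples are distinct so the curve can be inverted
  gamma_interp_db = interp1(bler_awgn, snr_awgn_db, bler_ch, 'linear', 'extrap');
  % normalized mean square error between the interpolated and effective SNR
  mse = mean(((gamma_interp_db - gamma_eff_db) ./ gamma_interp_db).^2);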
MSE results (interp1 with 'linear','extrap', normalized)

MCS                        MSE EESM (linear, log)    MSE MIESM (linear, log)
3                          58.695, 0.3663            108.92, 0.2975
15                         1.5247, 0.4958            0.3202, 0.3395
15 (_n = 1000, N = 1000)   1.1699, 1.3596            0.3403, 1.9242
20 *                       0.3869, 0.2304            0.1067, 0.5900
23                         0.2551, 0.4954            0.0823, 0.3636
25                         0.0897, 0.7444            0.0672, 0.7858
MSE – with β1, β2

\gamma_{\mathrm{eff}} = \beta_1 \, I^{-1}\!\left( \frac{1}{N} \sum_{n=1}^{N} I\!\left( \frac{\gamma_n}{\beta_2} \right) \right)

\mathrm{MSE}_{\min} = \min_{\beta_1, \beta_2} \; \frac{1}{N} \sum_{n=1}^{N} \left( \frac{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) - \gamma_{\mathrm{eff}}(\beta_1, \beta_2) }{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) } \right)^{2}

(a calibration sketch follows below)
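A minimal calibration sketch (MATLAB/Octave): fit β1 and β2 by minimizing the normalized MSE with fminsearch. eesm_eff_db is a hypothetical helper returning the EESM effective SNR in dB for given β values over all SNR points, as in the earlier EESM sketch:

  % normalized MSE as a function of b = [beta1 beta2]
  cost = @(b) mean(((gamma_interp_db - eesm_eff_db(b(1), b(2))) ./ gamma_interp_db).^2);
  b_opt   = fminsearch(cost, [1 1]);    % unconstrained Nelder-Mead search starting from beta = [1 1]
  mse_cal = cost(b_opt);                % calibrated MSE, as reported in the results tables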
MSE Results – with β1, β2

EESM (calibrated)
MCS                        β values                      MSE
3                          [0.0334, 0.6226]              0.7683
15                         [3.975e+02, 4.7833e+03]       0.0037
15 (_n = 1000, N = 1000)   [3.991e+02, 5.581e+03]        0.0041
20 (erroneous)             [41.3997, 58.1240]            0.0466
23                         [6.862e+02, 1.241e+04]        1.64e-04
25                         [7.469e+02, 1.318e+04]        1.20e-04

MIESM (calibrated)
MCS                        β values                      MSE
3                          [0.2051, 17.348]              0.9835
15                         [0.7490, 0.6111]              0.2887
15 (_n = 1000, N = 1000)   [0.7903, 0.7440]              0.3339
20 (erroneous)             [0.6041, 0.7456]              0.0430
23                         [0.8813, 0.7282]              0.0567
25                         [0.8398, 0.8028]              0.0645
EESM – calibrated (plot)
Legend (MCS - color): 3 - red, 15 - yellow, 20* - sky blue, 23 - blue, 25 - pink
Conclusions and Observations
Calibration factors work better with EESM.
The resulting MSE after applying the calibration factors is around 10^3 times better for EESM, whereas for MIESM it is about 10 times better.

MCS 25                     EESM        MIESM
MSE without calibration    0.7444      0.7858
MSE with calibration       1.20e-04    0.0645
Conclusions and Observations
Calculations done in the log scale don't make sense for this metric:

\mathrm{MSE}_{\min} = \min_{\beta_1, \beta_2} \; \frac{1}{N} \sum_{n=1}^{N} \left( \frac{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) - \gamma_{\mathrm{eff}}(\beta_1, \beta_2) }{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) } \right)^{2}

Division in the log scale?

MSE (interp1 with 'linear','extrap', normalized)
MCS              MSE EESM (linear, log)    MSE MIESM (linear, log)
3                58.695, 0.3663            108.92, 0.2975
15               1.5247, 0.4958            0.3202, 0.3395
20 (erroneous)   0.3869, 0.2304            0.1067, 0.5900
23               0.2551, 0.4954            0.0823, 0.3636
25               0.0897, 0.7444            0.0672, 0.7858

NOTE: Calculations in the linear scale show a gradual decrease in MSE, unlike the log scale.
Thus we operate with linear values when using normalization.
But why do the lower MCS values have such odd MSE values?
Conclusions and Observations
Issues with the lower MCS values: any ideas?
Working in the linear scale, why do the lower MCS values have higher MSE than the higher MCS values?
Reason: the normalization used while calculating the MSE.

\mathrm{MSE}_{\min} = \min_{\beta_1, \beta_2} \; \frac{1}{N} \sum_{n=1}^{N} \left( \frac{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) - \gamma_{\mathrm{eff}}(\beta_1, \beta_2) }{ \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) } \right)^{2}

The numerator \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) - \gamma_{\mathrm{eff}}(\beta_1, \beta_2) more or less stays the same, say around 5-10 dB.
But the denominator \gamma_{\mathrm{interp}}(\mathrm{BLER}_{\mathrm{ch}}) changes with the MCS value and stays close to -2 to 2 dB, so dividing by a small value inflates the normalized MSE for the lower MCS values.
Conclusions and Observations
Conclusions and Observations
Table with the calculations done in the linear scale (interp1 with 'linear','extrap', normalized):

MCS                        MSE EESM (linear)    MSE MIESM (linear)
3                          58.695               108.92
15                         1.5247               0.3202
15 (_n = 1000, N = 1000)   1.1699               0.3403
20 (erroneous)             0.3869               0.1067
23                         0.2551               0.0823
25                         0.0897               0.0672
Conclusions and Observations
For the 15 (_n = 1000, N = 1000) case, the results are not in line with the other cases.
Possible reason: many more values, so it may give a better estimate.

EESM (calibrated)
MCS                        β values                      MSE
3                          [0.0334, 0.6226]              0.7683
15                         [3.975e+02, 4.7833e+03]       0.0037
15 (_n = 1000, N = 1000)   [3.991e+02, 5.581e+03]        0.0041
20 (erroneous)             [41.3997, 58.1240]            0.0466
23                         [6.862e+02, 1.241e+04]        1.64e-04
25                         [7.469e+02, 1.318e+04]        1.20e-04
NOTE: with the calculations in the linear scale, the MSE decreases gradually as the MCS increases.

MIESM (calibrated)
MCS                        β values                      MSE
3                          [0.2051, 17.348]              0.9835
15                         [0.7490, 0.6111]              0.2887
15 (_n = 1000, N = 1000)   [0.7903, 0.7440]              0.3339
20 (erroneous)             [0.6041, 0.7456]              0.0430
23                         [0.8813, 0.7282]              0.0567
25                         [0.8398, 0.8028]              0.0645
Note: the MSE of EESM is lower than the MSE of MIESM.
Conclusions and Observations
Note: the MSE of EESM is lower than the MSE of MIESM.
Reason? Possibly the high values of β obtained with EESM?

EESM (calibrated)
MCS                        β values                      MSE
3                          [0.0334, 0.6226]              0.7683
15                         [3.975e+02, 4.7833e+03]       0.0037
15 (_n = 1000, N = 1000)   [3.991e+02, 5.581e+03]        0.0041
20 (erroneous)             [41.3997, 58.1240]            0.0466
23                         [6.862e+02, 1.241e+04]        1.64e-04
25                         [7.469e+02, 1.318e+04]        1.20e-04

MIESM (calibrated)
MCS                        β values                      MSE
3                          [0.2051, 17.348]              0.9835
15                         [0.7490, 0.6111]              0.2887
15 (_n = 1000, N = 1000)   [0.7903, 0.7440]              0.3339
20 (erroneous)             [0.6041, 0.7456]              0.0430
23                         [0.8813, 0.7282]              0.0567
25                         [0.8398, 0.8028]              0.0645
Issues and Future Work
The calibration factors are quite high for some MCS values with EESM. Why?
Is that the only reason why the performance of EESM appears better than MIESM?
Thank You! 
Questions if any
Phy Abstraction for LTE
LTE 
OFDM 
OFDMA 
Cyclic Prefix 
ISI 
RE 
RB
OAI: OpenAirInterface (Eurecom)
Physical layer simulations
Resource Elements Allocation
• N_PILOTS = 6 * N_RB * TM
• N_RB: by default set to 25
• N_RE = (OFDM symbols - prefix length) * (N_RB * sub-carriers per RB) - N_PILOTS
• Example: -x1 -y1 -z1; normal cyclic prefix
• N_RE = (14 - 1) * (25 * 12) - (6 * 25 * 1) = 3750 (checked in the sketch below)
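The same RE budget as a small MATLAB/Octave check (normal cyclic prefix, one antenna port, default N_RB; names are illustrative only):

  N_RB     = 25;  TM = 1;                        % resource blocks, transmission-mode factor
  N_PILOTS = 6 * N_RB * TM;                      % pilot REs
  N_RE     = (14 - 1) * (N_RB * 12) - N_PILOTS   % = 3750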
Map CQI --> MCS
• CQI: fed back by the receiver
• MCS: chosen accordingly

CQI (1-15)    MCS (1-28)
3             3
8             15
10            20
13            25 (with extended prefix)
AWGN reference curves
• BLER vs SNR plots
• Monte Carlo simulations
• Step size
• SNR range
• Interpret the .csv output
• Target SNR
Plots 
•Target SNR vs CQI 
•Target SNR vs MCS 
•Target SNR vs Code rate 
•Observation
Extrapolation of curves
• \Delta\mathrm{SNR}\,(\mathrm{dB}) = f^{-1}(r_2) - f^{-1}(r_1)
• The normalized capacity is the effective code rate
• Code rate / bits per symbol
Extrapolation method
• Choose MCS values belonging to the same constellation.
• Simulate for those MCS values and get the target SNR value. The target SNR is the SNR value at log BLER = -1.
• ΔSNR value of the two MCS schemes from simulation.
• Note down the effective code rate for each MCS used.
• Use the reference curves to get the SNR values from the appropriate curve (taking into consideration the modulation scheme used for that MCS).
• ΔSNR values found from the reference curves by extrapolation.
Conclusions 
•Extrapolation important 
•Needs to be improved