Compiler presentation
A compiler is a computer program (or set of 
programs) that transforms source code written 
in a programming language (the source 
language) into another computer language 
(the target language, often having a binary 
form known as object code).
Source code → Optimizing compiler → Object code
The term decompiler is most commonly applied 
to a program which translates executable 
programs (the output from a compiler) into 
source code in a (relatively) high-level 
language which, when compiled, will produce 
an executable whose behavior is the same as 
the original executable program.
1. Lexical analysis: in a compiler, linear 
analysis is called lexical analysis or scanning. 
2. Preprocessor: in addition to a compiler, 
several other programs may be required to 
create an executable target program. A 
source program may be divided into 
modules stored in separate files. The task of 
collecting the source program is sometimes 
entrusted to a distinct program called a 
preprocessor.
3. Parsing: hierarchical analysis is called parsing or 
syntax analysis. 
4. Semantic analysis: is the phase in which the 
compiler adds semantic information to the parse 
tree and builds the symbol table. This phase 
performs semantic checks such as type checking 
(checking for type errors), or object binding 
(associating variable and function references with 
their definitions), or definite assignment (requiring 
all local variables to be initialized before use), 
rejecting incorrect programs or issuing warnings.
5. Code generation: the final phase of the 
compiler is the generation of target code 
consisting normally of relocatable machine 
code or assembly code. 
6. Code optimization: the code optimization 
phase attempts to improve the 
intermediate code, so that faster-running 
machine code will result.
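As a rough illustration of phase 1 (lexical analysis), here is a minimal scanner sketch in Python; the token names and patterns are illustrative assumptions for a toy language, not taken from any particular compiler.

```python
import re

# A minimal lexical-analysis (scanning) sketch. The token names and
# patterns below are illustrative assumptions for a toy language.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),           # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"),  # identifiers
    ("OP",     r"[+\-*/:=;]+"),   # operators and punctuation
    ("SKIP",   r"\s+"),           # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Linear scan: turn the character stream into (kind, text) tokens."""
    return [(m.lastgroup, m.group())
            for m in MASTER.finditer(source)
            if m.lastgroup != "SKIP"]

print(tokenize("n := n + 1"))
# [('IDENT', 'n'), ('OP', ':='), ('IDENT', 'n'), ('OP', '+'), ('NUMBER', '1')]
```

The parser (phase 3) would then consume this token stream instead of raw characters.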
Compilers bridge source programs in high-level 
languages with the underlying hardware. 
A compiler requires: 
1) Determining the correctness of the syntax of 
programs. 
2) Generating correct and efficient object code. 
3) Run-time organization. 
4) Formatting output according to assembler and/or 
linker conventions.
1. The front end 
2. The middle end 
3. The back end
1. The front end: 
checks whether the program is correctly written 
in terms of the programming language syntax and 
semantics. Here legal and illegal programs are 
recognized. Errors are reported, if any, in a useful 
way. Type checking is also performed by collecting 
type information. The frontend then generates an 
intermediate representation or IR of the source 
code for processing by the middle-end.
2. The middle end: 
Is where optimization takes place. Typical 
transformations for optimization are removal of 
useless or unreachable code, discovery and 
propagation of constant values, relocation of 
computation to a less frequently executed place 
(e.g., out of a loop), or specialization of 
computation based on the context. The middle-end 
generates another IR for the following 
backend. Most optimization efforts are focused on 
this part.
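To make one of these middle-end transformations concrete, here is a hedged sketch of constant propagation and folding on a tiny expression AST; the nested-tuple AST shape and operator set are assumptions for illustration, not a real compiler's IR.

```python
# Constant folding on a toy AST of nested tuples ("op", left, right).
# The AST shape and the operator set are illustrative assumptions.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def fold(node):
    """Recursively replace operators whose operands are constants by their value."""
    if not isinstance(node, tuple):
        return node                      # leaf: literal or variable name
    op, left, right = node
    left, right = fold(left), fold(right)
    if isinstance(left, int) and isinstance(right, int):
        return OPS[op](left, right)      # both operands known: fold now
    return (op, left, right)             # keep the op, operands simplified

print(fold(("+", ("*", 2, 3), "x")))          # ('+', 6, 'x')
print(fold(("*", ("+", 1, 1), ("+", 2, 2))))  # 8
```

A real middle end would apply this, together with dead-code elimination and the other transformations listed above, over an IR rather than a source-level AST.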
3. The back end: 
Is responsible for translating the IR from the middle-end 
into assembly code. The target instruction(s) are 
chosen for each IR instruction. Register allocation 
assigns processor registers for the program variables 
where possible. The backend utilizes the hardware by 
figuring out how to keep parallel execution units busy, 
filling delay slots, and so on.
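A hedged sketch of the back end's most basic job, instruction selection: it assumes a tiny three-address IR of (dest, op, arg1, arg2) tuples and stack-machine mnemonics modeled on the TAM code in the single-pass example later in the deck; both the IR shape and the mapping are illustrative assumptions.

```python
# Toy instruction selection: translate (dest, op, a, b) IR tuples into
# stack-machine code. The IR shape and the TAM-like mnemonics are
# illustrative assumptions, not a real back end.
def select(ir, addr):
    """Emit stack-machine instructions for each three-address IR tuple."""
    code = []
    for dest, op, a, b in ir:
        for operand in (a, b):
            if isinstance(operand, int):
                code.append(f"LOADL {operand}")       # push a literal
            else:
                code.append(f"LOAD {addr[operand]}")  # push a variable's value
        code.append({"+": "CALL add", "*": "CALL mult"}[op])
        code.append(f"STORE {addr[dest]}")            # pop result into dest
    return code

# n := n + 1, with n allocated at address 0[SB]
print(select([("n", "+", "n", 1)], {"n": "0[SB]"}))
# ['LOAD 0[SB]', 'LOADL 1', 'CALL add', 'STORE 0[SB]']
```

Register allocation would replace the fixed stack addresses in `addr` with processor registers where possible.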
 One classification of compilers is by the platform 
on which their generated code executes. This is 
known as the target platform. 
 The output of a compiler that produces code for a 
virtual machine (VM) may or may not be executed 
on the same platform as the compiler that 
produced it. For this reason such compilers are not 
usually classified as native or cross compilers.
 EQN, a preprocessor for typesetting 
mathematics 
 Compilers for Pascal 
 The C compilers 
 The Fortran H compilers. 
 The Bliss/11 compiler. 
 Modula – 2 optimization compiler.
Compiler passes: 
 Single pass 
 Multi pass
A single pass compiler makes a single pass over 
the source text, parsing, analyzing, and 
generating code all at once.
Source program: 
   let var n: integer; 
       var c: char 
   in begin 
      c := ‘&’; 
      n := n + 1 
   end 
Generated code: 
   PUSH 2 
   LOADL 38 
   STORE 1[SB] 
   LOAD 0[SB] 
   LOADL 1 
   CALL add 
   STORE 0[SB] 
   POP 2 
   HALT 
Symbol table: 
   Ident   Type   Address 
   n       int    0[SB] 
   c       char   1[SB]
A multi-pass compiler makes several passes 
over the program. The output of a preceding 
pass is stored in a data structure and used by 
subsequent passes.
Automatic parallelization: 
Automatic parallelization refers to converting 
sequential code into multi-threaded or 
vectorized (or even both) code in order to 
utilize multiple processors simultaneously in a 
shared-memory multiprocessor (SMP) machine.
The compiler usually conducts two passes of analysis 
before actual parallelization in order to determine the 
following: 
 Is it safe to parallelize the loop? Answering this 
question requires accurate dependence analysis and 
alias analysis. 
 Is it worthwhile to parallelize it? This answer 
requires a reliable estimation (modeling) of the 
program workload and the capacity of the parallel 
system.
The Fortran code below can be auto-parallelized by a 
compiler because each iteration is independent of the 
others, and the final result of array z will be correct 
regardless of the execution order of the other 
iterations. 
   do i=1, n 
      z(i) = x(i) + y(i) 
   enddo
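Because every iteration writes a distinct z(i) from inputs that no iteration modifies, the result is order-independent. The small Python sketch below (an illustration, not the compiler's actual transformation) checks this by running the iterations in a shuffled order.

```python
import random

# The loop z(i) = x(i) + y(i): each iteration is independent, so any
# execution order (e.g. parallel) produces the same array.
n = 8
x = list(range(n))
y = [10 * v for v in x]

z_seq = [x[i] + y[i] for i in range(n)]   # sequential order

order = list(range(n))
random.shuffle(order)                      # simulate arbitrary scheduling
z_any = [0] * n
for i in order:
    z_any[i] = x[i] + y[i]

assert z_any == z_seq                      # order does not matter
```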
On the other hand, the following code cannot be 
auto-parallelized, because the value of z(i) depends 
on the result of the previous iteration, z(i-1). 
do i=2, n 
z(i) = z(i-1)*2 
enddo
This does not mean that the code cannot be 
parallelized. Indeed, it is equivalent to 
do i=2, n 
z(i) = z(1)*2**(i-1) 
enddo
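This rewrite can be checked numerically. The sketch below (illustrative Python, not Fortran) confirms that the sequential recurrence z(i) = z(i-1)*2 and the closed form z(i) = z(1)*2**(i-1) produce the same values, so the closed-form loop can run its iterations in any order.

```python
n = 10
z1 = 3.0  # arbitrary initial value z(1)

# Sequential recurrence: z(i) depends on z(i-1), so iterations must run in order.
z_rec = [0.0] * (n + 1)   # index 0 unused, mirroring Fortran's 1-based arrays
z_rec[1] = z1
for i in range(2, n + 1):
    z_rec[i] = z_rec[i - 1] * 2

# Closed form: z(i) = z(1) * 2**(i-1); each iteration is now independent.
z_closed = [0.0] * (n + 1)
z_closed[1] = z1
for i in range(2, n + 1):
    z_closed[i] = z1 * 2 ** (i - 1)

assert z_rec == z_closed   # same values, but the second loop parallelizes
```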
Automatic parallelization by compilers or tools is very 
difficult due to the following reasons: 
 Dependence analysis is hard for code using indirect 
addressing, pointers, recursion, and indirect 
function calls. 
 Loops have an unknown number of iterations. 
 Accesses to global resources are difficult to 
coordinate in terms of memory allocation, I/O, and 
shared variables.
Due to the inherent difficulties in full automatic 
parallelization, several easier approaches exist for 
obtaining a high-quality parallel program. They are: 
 Allow programmers to add "hints" to their 
programs to guide compiler parallelization. 
 Build an interactive system between programmers 
and parallelizing tools/compilers. 
 Hardware-supported speculative multithreading.
