Chapter 3
Greedy Algorithms
Overview
• Greedy algorithms are used to solve optimization
problems.
– For most optimization problems you want to find, not
just a solution, but the best solution.
• A greedy algorithm sometimes works well for
optimization problems. It works in phases.
– At each phase:
• You take the best you can get right now, without
regard for future consequences.
• You hope that by choosing a local optimum at each
step, you will end up at a global optimum.
…
• Problems exhibit optimal substructure.
– The optimal solution to the problem contains optimal
solutions to its subproblems.
• Problems also exhibit the greedy-choice property.
– When we have a choice to make, make the one that looks
best right now.
– Make a locally optimal choice in hope of getting a
globally optimal solution.
Greedy Strategy
• The choice that seems best at the moment is the one
we go with.
– Prove that when there is a choice to make, one of the
optimal choices is the greedy choice.
• Therefore, it’s always safe to make the greedy choice.
– Show that all but one of the sub-problems resulting from
the greedy choice are empty.
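As a concrete illustration of making the locally optimal choice at each step, consider greedy coin change. This sketch is not from the slides; the denominations are an assumption, chosen because the greedy choice happens to be optimal for them (it is not optimal for every coin system, e.g. coins {1, 3, 4} and amount 6).

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Return a list of coins summing to `amount`, chosen greedily:
    always take the largest coin that still fits (the greedy choice)."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```

Here the greedy-choice property holds for these denominations, so the local choices compose into a globally optimal answer.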
Minimum Spanning Trees (MST)
Minimum Spanning Trees
• Spanning Tree
– A tree (i.e., connected, acyclic graph) which contains
all the vertices of the graph
• Minimum Spanning Tree
– Spanning tree with the minimum sum of weights
• Spanning forest
– If a graph is not connected, then there is a spanning
tree for each connected component of the graph
[Figure: example weighted graph on vertices a–i with edge weights
1–14, used throughout this chapter]
Applications of MST
– Find the least expensive way to connect a set of
cities, terminals, computers, etc.
How to build MST?
Idea: build the MST edge by edge.
Start from A = ∅. By definition, A is a (trivial) subset of some MST.
Add edges to A one at a time, maintaining the invariant that A is a
subset of some MST.
Stop when no more edges can be added. At this point, A is an MST.
Example
Problem
• A town has a set of houses
and a set of roads
• A road connects 2 and only
2 houses
• A road connecting houses u and v has a repair
cost w(u, v)
Goal: Repair enough (and no more) roads such that:
1. Everyone stays connected
i.e., can reach every house from all other houses
2. Total repair cost is minimum
Minimum Spanning Trees
• A connected, undirected graph:
– Vertices = houses, Edges = roads
• A weight w(u, v) on each edge (u, v) ∈ E
Find T ⊆ E such that:
1. T connects all vertices
2. w(T) = Σ(u,v)∈T w(u, v) is minimized
Properties of Minimum Spanning Trees
• A minimum spanning tree is not necessarily unique
• An MST has no cycles
– We can remove an edge of a cycle and still keep all
vertices connected, while reducing the cost
• Number of edges in an MST:
– |V| - 1
Prim’s Algorithm
• The edges in set A always form a single tree
• Starts from an arbitrary root: VA = {a}
• At each step:
– Find a light edge crossing (VA, V - VA)
– Add this edge to A
– Repeat until the tree spans all vertices
How to Find Light Edges Quickly?
Use a priority queue Q:
• Contains vertices not yet
included in the tree, i.e., (V – VA)
– VA = {a}, Q = {b, c, d, e, f, g, h, i}
• We associate a key with each vertex v:
key[v] = minimum weight of any edge (u, v)
connecting v to VA
[Figure: vertex a with two incident edges of weights w1 and w2
crossing the cut; key[a] = min(w1, w2)]
…
• After adding a new node to VA we update the keys of
all the nodes adjacent to it
e.g., after adding a to the tree, key[b] = 4 and key[h] = 8
• key[v] = ∞ if v is not adjacent to any vertex in VA
Example
Initially: key[a] = 0, all other keys = ∞
Q = {a, b, c, d, e, f, g, h, i}, VA = ∅
Extract-MIN(Q) → a
key[b] = 4, π[b] = a
key[h] = 8, π[h] = a
keys now: b = 4, h = 8, all others = ∞
Q = {b, c, d, e, f, g, h, i}, VA = {a}
Extract-MIN(Q) → b
Example
key[c] = 8, π[c] = b
key[h] = 8, π[h] = a (unchanged)
keys now: c = 8, h = 8, all others = ∞
Q = {c, d, e, f, g, h, i}, VA = {a, b}
Extract-MIN(Q) → c
key[d] = 7, π[d] = c
key[f] = 4, π[f] = c
key[i] = 2, π[i] = c
keys now: d = 7, f = 4, h = 8, i = 2, e = g = ∞
Q = {d, e, f, g, h, i}, VA = {a, b, c}
Extract-MIN(Q) → i
Example
key[h] = 7, π[h] = i
key[g] = 6, π[g] = i
keys now: d = 7, f = 4, g = 6, h = 7, e = ∞
Q = {d, e, f, g, h}, VA = {a, b, c, i}
Extract-MIN(Q) → f
key[g] = 2, π[g] = f
key[d] = 7, π[d] = c (unchanged)
key[e] = 10, π[e] = f
keys now: d = 7, e = 10, g = 2, h = 7
Q = {d, e, g, h}, VA = {a, b, c, i, f}
Extract-MIN(Q) → g
Example
key[h] = 1, π[h] = g
keys now: d = 7, e = 10, h = 1
Q = {d, e, h}, VA = {a, b, c, i, f, g}
Extract-MIN(Q) → h
keys now: d = 7, e = 10
Q = {d, e}, VA = {a, b, c, i, f, g, h}
Extract-MIN(Q) → d
Example
key[e] = 9, π[e] = d
keys now: e = 9
Q = {e}, VA = {a, b, c, i, f, g, h, d}
Extract-MIN(Q) → e
– Q = ∅
– VA = {a, b, c, i, f, g, h, d, e}
PRIM(V, E, w, r)
1. Q ← ∅
2. for each u ∈ V
3.     do key[u] ← ∞
4.        π[u] ← NIL
5.        INSERT(Q, u)
6. DECREASE-KEY(Q, r, 0)      ► key[r] ← 0
7. while Q ≠ ∅
8.     do u ← EXTRACT-MIN(Q)
9.        for each v ∈ Adj[u]
10.           do if v ∈ Q and w(u, v) < key[v]
11.              then π[v] ← u
12.                   DECREASE-KEY(Q, v, w(u, v))

Running time (Q implemented as a min-heap):
• Lines 1-5 (building the heap): O(V)
• EXTRACT-MIN is executed |V| times, taking O(logV) each:
O(VlogV) for the min-heap operations
• The inner loop (lines 9-12) is executed O(E) times total;
the test on line 10 is constant time and each DECREASE-KEY
takes O(logV): O(ElogV)
• Total time: O(VlogV + ElogV) = O(ElogV)
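The pseudocode above can be sketched in Python. One detail is an assumption beyond the slides: `heapq` has no DECREASE-KEY, so a common substitute is to push duplicate (key, vertex) entries and skip stale ones on extraction. The adjacency dict is the example graph from the slides.

```python
import heapq

def prim_mst(adj, root):
    """Prim's algorithm over an adjacency dict {u: [(v, w), ...]}.
    Returns the parent (pi) map describing the tree edges."""
    key = {u: float('inf') for u in adj}
    parent = {u: None for u in adj}
    key[root] = 0
    in_tree = set()
    heap = [(0, root)]
    while heap:
        k, u = heapq.heappop(heap)
        if u in in_tree:
            continue  # stale entry left over from an earlier key update
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                key[v] = w        # in place of DECREASE-KEY
                parent[v] = u
                heapq.heappush(heap, (w, v))
    return parent

# The example graph from the slides (vertices a-i):
graph = {
    'a': [('b', 4), ('h', 8)],
    'b': [('a', 4), ('c', 8), ('h', 11)],
    'c': [('b', 8), ('d', 7), ('f', 4), ('i', 2)],
    'd': [('c', 7), ('e', 9), ('f', 14)],
    'e': [('d', 9), ('f', 10)],
    'f': [('c', 4), ('d', 14), ('e', 10), ('g', 2)],
    'g': [('f', 2), ('h', 1), ('i', 6)],
    'h': [('a', 8), ('b', 11), ('g', 1), ('i', 7)],
    'i': [('c', 2), ('g', 6), ('h', 7)],
}
parent = prim_mst(graph, 'a')
mst_weight = sum(w for v, p in parent.items() if p is not None
                 for x, w in graph[v] if x == p)
print(mst_weight)  # 37
```

Running it from root a reproduces the π values of the worked example above, with total tree weight 37.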
Kruskal’s Algorithm
• How is it different from Prim’s algorithm?
– Prim’s algorithm grows one tree all the
time
– Kruskal’s algorithm grows multiple trees
(i.e., a forest) at the same time.
– Trees are merged together using safe edges
– Since an MST has exactly |V| - 1 edges,
after |V| - 1 merges, we would have only
one component
[Figure: two trees, tree1 and tree2, merged by a safe edge (u, v)]
Example: in the graph below, we would add edge (c, f) to merge the
two trees.
[Figure: the example weighted graph on vertices a–i]
…
• Start with each vertex being its own
component
• Repeatedly merge two components into
one by choosing the light edge that
connects them
• Which components to consider at each
iteration?
– Scan the set of edges in monotonically
increasing order by weight
Example
Edges in increasing order of weight:
1: (h, g)
2: (c, i), (g, f)
4: (a, b), (c, f)
6: (i, g)
7: (c, d), (i, h)
8: (a, h), (b, c)
9: (d, e)
10: (e, f)
11: (b, h)
14: (d, f)
Initially: {a}, {b}, {c}, {d}, {e}, {f}, {g}, {h}, {i}
1. Add (h, g)     → {g, h}, {a}, {b}, {c}, {d}, {e}, {f}, {i}
2. Add (c, i)     → {g, h}, {c, i}, {a}, {b}, {d}, {e}, {f}
3. Add (g, f)     → {g, h, f}, {c, i}, {a}, {b}, {d}, {e}
4. Add (a, b)     → {g, h, f}, {c, i}, {a, b}, {d}, {e}
5. Add (c, f)     → {g, h, f, c, i}, {a, b}, {d}, {e}
6. Ignore (i, g)  → {g, h, f, c, i}, {a, b}, {d}, {e}
7. Add (c, d)     → {g, h, f, c, i, d}, {a, b}, {e}
8. Ignore (i, h)  → {g, h, f, c, i, d}, {a, b}, {e}
9. Add (a, h)     → {g, h, f, c, i, d, a, b}, {e}
10. Ignore (b, c) → {g, h, f, c, i, d, a, b}, {e}
11. Add (d, e)    → {g, h, f, c, i, d, a, b, e}
12. Ignore (e, f) → {g, h, f, c, i, d, a, b, e}
13. Ignore (b, h) → {g, h, f, c, i, d, a, b, e}
14. Ignore (d, f) → {g, h, f, c, i, d, a, b, e}
KRUSKAL(V, E, w)
1. A ← ∅
2. for each vertex v ∈ V
3.     do MAKE-SET(v)
4. sort E into non-decreasing order by weight w
5. for each (u, v) taken from the sorted list
6.     do if FIND-SET(u) ≠ FIND-SET(v)
7.        then A ← A ∪ {(u, v)}
8.             UNION(u, v)
9. return A

Running time:
• Lines 2-3 (MAKE-SET): O(V)
• Line 4 (sorting): O(ElgE)
• Lines 5-8: O(E) iterations, with FIND-SET/UNION taking O(lgV)
each: O(ElgV)
• Total: O(V + ElgE + ElgV) = O(ElgE)
• Since E = O(V²), we have lgE = O(2lgV) = O(lgV)
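KRUSKAL can likewise be sketched in Python with a disjoint-set (union-find) structure. Path compression and union by rank are assumptions beyond the slides, but they are the standard way to get fast FIND-SET/UNION; the edge list is the example graph above.

```python
def kruskal_mst(vertices, edges):
    """Kruskal's algorithm. `edges` is a list of (w, u, v) triples;
    returns the list of MST edges as (u, v, w)."""
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):                          # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                      # UNION by rank
        rx, ry = find(x), find(y)
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1

    mst = []
    for w, u, v in sorted(edges):         # scan edges by increasing weight
        if find(u) != find(v):            # u, v in different components?
            mst.append((u, v, w))
            union(u, v)
    return mst

verts = 'abcdefghi'
edge_list = [(4, 'a', 'b'), (8, 'a', 'h'), (8, 'b', 'c'), (11, 'b', 'h'),
             (7, 'c', 'd'), (4, 'c', 'f'), (2, 'c', 'i'), (9, 'd', 'e'),
             (14, 'd', 'f'), (10, 'e', 'f'), (2, 'f', 'g'), (1, 'g', 'h'),
             (6, 'g', 'i'), (7, 'h', 'i')]
tree = kruskal_mst(verts, edge_list)
print(sum(w for _, _, w in tree))  # 37
```

The |V| - 1 = 8 accepted edges and the total weight 37 match the worked example.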
Shortest Paths
Shortest Path Problems
• How can we find the shortest route between two
points on a road map?
• Model the problem as a graph problem:
– Road map is a weighted graph:
vertices = cities
edges = road segments between cities
edge weights = road distances
– Goal: find a shortest path between two vertices (cities)
…
• Input:
– Directed graph G = (V, E)
– Weight function w : E → R
• Weight of path p = ⟨v0, v1, . . . , vk⟩:
w(p) = Σ(i = 1 to k) w(vi-1, vi)
• Shortest-path weight from u to v:
δ(u, v) = min{w(p) : p is a path from u to v}, if such a path exists
∞ otherwise
• Note: there might be multiple shortest paths from u to v
[Figure: example weighted directed graph on vertices s, t, x, y, z,
annotated with shortest-path weights 0, 3, 9, 5, 11 from source s]
Variants of Shortest Path
• Single-source shortest paths
– G = (V, E) → find a shortest path from a given source
vertex s to each vertex v ∈ V
• Single-destination shortest paths
– Find a shortest path to a given destination vertex t
from each vertex v
– Reversing the direction of each edge reduces this to the
single-source problem
…
• Single-pair shortest path
– Find a shortest path from u to v for given vertices u
and v
• All-pairs shortest-paths
– Find a shortest path from u to v for every pair of
vertices u and v
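The excerpt above does not show an algorithm for these variants, but the classic greedy algorithm for the single-source variant with non-negative edge weights is Dijkstra's algorithm. The sketch below is illustrative (it is not from the slides, and the small directed graph is hypothetical); as in the Prim sketch, stale heap entries stand in for DECREASE-KEY.

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest-path weights delta(s, v) for a directed
    graph {u: [(v, w), ...]} with non-negative edge weights."""
    dist = {u: float('inf') for u in adj}
    dist[s] = 0
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale entry
        for v, w in adj[u]:
            if d + w < dist[v]:       # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical small directed graph for illustration:
g = {'s': [('t', 6), ('y', 7)],
     't': [('x', 5)],
     'y': [('t', 8), ('x', 9)],
     'x': []}
print(dijkstra(g, 's'))  # {'s': 0, 't': 6, 'y': 7, 'x': 11}
```

At each step the algorithm greedily commits to the unvisited vertex with the smallest tentative distance, mirroring Prim's light-edge choice.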
Scheduling
Scheduling criteria
• CPU utilization - keep the CPU as busy as possible (from 0%
to 100%)
• Throughput - number of processes that complete their
execution per time unit
• Turnaround time - amount of time to execute a particular
process
• Waiting time - amount of time a process has been waiting in
the ready queue
• Response time - amount of time it takes from when a request
was submitted until the first response is produced
Optimization criteria
» Max CPU utilization
» Max throughput
» Min turnaround time
» Min waiting time
» Min response time
Scheduling Algorithms
• First Come First Serve Scheduling
• Shortest Job First Scheduling
• Priority Scheduling
• Round-Robin Scheduling
First Come First Serve Scheduling (FCFS)
Process   Burst time
P1        24
P2        3
P3        3
• Suppose that the processes arrive in the order: P1, P2, P3
• The Gantt chart for the schedule is:
[Gantt chart: P1 (0-24), P2 (24-27), P3 (27-30)]
…
• The average waiting time under this policy is usually quite long
• Waiting time for P1 = 0, P2 = 24, P3 = 27
• Average waiting time = (0+24+27)/3 = 17
• Suppose we change the arrival order to P2, P3, P1
• The Gantt chart for the schedule is:
[Gantt chart: P2 (0-3), P3 (3-6), P1 (6-30)]
Waiting time for P1 = 6, P2 = 0, P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
• Consider what happens with one CPU-bound process and many
I/O-bound processes
• There is a convoy effect: all the other processes wait for the
one big process to get off the CPU
• The FCFS scheduling algorithm is non-preemptive
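The arithmetic above can be checked with a short sketch (a minimal illustration, assuming all processes arrive at time 0, as the slide does):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS: each process waits
    for the total burst time of the processes ahead of it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

# Arrival order P1, P2, P3 from the slide:
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27] -> average 17
# Arrival order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]   -> average 3
```

The second ordering reproduces the waiting times {P1: 6, P2: 0, P3: 3} and average 3 from the slide.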
Shortest Job First Scheduling (SJF)
• This algorithm associates with each process the length of the
process's next CPU burst
• If there is a tie, FCFS is used
• In other words, this algorithm can also be regarded as the
shortest-next-CPU-burst algorithm
• SJF is optimal - it gives the minimum average waiting time for a
given set of processes
Processes   Burst time
P1          6
P2          8
P3          7
P4          3
FCFS average waiting time:
• Waiting times for P1, P2, P3, P4 = 0, 6, 14, 21
• (0+6+14+21)/4 = 10.25
SJF average waiting time:
• Waiting times for P1, P2, P3, P4 = 3, 16, 9, 0
• (3+16+9+0)/4 = 7
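The SJF figures can be reproduced with a small sketch (again assuming, as the slide's computation does, that all processes arrive at time 0):

```python
def sjf_average_wait(bursts):
    """Waiting times and their average under non-preemptive SJF:
    run jobs in increasing order of burst time."""
    waits, elapsed = {}, 0
    for name, b in sorted(bursts.items(), key=lambda kv: kv[1]):
        waits[name] = elapsed
        elapsed += b
    return waits, sum(waits.values()) / len(waits)

waits, avg = sjf_average_wait({'P1': 6, 'P2': 8, 'P3': 7, 'P4': 3})
print(waits, avg)  # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} 7.0
```

Ordering P4, P1, P3, P2 yields the waiting times 0, 3, 9, 16 and the average of 7 quoted above.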
…
Two schemes:
– Non-preemptive - once the CPU has been given to a process, it
cannot be preempted until it completes its CPU burst
…
• Preemptive - if a new process arrives with a CPU burst length less
than the remaining time of the currently executing process, preempt.
• This scheme is known as Shortest-Remaining-Time-First
(SRTF)
Priority Scheduling
• A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority
• (smallest integer = highest priority)
– Preemptive
– Non-preemptive
• SJF is a special case of priority scheduling where the priority is
the predicted next CPU burst time
Processes   Burst time   Priority   Arrival time
P1          10           3          0.0
P2          1            1          1.0
P3          2            4          2.0
P4          1            5          3.0
P5          5            2          4.0
The average waiting time
= (6+0+16+18+1)/5 = 8.2
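The 8.2 figure follows from running the processes in priority order. A minimal sketch (treating all processes as available at time 0, which is how the slide's numbers come out despite the arrival-time column):

```python
def priority_average_wait(procs):
    """Waiting times under non-preemptive priority scheduling;
    `procs` maps name -> (burst, priority), smaller number = higher
    priority, all processes assumed available at time 0."""
    waits, elapsed = {}, 0
    for name, (burst, prio) in sorted(procs.items(),
                                      key=lambda kv: kv[1][1]):
        waits[name] = elapsed
        elapsed += burst
    return waits, sum(waits.values()) / len(waits)

procs = {'P1': (10, 3), 'P2': (1, 1), 'P3': (2, 4),
         'P4': (1, 5), 'P5': (5, 2)}
waits, avg = priority_average_wait(procs)
print(waits, avg)  # waits P1..P5 = 6, 0, 16, 18, 1; average 8.2
```

The execution order P2, P5, P1, P3, P4 gives waiting times 0, 1, 6, 16, 18, matching the slide's (6+0+16+18+1)/5 = 8.2.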
…
• Problem: Starvation - low-priority processes may never
execute
• Solution: Aging - as time progresses, increase the priority of
the process
Round-Robin Scheduling
• Round-Robin scheduling is designed especially for time-sharing
systems.
• It is similar to FCFS, but adds preemption
• A small unit of time, called the time quantum, is defined
• Each process gets a small unit of CPU time (one time quantum),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
…
• If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q
time units.
• Performance
• q large => behaves like FIFO
• q small => q must still be large with respect to the context-switch
time, otherwise the overhead is too high
• Typically, RR gives higher average turnaround time than SJF, but
better response time
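The preempt-and-requeue behaviour described above can be simulated directly. A minimal sketch (the workload and quantum are illustrative assumptions, all arrivals at time 0):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """Simulate round-robin and return each process's waiting time,
    computed as completion time minus burst time."""
    remaining = dict(bursts)
    queue = deque(bursts)            # ready queue in arrival order
    time, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        time += run
        remaining[p] -= run
        if remaining[p] > 0:
            queue.append(p)          # preempted: back of the ready queue
        else:
            finish[p] = time
    return {p: finish[p] - bursts[p] for p in bursts}

# Hypothetical workload with quantum q = 4:
print(rr_waiting_times({'P1': 24, 'P2': 3, 'P3': 3}, 4))
# {'P1': 6, 'P2': 4, 'P3': 7}
```

With a large quantum (q ≥ 24 here) the same call degenerates to the FCFS waiting times, illustrating the "q large => FIFO" point.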
Reading Assignment
Multilevel Queue Scheduling
Multilevel Feedback-Queue Scheduling