Algorithm Design and Complexity
Course 5

Overview

- Greedy Algorithms
  - Activity Selection
  - Huffman Trees
- Greedy vs Dynamic Programming
  - Knapsack Problem
    - Greedy
    - DP
    - Generic problem
Greedy Algorithms

- Efficient method to solve some optimization problems
- The solutions to an optimization problem must satisfy a global optimum
  - More difficult to verify
- Simplification: choose the solution that looks best at each step
  - This is called a locally optimal solution
- Advantages:
  - Simpler to build the solution
  - Less time / better complexity
- Disadvantages:
  - The locally optimal solution does not always lead to the globally optimal solution
  - May not correctly solve the problem (but may provide good approximations)
Greedy Algorithms (2)

- At each step, we choose the best solution according to the local optimum (greedy) choice
- We abandon all the other possible solutions
  - The solving paths that are not considered by the greedy choice are discarded!
- We'll look at two problems where the greedy solution also leads to the global optimum:
  - Activity selection
  - Huffman trees
- Greedy is an algorithm design technique (pattern)!
General Greedy Scheme

SolveGreedy(Local_choice, Problem)
    partial_sols = InitialSolution(Problem)  // determine the starting point
    final_sols = Φ
    WHILE (partial_sols ≠ Φ)
        FOREACH (s IN partial_sols)
            IF (s is a solution for Problem) {
                final_sols = final_sols U {s}
                partial_sols = partial_sols \ {s}
            } ELSE  // can we optimize the current solving path locally?
                IF (CanOptimize(s, Local_choice, Problem))  // YES
                    partial_sols = (partial_sols \ {s}) U
                                   OptimizeLocally(s, Local_choice, Problem)
                ELSE  // NO
                    partial_sols = partial_sols \ {s}
    RETURN final_sols

- Most times we follow only a single solving path!
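As an illustration (not from the slides), here is a minimal single-path Python rendering of the scheme, using change-making (which reappears later in the Knapsack slides) as the problem; the function name is ours, and OptimizeLocally simply picks the largest coin that still fits:

```python
def greedy_change(amount, coins):
    """Single-path instance of SolveGreedy: the partial solution is the
    list of coins chosen so far; the local choice takes the largest coin
    that still fits into the remaining amount."""
    solution = []
    remaining = amount
    while remaining > 0:
        fitting = [c for c in coins if c <= remaining]
        if not fitting:          # CanOptimize == NO: this path is a dead end
            return None
        best = max(fitting)      # the greedy (locally optimal) choice
        solution.append(best)
        remaining -= best
    return solution              # a final solution for the problem

print(greedy_change(63, [1, 5, 10, 25]))  # [25, 25, 10, 1, 1, 1]
```

Note that this greedy choice is optimal for this particular coin system, but not for arbitrary ones, which is exactly the caveat the slide raises: the local optimum does not always lead to the global one.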
Activity Selection Problem

- Given a set of n activities that require exclusive use of a common resource for a given period of time, determine the largest subset of non-overlapping activities
  - These activities are called mutually compatible
  - There might be more than a single solution
  - We want to identify one of these best solutions
    - Similar to DP, not suitable for finding all possible solutions
- Notations:
  - S = {a1, …, an} are the activities
  - Each activity has a start time, si, and a finish time, fi
  - Each activity requires the common resource for the interval [si, fi)
Activity Selection Problem (2)

- E.g.
  - Activity = classes, Resource = classroom
  - Activity = processes, Resource = CPU
- There exist some other activity selection problems that are more difficult:
  - Maximize the usage time of the resource
  - Maximize the income if each activity pays for the usage of the resource
Example – from CLRS

- We can devise a greedy solution if we consider the activities sorted by their finish times

  i     1   2   3   4   5   6   7   8   9
  s[i]  1   2   4   1   5   8   9  11  13
  f[i]  3   5   7   8   9  10  11  14  16

- Solution: {a1, a3, a6, a8}
  - Not unique: {a2, a5, a7, a9}
Define the Sub-Problems

- First, define the similar sub-problems
- Let's consider the subset Si,j of activities that:
  - Start after ai finishes (start after fi)
  - Finish before aj starts (finish before sj)
- They are compatible with all activities that:
  - Finish before fi
  - Start after sj
- Si,j = {all ak in S | fi <= sk < fk <= sj}
- We also add two invented activities:
  - a0 = [-INF, 0)
  - an+1 = [INF, INF + 1)
Define the Sub-Problems (2)

- S0,n+1 = S = the entire set of activities
- When the activities are sorted by their finish time:
  - f0 <= f1 <= f2 <= … <= fn <= fn+1
  - Si,j = Φ if i >= j, because any ak in Si,j would imply
    fi <= sk < fk <= sj < fj => fi < fj
- Therefore, the sub-problems are Si,j with 0 <= i < j <= n+1
Optimal Substructure

- Suppose an optimal solution to Si,j includes the activity ak
- Then, we need to solve two sub-problems:
  - Si,k: all activities that start after ai and finish before ak
  - Sk,j: all activities that start after ak and finish before aj
- Therefore, the solution to Si,j is made of:
  - The solution to Si,k
  - ak
  - The solution to Sk,j
  - Because ak is compatible with both Si,k and Sk,j
- |Solution to Si,j| = |Solution to Si,k| + 1 + |Solution to Sk,j|
Optimal Substructure (2)

- If an optimal solution to Si,j includes ak, then the sub-solutions for Si,k and Sk,j must also be optimal
- Ai,j = optimal solution for Si,j
  - Ai,j = Ai,k U {ak} U Ak,j
    - If Si,j is not empty
    - We know ak
- c[i, j] = |Ai,j| = maximum size of a subset of mutually compatible activities in Si,j
  - c[i, j] = 0 if i >= j
Recursive Formulation

- As we do not know the value of k, we must try all the possible choices in order to find it:
  - c[i, j] = 0, if Si,j = Φ
  - c[i, j] = max over ak in Si,j of (c[i, k] + c[k, j] + 1), otherwise
- Now, we can solve this problem using DP:
  - O(n²) sub-problems
  - O(n) choices at each step
  - O(n³) complexity for the DP solution
- We can find a better one by using a greedy strategy!
Greedy Choice

- Theorem: If Si,j is not empty and am is the activity with the earliest finish time in Si,j, then am is used by at least one of the maximum-size subsets of mutually compatible activities in Si,j
- Si,m = Φ, therefore only Sm,j needs to be solved
- Proof idea: for any other solution to Si,j, we can replace the activity that finishes earliest in that solution (let's call it ak) with am, and the activities are still mutually compatible, as am finishes no later than ak
Greedy Choice (2)

- The previous theorem offers the greedy choice
- The number of sub-problems considered in the optimal solution at each step:
  - DP: 2
  - Greedy: 1
- The number of choices to be considered at each step:
  - DP: j - i - 1
  - Greedy: 1
- As we have a single choice and a single sub-problem to solve, we can solve the problem top-down
Greedy Solution

- In order to solve Si,j:
  - Just choose the activity with the earliest finish time in Si,j: am
  - Then, solve Sm,j
- In order to solve S = S0,n+1:
  - First choice: am1 (it is always a1 – why?) for S0,n+1
  - Then we need to solve Sm1,n+1
  - Second choice: am2 for Sm1,n+1
  - Then we need to solve Sm2,n+1
  - …
Recursive Algorithm

- Because the greedy algorithm considers the activities sorted by their finish time, we first need to sort by the finish time!
  - O(n log n)

RecursiveActivitySelection(s, f, i, n)
    // find the first activity (i.e. with the earliest finish time)
    // that starts after activity i finishes
    m = i + 1
    WHILE (m <= n AND s[m] < f[i])
        m++
    IF (m <= n) THEN
        RETURN {am} U RecursiveActivitySelection(s, f, m, n)
    RETURN Φ

- Initial call: RecursiveActivitySelection(s, f, 0, n)
- Complexity: Θ(n) – go through each activity once
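The pseudocode above can be transcribed in Python as a sketch (the 1-indexing convention with a dummy entry at position 0, playing the role of the invented activity a0, is our choice), checked against the CLRS example from the earlier slide:

```python
def recursive_activity_selection(s, f, i, n):
    """Greedy selection on activities 1..n, assumed sorted by finish time.
    s and f are 1-indexed via a dummy entry at index 0 (f[0] = 0 acts as
    the invented activity a0)."""
    m = i + 1
    # find the first activity that starts after activity i finishes
    while m <= n and s[m] < f[i]:
        m += 1
    if m <= n:
        return [m] + recursive_activity_selection(s, f, m, n)
    return []

# CLRS example from the slides (index 0 is the dummy activity a0)
s = [0, 1, 2, 4, 1, 5, 8, 9, 11, 13]
f = [0, 3, 5, 7, 8, 9, 10, 11, 14, 16]
print(recursive_activity_selection(s, f, 0, 9))  # [1, 3, 6, 8]
```

The output matches the solution {a1, a3, a6, a8} given on the example slide.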
Iterative Algorithm

- Can turn the recursive algorithm into an iterative one

IterativeActivitySelection(s, f, n)
    A = {a1}
    i = 1
    FOR (m = 2..n)
        IF (s[m] < f[i])
            CONTINUE
        ELSE
            A = A U {am}
            i = m
    RETURN A

- Complexity: Θ(n) – go once through each activity
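The same iteration in Python (again a sketch with our 1-indexing convention, on the CLRS data):

```python
def iterative_activity_selection(s, f, n):
    """Iterative greedy selection; activities 1..n sorted by finish time
    (1-indexed lists with a dummy entry at index 0)."""
    selected = [1]           # a1 always belongs to some optimal solution
    i = 1                    # index of the last selected activity
    for m in range(2, n + 1):
        if s[m] >= f[i]:     # am is compatible with the last choice
            selected.append(m)
            i = m
    return selected

s = [0, 1, 2, 4, 1, 5, 8, 9, 11, 13]
f = [0, 3, 5, 7, 8, 9, 10, 11, 14, 16]
print(iterative_activity_selection(s, f, 9))  # [1, 3, 6, 8]
```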
Huffman Trees

- Efficient method of compressing files
  - Especially text files
- Builds a Huffman tree in a greedy fashion
  - Specific for the encoded text/file
  - It is used for compressing the file
- The compressed file and the Huffman tree are used to recreate the original file
- Example text: "ana are mere"
Huffman Trees (2)

- K – set of keys that are encoded (the characters in the original text file)
- In the original text, all the keys are represented on the same number of bits
- Objective: we want to find an alternative representation for each key such that:
  - The keys that are most frequent are represented on a smaller number of bits than the ones that are less frequent
  - We are able to distinguish easily in this new representation the keys that were in the original file
- Example: text files
  - Original representation: char – 8 bits or ASCII – 7 bits
  - New representation: 1 bit for the most frequent character in the encoded text, and so on…
Huffman Trees (3)

- Huffman encoding tree:
  - An ordered binary tree
  - Only the leaves contain the keys from the set K
  - All internal nodes must have exactly 2 children
  - The edges are coded:
    - 0 – left edge
    - 1 – right edge
- The code in the new representation for each key is the sequence of edge codes from the root to the leaf containing that key
- Start from the frequency of appearance of each key in the original file: p(k) for each k in K
- Example: "ana are mere"
  - p(a) = p(e) = 0.25; p(n) = p(m) = 0.083; p(r) = p( ) = 0.166
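These frequencies can be checked directly (a quick sketch; the example text has 12 characters):

```python
from collections import Counter

text = "ana are mere"
counts = Counter(text)                      # a:3, e:3, r:2, ' ':2, n:1, m:1
p = {ch: n / len(text) for ch, n in counts.items()}
print(p['a'], p['e'])   # 0.25 0.25
print(p['n'], p['m'])   # ~0.083 each
print(p['r'], p[' '])   # ~0.166 each
```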
The Huffman Tree

- T – encoding tree for the set of keys K
- code_length(k) – the length of the code for key k in tree T
- level(k, T) – the level in tree T of the leaf corresponding to key k
- The cost of an encoding tree T for a set of keys K that have the frequencies p:

  Cost(T) = Σ (k in K) code_length(k) * p(k) = Σ (k in K) level(k, T) * p(k)

- Huffman Tree = an encoding tree of minimum cost for a set of keys K with frequencies p
  - The codes in this tree are called Huffman codes
  - Optimization problem!
Building the Huffman Tree

- We can devise a greedy algorithm for building a Huffman tree for any set of keys K
- Steps:
  1. For each key k in K, build a simple tree with a single node that contains k and has the weight w = p(k). Let the forest of trees be called Forest.
  2. Choose any two trees from Forest that have the minimum weights. Let them be t1 and t2.
  3. Remove t1 and t2 from Forest and add a new tree:
     a) That has a new root r that does not contain any key (as it is not a leaf)
     b) The two descendants of r are t1 and t2, respectively
     c) The weight of the new tree is w(r) = w(t1) + w(t2)
  4. Repeat steps 2 and 3 until Forest contains a single tree => the Huffman tree
Example

- Input: "ana are mere"
  - p('a') = p('e') = 0.25; p('n') = p('m') = 0.083; p('r') = p(' ') = 0.166
- Initially:

  W(a)=0.25  W(e)=0.25  W(r)=0.16  W( )=0.16  W(m)=0.08  W(n)=0.08
Example – Building the Huffman Tree

- Intermediate steps: on whiteboard
- Solution:
  - Encoding: 'a' : 00, 'e' : 11, 'r' : 010, ' ' : 011, 'm' : 100, 'n' : 101
  - Cost of the tree: Cost(Tree) = 2 * 0.25 + 2 * 0.25 + 3 * 0.083 + 3 * 0.083 + 3 * 0.166 + 3 * 0.166 ≈ 2.5 bits per character
Huffman Tree (edge labels: 0 = left, 1 = right)

- root
  - 0: W(a+r+ ) = 0.57
    - 0: W(a) = 0.25 -> 'a' = 00
    - 1: W(r+ ) = 0.32
      - 0: W(r) -> 'r' = 010
      - 1: W( ) -> ' ' = 011
  - 1: W(m+n+e) = 0.41
    - 0: W(m+n) = 0.16
      - 0: W(m) -> 'm' = 100
      - 1: W(n) -> 'n' = 101
    - 1: W(e) = 0.25 -> 'e' = 11
Algorithm for building the Huffman Tree

- On the whiteboard
- Straightforward from the pseudocode
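As one possible rendering of the whiteboard algorithm, the steps above can be sketched with Python's heapq as the Forest (weights taken as raw character counts; tie-breaking between equal weights may flip subtrees, but the cost of the resulting tree is the same):

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(text):
    """Build a Huffman tree with a min-heap Forest and return {key: code}.
    Heap entries are (weight, tiebreak, tree); a tree is either a leaf
    (a character) or a (left, right) pair."""
    tie = count()  # avoids comparing trees when weights are equal
    forest = [(w, next(tie), ch) for ch, w in Counter(text).items()]
    heapq.heapify(forest)
    while len(forest) > 1:
        w1, _, t1 = heapq.heappop(forest)   # two minimum-weight trees
        w2, _, t2 = heapq.heappop(forest)
        heapq.heappush(forest, (w1 + w2, next(tie), (t1, t2)))
    codes = {}
    def walk(tree, code):
        if isinstance(tree, str):
            codes[tree] = code or "0"       # degenerate single-key case
        else:
            walk(tree[0], code + "0")       # 0 = left edge
            walk(tree[1], code + "1")       # 1 = right edge
    walk(forest[0][2], "")
    return codes

codes = huffman_codes("ana are mere")
encoded_bits = sum(len(codes[ch]) for ch in "ana are mere")
print(encoded_bits)  # 30 bits total, i.e. 2.5 bits/char on average
```

The total of 30 bits matches the cost computed on the example slide, and it is the same for every valid tie-breaking order, even though individual code lengths may differ between runs.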
Decoding the File

- Encoded text: 001010001100010110111001101011
  - Split by codes: 00|101|00|011|00|010|11|011|100|11|010|11
  - Keys:           a |n  |a |' '|a |r  |e |' '|m  |e |r  |e
- We also need the Huffman tree
- Starting from the first bit, we walk the tree from the root to the first leaf we encounter
  - When at a leaf, append the key corresponding to that leaf to the decoded text
  - Go to the root again and repeat until we reach the end of the encoded text
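The decoding walk can be sketched in Python; for brevity, the sketch uses the code table from the example slide instead of an explicit tree, so matching a complete code plays the role of reaching a leaf (this only works because Huffman codes are prefix-free):

```python
CODES = {'a': '00', 'e': '11', 'r': '010', ' ': '011', 'm': '100', 'n': '101'}

def encode(text):
    return "".join(CODES[ch] for ch in text)

def decode(bits):
    """Walk the (implicit) Huffman tree: extend the current path one bit
    at a time; when the path spells a complete code (a leaf), emit the
    key and restart at the root."""
    inverse = {code: ch for ch, code in CODES.items()}
    out, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:        # reached a leaf
            out.append(inverse[current])
            current = ""              # back to the root
    return "".join(out)

bits = encode("ana are mere")
print(bits)            # 001010001100010110111001101011
print(decode(bits))    # ana are mere
```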
Greedy Algorithms – Conclusions

- Greedy algorithms that build the globally optimal solution can be devised for some problems that have an optimal substructure
- Steps for devising a greedy algorithm:
  - Determine the optimal substructure
  - Develop a recursive solution
  - Prove that at any stage of the recursion, one of the optimal choices is the greedy choice; therefore, it's always safe to make the greedy choice
  - Show that all but one of the sub-problems resulting from the greedy choice are empty
  - Develop a recursive greedy algorithm
  - Convert it to an iterative algorithm
Greedy Algorithms – Conclusions (2)

- Properties of optimization problems that admit correct greedy solutions:
  - Optimal substructure
  - Greedy choice property
- Preprocessing is essential for efficient greedy algorithms:
  - E.g. sort some data before processing it with the greedy algorithm
Greedy vs. DP

- Similarities:
  - Optimization problems
  - Optimal substructure (including division into sub-problems)
  - Make a choice at each step
- Differences:
  - Greedy: 1 choice, 1 sub-problem to be solved
  - Greedy is top-down, DP is bottom-up
  - Greedy has the greedy choice property
  - Greedy does not use memoization, as the other sub-problems are not important (they are discarded if they are not used by the greedy choice)
Knapsack Problem

- Given a set of n items:
  - Values v[i]
  - Weights w[i]
- Which are the items that should be carried in order to maximize the total value that can be carried in a knapsack of total weight W?
  - Optimization problem
- Similar to the change-making problem:
  - Given a set of denominations (coins and banknotes for a currency), find the minimum number of coins and banknotes needed to change a given amount of money
Knapsack Problem (2)

- Can be solved efficiently if:
  - We are allowed to carry fractions of the items
    - Fractional knapsack problem
    - Greedy solution: sort the items by the ratio v[i]/w[i] and choose the items in decreasing order of this ratio until the knapsack is full
  - We are not allowed to carry fractions of the items, but the weights and values are relatively small integers
    - Integer (0/1) knapsack problem
    - DP solution: on whiteboard
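The greedy rule for the fractional variant can be sketched as follows (a hypothetical helper, tested on a classic instance; only the last item taken may be fractional):

```python
def fractional_knapsack(values, weights, W):
    """Greedy fractional knapsack: take items by decreasing value/weight
    ratio, filling the remaining capacity with a fraction of the last item."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total, capacity = 0.0, W
    for v, w in items:
        if capacity == 0:
            break
        take = min(w, capacity)      # whole item, or the fitting fraction
        total += v * take / w
        capacity -= take
    return total

# ratios 6, 5, 4 -> take items 1 and 2 whole, then 2/3 of item 3
print(fractional_knapsack([60, 100, 120], [10, 20, 30], 50))  # 240.0
```

The same greedy rule fails for the 0/1 variant, which is why that case needs DP (or worse, as the next slide notes).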
Knapsack Problem (3)

- However, in the general case:
  - Real values for the weights
  - Very high values for the weights
- The problem can only be solved using a backtracking approach
  - The problem is NP-complete
    - The class of the most difficult problems that can be solved on a computer (at this moment, it is considered that these problems cannot be solved in polynomial time)
References

- CLRS – Chapter 16
- MIT OCW – Introduction to Algorithms – video lecture 16
- http://www.math.fau.edu/locke/Greedy.htm

More Related Content

What's hot (19)

PDF
Dynamic programming
Jay Nagar
 
PPTX
Algorithm Design and Complexity - Course 6
Traian Rebedea
 
PDF
Algorithm chapter 9
chidabdu
 
PPTX
Comparitive Analysis of Algorithm strategies
Talha Shaikh
 
DOC
algorithm Unit 3
Monika Choudhery
 
PPT
5.1 greedy
Krish_ver2
 
PPT
5.2 divide and conquer
Krish_ver2
 
PDF
01. design & analysis of agorithm intro & complexity analysis
Onkar Nath Sharma
 
PPT
Dinive conquer algorithm
Mohd Arif
 
PPT
Lecture 8 dynamic programming
Oye Tu
 
PPTX
Divide and Conquer - Part 1
Amrinder Arora
 
PPTX
Asymptotic Notation and Data Structures
Amrinder Arora
 
PPT
Np cooks theorem
Narayana Galla
 
PPT
5.3 dynamic programming
Krish_ver2
 
Dynamic programming
Jay Nagar
 
Algorithm Design and Complexity - Course 6
Traian Rebedea
 
Algorithm chapter 9
chidabdu
 
Comparitive Analysis of Algorithm strategies
Talha Shaikh
 
algorithm Unit 3
Monika Choudhery
 
5.1 greedy
Krish_ver2
 
5.2 divide and conquer
Krish_ver2
 
01. design & analysis of agorithm intro & complexity analysis
Onkar Nath Sharma
 
Dinive conquer algorithm
Mohd Arif
 
Lecture 8 dynamic programming
Oye Tu
 
Divide and Conquer - Part 1
Amrinder Arora
 
Asymptotic Notation and Data Structures
Amrinder Arora
 
Np cooks theorem
Narayana Galla
 
5.3 dynamic programming
Krish_ver2
 

Similar to Algorithm Design and Complexity - Course 5 (20)

PPT
Dynamic programming
Gopi Saiteja
 
PPTX
Design and Analysis of Algorithm-Lecture.pptx
bani30122004
 
PDF
Daa chapter4
B.Kirron Reddi
 
PPTX
Module-1.pptxbdjdhcdbejdjhdbchchchchchjcjcjc
shashashashashank
 
PPSX
Design and Analysis of Algorithms (Greedy Algorithm)
SababAshfakFahim
 
PPS
Slides
xbj25kl
 
PPTX
Presentation_23953_Content_Document_20240906040454PM.pptx
rameshmanoj733
 
PPTX
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
danielgetachew0922
 
PPT
lecture 23
sajinsc
 
PPT
Chapter 17
ashish bansal
 
PPTX
2-Algorithms and Complexity analysis.pptx
231b209
 
PPT
Data Structures- Part2 analysis tools
Abdullah Al-hazmy
 
PDF
Unit-3 greedy method, Prim's algorithm, Kruskal's algorithm.pdf
yashodamb
 
PPTX
Algorithm Design Techiques, divide and conquer
Minakshee Patil
 
PPTX
Dynamic Programming
Sahil Kumar
 
PPTX
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
Tanya Makkar
 
PPT
test pre
farazch
 
PPT
Kk20503 1 introduction
Low Ying Hao
 
PPT
Big-O notations, Algorithm and complexity analaysis
drsomya2019
 
PPTX
DSA Complexity.pptx What is Complexity Analysis? What is the need for Compl...
2022cspaawan12556
 
Dynamic programming
Gopi Saiteja
 
Design and Analysis of Algorithm-Lecture.pptx
bani30122004
 
Daa chapter4
B.Kirron Reddi
 
Module-1.pptxbdjdhcdbejdjhdbchchchchchjcjcjc
shashashashashank
 
Design and Analysis of Algorithms (Greedy Algorithm)
SababAshfakFahim
 
Slides
xbj25kl
 
Presentation_23953_Content_Document_20240906040454PM.pptx
rameshmanoj733
 
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
danielgetachew0922
 
lecture 23
sajinsc
 
Chapter 17
ashish bansal
 
2-Algorithms and Complexity analysis.pptx
231b209
 
Data Structures- Part2 analysis tools
Abdullah Al-hazmy
 
Unit-3 greedy method, Prim's algorithm, Kruskal's algorithm.pdf
yashodamb
 
Algorithm Design Techiques, divide and conquer
Minakshee Patil
 
Dynamic Programming
Sahil Kumar
 
TIME EXECUTION OF DIFFERENT SORTED ALGORITHMS
Tanya Makkar
 
test pre
farazch
 
Kk20503 1 introduction
Low Ying Hao
 
Big-O notations, Algorithm and complexity analaysis
drsomya2019
 
DSA Complexity.pptx What is Complexity Analysis? What is the need for Compl...
2022cspaawan12556
 
Ad

More from Traian Rebedea (20)

PPTX
An Evolution of Deep Learning Models for AI2 Reasoning Challenge
Traian Rebedea
 
PDF
AI @ Wholi - Bucharest.AI Meetup #5
Traian Rebedea
 
PDF
Deep neural networks for matching online social networking profiles
Traian Rebedea
 
PDF
Intro to Deep Learning for Question Answering
Traian Rebedea
 
PPTX
What is word2vec?
Traian Rebedea
 
PPT
How useful are semantic links for the detection of implicit references in csc...
Traian Rebedea
 
PPT
A focused crawler for romanian words discovery
Traian Rebedea
 
PPTX
Detecting and Describing Historical Periods in a Large Corpora
Traian Rebedea
 
PDF
Practical machine learning - Part 1
Traian Rebedea
 
PPT
Propunere de dezvoltare a carierei universitare
Traian Rebedea
 
PPT
Automatic plagiarism detection system for specialized corpora
Traian Rebedea
 
PPT
Relevance based ranking of video comments on YouTube
Traian Rebedea
 
PPT
Opinion mining for social media and news items in Romanian
Traian Rebedea
 
PPT
PhD Defense: Computer-Based Support and Feedback for Collaborative Chat Conve...
Traian Rebedea
 
PPT
Importanța algoritmilor pentru problemele de la interviuri
Traian Rebedea
 
PPTX
Web services for supporting the interactions of learners in the social web - ...
Traian Rebedea
 
PPT
Automatic assessment of collaborative chat conversations with PolyCAFe - EC-T...
Traian Rebedea
 
PPT
Conclusions and Recommendations of the Romanian ICT RTD Survey
Traian Rebedea
 
PPT
Istoria Web-ului - part 2 - tentativ How to Web 2009
Traian Rebedea
 
PPT
Istoria Web-ului - part 1 (2) - tentativ How to Web 2009
Traian Rebedea
 
An Evolution of Deep Learning Models for AI2 Reasoning Challenge
Traian Rebedea
 
AI @ Wholi - Bucharest.AI Meetup #5
Traian Rebedea
 
Deep neural networks for matching online social networking profiles
Traian Rebedea
 
Intro to Deep Learning for Question Answering
Traian Rebedea
 
What is word2vec?
Traian Rebedea
 
How useful are semantic links for the detection of implicit references in csc...
Traian Rebedea
 
A focused crawler for romanian words discovery
Traian Rebedea
 
Detecting and Describing Historical Periods in a Large Corpora
Traian Rebedea
 
Practical machine learning - Part 1
Traian Rebedea
 
Propunere de dezvoltare a carierei universitare
Traian Rebedea
 
Automatic plagiarism detection system for specialized corpora
Traian Rebedea
 
Relevance based ranking of video comments on YouTube
Traian Rebedea
 
Opinion mining for social media and news items in Romanian
Traian Rebedea
 
PhD Defense: Computer-Based Support and Feedback for Collaborative Chat Conve...
Traian Rebedea
 
Importanța algoritmilor pentru problemele de la interviuri
Traian Rebedea
 
Web services for supporting the interactions of learners in the social web - ...
Traian Rebedea
 
Automatic assessment of collaborative chat conversations with PolyCAFe - EC-T...
Traian Rebedea
 
Conclusions and Recommendations of the Romanian ICT RTD Survey
Traian Rebedea
 
Istoria Web-ului - part 2 - tentativ How to Web 2009
Traian Rebedea
 
Istoria Web-ului - part 1 (2) - tentativ How to Web 2009
Traian Rebedea
 
Ad

Recently uploaded (20)

PPTX
How to Configure Refusal of Applicants in Odoo 18 Recruitment
Celine George
 
PDF
Learning Styles Inventory for Senior High School Students
Thelma Villaflores
 
PPTX
PLANNING FOR EMERGENCY AND DISASTER MANAGEMENT ppt.pptx
PRADEEP ABOTHU
 
PPTX
The Gift of the Magi by O Henry-A Story of True Love, Sacrifice, and Selfless...
Beena E S
 
PPTX
Matatag Curriculum English 8-Week 1 Day 1-5.pptx
KirbieJaneGasta1
 
PPTX
How Physics Enhances Our Quality of Life.pptx
AngeliqueTolentinoDe
 
PPTX
Elo the Hero is an story about a young boy who became hero.
TeacherEmily1
 
PPT
21st Century Literature from the Philippines and the World QUARTER 1/ MODULE ...
isaacmendoza76
 
PDF
TLE 8 QUARTER 1 MODULE WEEK 1 MATATAG CURRICULUM
denniseraya1997
 
PDF
DIGESTION OF CARBOHYDRATES ,PROTEINS AND LIPIDS
raviralanaresh2
 
PPTX
Practice Gardens and Polytechnic Education: Utilizing Nature in 1950s’ Hu...
Lajos Somogyvári
 
PPTX
Urban Hierarchy and Service Provisions.pptx
Islamic University of Bangladesh
 
PPTX
Comparing Translational and Rotational Motion.pptx
AngeliqueTolentinoDe
 
PDF
Rapid Mathematics Assessment Score sheet for all Grade levels
DessaCletSantos
 
PDF
Lesson 1 : Science and the Art of Geography Ecosystem
marvinnbustamante1
 
PPTX
Ward Management: Patient Care, Personnel, Equipment, and Environment.pptx
PRADEEP ABOTHU
 
PPTX
How to Manage Wins & Losses in Odoo 18 CRM
Celine George
 
PPTX
How to Setup Automatic Reordering Rule in Odoo 18 Inventory
Celine George
 
PDF
Andreas Schleicher_Teaching Compass_Education 2040.pdf
EduSkills OECD
 
PPTX
ESP 10 Edukasyon sa Pagpapakatao PowerPoint Lessons Quarter 1.pptx
Sir J.
 
How to Configure Refusal of Applicants in Odoo 18 Recruitment
Celine George
 
Learning Styles Inventory for Senior High School Students
Thelma Villaflores
 
PLANNING FOR EMERGENCY AND DISASTER MANAGEMENT ppt.pptx
PRADEEP ABOTHU
 
The Gift of the Magi by O Henry-A Story of True Love, Sacrifice, and Selfless...
Beena E S
 
Matatag Curriculum English 8-Week 1 Day 1-5.pptx
KirbieJaneGasta1
 
How Physics Enhances Our Quality of Life.pptx
AngeliqueTolentinoDe
 
Elo the Hero is an story about a young boy who became hero.
TeacherEmily1
 
21st Century Literature from the Philippines and the World QUARTER 1/ MODULE ...
isaacmendoza76
 
TLE 8 QUARTER 1 MODULE WEEK 1 MATATAG CURRICULUM
denniseraya1997
 
DIGESTION OF CARBOHYDRATES ,PROTEINS AND LIPIDS
raviralanaresh2
 
Practice Gardens and Polytechnic Education: Utilizing Nature in 1950s’ Hu...
Lajos Somogyvári
 
Urban Hierarchy and Service Provisions.pptx
Islamic University of Bangladesh
 
Comparing Translational and Rotational Motion.pptx
AngeliqueTolentinoDe
 
Rapid Mathematics Assessment Score sheet for all Grade levels
DessaCletSantos
 
Lesson 1 : Science and the Art of Geography Ecosystem
marvinnbustamante1
 
Ward Management: Patient Care, Personnel, Equipment, and Environment.pptx
PRADEEP ABOTHU
 
How to Manage Wins & Losses in Odoo 18 CRM
Celine George
 
How to Setup Automatic Reordering Rule in Odoo 18 Inventory
Celine George
 
Andreas Schleicher_Teaching Compass_Education 2040.pdf
EduSkills OECD
 
ESP 10 Edukasyon sa Pagpapakatao PowerPoint Lessons Quarter 1.pptx
Sir J.
 

Algorithm Design and Complexity - Course 5

  • 1. Algorithm Design and Complexity Course 5
  • 2. Overview      Greedy Algorithms Activity Selection Huffman Trees Greedy vs Dynamic Programming Knapsack Problem    Greedy DP Generic problem
  • 3. Greedy Algorithms   Efficient method to solve some optimization problems The solutions to an optimization problem must satisfy a global optimum     Advantages:    More difficult to verify Simplification: choose the solution that looks best at each step This is called a locally optimal solution Simpler to build the solution Less time / Better complexity Disadvantage:   The locally optimal solution does not always lead to the globally optimal solution May not correctly solve the problem (but may provide good approximations)
  • 4. Greedy Algorithms (2)   At each step, we choose the best solution according to the local optimum (greedy) choice We abandon all the other possible solutions   We‟ll look at two problems that have a greedy solution that leads to the global optimum as well    The solving paths that are not considered by the greedy choice are discarded! Activity selection Huffman trees Greedy is an algorithm design technique (pattern)!
  • 5. General Greedy Scheme SolveGreedy(Local_choice, Problem) partial_sols = InitialSolution(problem); // determine the starting point final_sols = Φ; WHILE (partial_sols ≠ Φ) FOREACH (s IN partial_sols) IF (s is a solution for Problem) { final_sols = final_sols U {s}; partial_sols = partial_sols {s}; } ELSE // can you optimize current solving path locally ? IF(CanOptimize(s, Local_choice, Problem)) // YES partial_sols = partial_sols {s} U OptimizeLocally(s, Local_choice, Problem) ELSE partial_sols = partial_sols {s}; // NO RETURN final_sols;  Most times we follow only a single solving path!
  • 6. Activity Selection Problem  Given a set of n activities that require exclusive use of a common resource for a given period of time, determine the largest subset of non-overlapping activities    These activities are called mutually compatible There might be more than a single solution We want to identify one of these best solutions   Similar to DP, not suitable for finding all possible solutions Notations:    S = {a1, … , an} are the activities Each activity has a start time, si, and a finish time, fi Each activity requires the common resource for the interval [si, fi)
  • 7. Activity Selection Problem (2)  E.g.    Activity = classes Activity = processes Resource = classroom Resource = CPU There exist some other activity selection problems that are more difficult:   Maximize the usage time of the resource Maximize income if each activity pays for the usage of the resource
  • 8. Example – from CLRS  We can devise a greedy solution if we consider the activities sorted by their finish times i 1 2 3 4 5 6 7 8 9 s[i] 1 2 4 1 5 8 9 11 13 f[i] 3 5 7 8 9 10 11 14 16  Solution: {a1, a3, a6, a8}  Not unique: {a2, a5, a7, a9}
  • 9. Define the Sub-Problems   First, define the similar sub-problems Let‟s consider the subset of activities that:    Start after ai finishes (start after fi) Finish before aj starts (finish before sj) They are compatible with all activities that:   Finish before fi Start after sj  Si,j = {all ak in S | fi <= sk < fk < sj}  We also add two invented activities:   a0 = [-INF, 0) an+1 = [INF, INF + 1)
  • 10. Define the Sub-Problems (2)  S0,n+1 = S = the entire set of activities  When the activities are sorted by their finish time   f0 <= f1 <= f2 <= … <= fn <= fn+1 Si,j = Φ if i > j    fi <= sk < fk < sj < fj => fi < fj Therefore, the sub-problems are Si,j with 0 <= i < j <= n+1
  • 11. Optimal Substructure   Suppose an optimal solution to Si,j includes the activity ak Then, we need to solve two sub-problems:    Therefore, the solution to Si,j is made of:     Si,k: all activities that start after ai and finish before ak Sk,j: all activities that start after ak and finish before aj The solution to Si,k ak The solution to Sk,j Because ak is compatible with both Si,k and Sk,j  |Solution to Si,j| = |Solution to Si,k| + 1 + |Solution to Sk,j|
  • 12. Optimal Substructure (2)  If an optimal solution to Si,j includes ak, then the subsolutions for Si,k and Sk,j must also be optimal Ai,j = optimal solution for Si,j  Ai,j = Ai,k U {ak} U Ak,j     If Si,j is not empty We know ak c[i, j] = |Ai,j| = maximum size of the subset of mutually compatible activities in Ai,j  c[i, j] = 0 if i >= j
  • 13. Recursive Formulation  As we do not know the value of k, we must try all the possible choices in order to find it  Now, we can solve this problem using DP     O(n2) sub-problems O(n) choices at each step O(n3) complexity for the DP solution We can find a better one by using a greedy strategy!
  • 14. Greedy Choice      Theorem If Si,j is not empty and am is the activity with the earliest finish time in Si,j Then, am is used by at least one of the maximum size subset of mutually independent activities in Si,j Si,m = Φ , therefore only Sm,j needs to be solved For any other solution to Si,j , we can replace the activity that finishes earliest in this solution (let‟s call it ak) with am, and these activities are still mutually independent, as am finishes earlier than ak
  • 15. Greedy Choice (2)   The previous theorem offers the greedy choice The number of sub-problems considered in the optimal solution at each step:    The number of choices to be considered at each step:    DP: 2 Greedy: 1 DP: j-i-1 Greedy: 1 As we have a single choice and a single subproblem to solve, we can solve the problem topdown
  • 16. Greedy Solution  In order to solve Si,j  Just choose the activity with the earliest finish time in Si,j    am Then, solve Sm,j In order to solve S = S0,n+1      First choice am1 (is always a1 – why?) for S0,n+1 Then need to solve Sm1,n+1 Second choice am2 for Sm1,n+1 Then need to solve Sm2,n+1 …
  • 17. Recursive Algorithm  Because the greedy algorithm considers the activities sorted by their finish time, we first need to sort by the finish time!  O(n logn) RecursiveActivitySelection(s, f, i, n) m = i +1 WHILE (m <= n AND s[m] < f[i]) m++ // find the activity with the earliest // start time that starts after activity i finishes IF (m <= n) THEN RETURN {am} U RecursiveActivitySelection(s, f, m, n) RETURN Φ Initial call: RecursiveActivitySelection(s, f, 0, n) Complexity: (n) – go through each activity once
  • 18. Iterative Algorithm  Can turn the recursive algorithm into an iterative one IterativeActivitySelection(s, f, n) A = {a1} i=1 FOR (m = 2..n) IF (s[m] < f[i]) CONTINUE ELSE A = A U {am} i=m RETURN A Complexity: (n) – go once through each activity
  • 19. Huffman Trees  Efficient method of compressing files   Especially text files Builds a Huffman tree in a greedy fashion   Specific for the encoded text/file It is used for compressing the file  The compressed file and the Huffman tree are used to recreate the original file  Example text: “ana are mere”
  • 20. Huffman Trees (2)    K – set of keys that are encoded (the characters in the original text file) In the original text, all the keys are represented on the same number of bits Objective: we want to find an alternative representation for each key such that:    The keys that are most frequent are represented on a smaller number of bits than the ones that are less frequent We are able to distinguish easily in this new representation what are the keys that were in the original file Example: text files   Original representation: char – 8 bits or ASCII – 7 bits New representation: 1 bit for the most frequent character in the encoded text and so on…
  • 21. Huffman Trees (3)  Huffman encoding tree:     An ordered binary tree Only the leaves contain the keys from the set K All internal nodes must have exactly 2 children The edges are coded:      0 – left edge 1 – right edge The code in the new representation for each key is the set of codes from the root to the leaf containing that key Start from the frequency of appearance of each key in the original file: p(k) for each k in K Example: “ana are mere”  p(a) = p(e) = 0.25; p(n) = p(m) = 0.083;p(r) = p( ) = 0.166
  • 22. The Huffman Tree     T – encoding tree for the set of keys K code_length(k) – the length of the code for key k in tree T level(k, T) – the level in tree T for the leaf corresponding to key k The cost of an encoding tree T for a set of keys K that have the frequencies p: Cost (T ) code _ length(k ) * p(k ) k K  level (k , T ) * p(k ) k K Huffman Tree = An encoding tree of minimum cost for a set of keys K with frequencies p   The codes in this tree are called Huffman codes Optimization problem!
  • 23. Building the Huffman Tree We can devise a greedy algorithm for building a Huffman tree for any set of keys K  Steps: 1. For each key k in K build a simple tree with a single node that contains k and has the weight w = p(k). Let the forest of trees be called Forest. 2. Choose any two trees from Forest that have the minimum weights. Let them be t1 and t2. 3. Remove t1 and t2 from Forest and add a new tree:  a) b) c) That has a new root r that does not contain any key (as it is not a leaf) The two descendents of r are t1 and t2 respectively. The weight of the new tree is w(r) = w(t1) + w(t2) Repeat steps 2 and 3 until Forest contains a single tree 4.  => the Huffman tree
Example

Input: “ana are mere”
p('a') = p('e') = 0.25; p('n') = p('m') = 0.083; p('r') = p(' ') = 0.166
Initially:
  W(a) = 0.25, W(e) = 0.25, W(r) = 0.16, W( ) = 0.16, W(m) = 0.08, W(n) = 0.08
Example – Building the Huffman Tree

Intermediate steps: on whiteboard
Solution:
  Encoding: 'a': 00, 'e': 11, 'r': 010, ' ': 011, 'm': 100, 'n': 101
  Cost of the tree:
    Cost(Tree) = 2 * 0.25 + 2 * 0.25 + 3 * 0.083 + 3 * 0.083 + 3 * 0.166 + 3 * 0.166 ≈ 2.5 bits per character
Huffman tree (edge labels: 0 = left, 1 = right):
  root
    0: W(a+r+ ) = 0.57
       0: W(a) = 0.25
       1: W(r+ ) = 0.32
          0: W(r) = 0.16
          1: W( ) = 0.16
    1: W(m+n+e) = 0.41
       0: W(m+n) = 0.16
          0: W(m) = 0.08
          1: W(n) = 0.08
       1: W(e) = 0.25
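The tree cost can be checked directly from the cost formula, using the exact frequencies of “ana are mere” (12 characters) and the code lengths read off the tree:

```python
# Exact frequencies of "ana are mere" (12 characters) and the
# code lengths from the encoding above.
p = {'a': 3/12, 'e': 3/12, 'r': 2/12, ' ': 2/12, 'n': 1/12, 'm': 1/12}
code_length = {'a': 2, 'e': 2, 'r': 3, ' ': 3, 'n': 3, 'm': 3}

# Cost(T) = sum over keys of code_length(k) * p(k)
cost = sum(code_length[k] * p[k] for k in p)
print(cost)  # ~2.5 bits per character on average, vs. 8 bits originally
```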
Algorithm for Building the Huffman Tree

On the whiteboard
Straightforward from the pseudocode
Decoding the File

Encoded text: 001010001100010110111001101011
  decodes to: a n a ' ' a r e ' ' m e r e
We also need the Huffman tree
Starting from the first bit, we walk the tree from the root to the first leaf we encounter
  When at a leaf, append the key corresponding to that leaf to the decoded text
  Go back to the root and repeat until we reach the end of the encoded text
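Because Huffman codes are prefix-free, the root-to-leaf walk can be sketched with just the code table: accumulating bits until they match a complete code is equivalent to walking edges until a leaf is reached (the function name is illustrative):

```python
def huffman_decode(bits, codes):
    """Decode a bit string using a Huffman code table {key: code}.
    Equivalent to walking the tree from the root: each bit follows an
    edge; matching a complete code means a leaf was reached."""
    leaves = {code: key for key, code in codes.items()}
    decoded, path = [], ""
    for bit in bits:
        path += bit              # follow edge 0 (left) or 1 (right)
        if path in leaves:       # reached a leaf
            decoded.append(leaves[path])
            path = ""            # restart from the root
    return "".join(decoded)
```

Running it on the encoding from the example recovers the original text, e.g. `huffman_decode("001010001100010110111001101011", codes)` with the code table above gives `"ana are mere"`.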
Greedy Algorithms – Conclusions

Greedy algorithms that build the globally optimal solution can be devised for some problems that have an optimal substructure
Steps for devising a greedy algorithm:
  Determine the optimal substructure
  Develop a recursive solution
  Prove that at any stage of the recursion, one of the optimal choices is the greedy choice; therefore, it is always safe to make the greedy choice
  Show that all but one of the sub-problems resulting from the greedy choice are empty
  Develop a recursive greedy algorithm
  Convert it to an iterative algorithm
Greedy Algorithms – Conclusions (2)

Properties of optimization problems that admit correct greedy solutions:
  Optimal substructure
  Greedy choice property
Preprocessing is essential for efficient greedy algorithms:
  E.g., sort the data before processing it with the greedy algorithm
Greedy vs. DP

Similarities:
  Optimization problems
  Optimal substructure (including division into subproblems)
  Make a choice at each step
Differences:
  Greedy: 1 choice, 1 sub-problem to be solved
  Greedy is top-down, DP is bottom-up
  Greedy has the greedy choice property
  Greedy does not use memoization, as the other subproblems are not important (they are discarded if they are not used by the greedy choice)
Knapsack Problem

Given a set of n items:
  Values v[i]
  Weights w[i]
Which items should be carried in order to maximize the total value in a knapsack of total weight capacity W?
  Optimization problem
Similar to the change-making problem:
  Given a set of denominations (coins and banknotes of a currency), find the minimum number of coins and banknotes needed to change a given amount of money
Knapsack Problem (2)

Can be solved efficiently if:
  We are allowed to carry fractions of the items
    Fractional knapsack problem
    Greedy solution: sort the items by the ratio v[i]/w[i] and take items in decreasing order of this ratio until the knapsack is full
  We are not allowed to carry fractions of the items, but the weights and values are relatively small integers
    Integer (0/1) knapsack problem
    DP solution: on whiteboard
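Both variants above can be sketched in a few lines of Python; the fractional version is the greedy ratio rule, and the 0/1 version is a standard capacity-indexed DP table (function names are illustrative):

```python
def fractional_knapsack(values, weights, W):
    """Greedy: take items in decreasing v/w ratio, splitting the last
    item if it does not fit whole. Returns the maximum total value."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if W <= 0:
            break
        take = min(w, W)          # whole item, or the fraction that fits
        total += v * take / w
        W -= take
    return total

def knapsack_01(values, weights, W):
    """DP: best[c] = max value achievable with capacity c using the
    items seen so far. Greedy is NOT optimal for this variant."""
    best = [0] * (W + 1)
    for v, w in zip(values, weights):
        for c in range(W, w - 1, -1):   # descending: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[W]
```

On the classic instance values = [60, 100, 120], weights = [10, 20, 30], W = 50, the fractional greedy achieves 240 (taking 2/3 of the last item) while the 0/1 optimum is only 220, which shows why fractions make the greedy choice safe.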
Knapsack Problem (3)

However, in the general case:
  Real values for the weights
  Very high values for the weights
The problem can then only be solved using a backtracking (exhaustive search) approach
  The problem is NP-complete
  Among the most difficult problems that can be solved on a computer (at this moment, it is believed that these problems cannot be solved in polynomial time)
References

CLRS – Chapter 16
MIT OCW – Introduction to Algorithms – video lecture 16
http://www.math.fau.edu/locke/Greedy.htm