SEG4630 2009-2010
Tutorial 2 – Frequent Pattern Mining
Frequent Patterns

- Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
- Itemset: a set of one or more items
- k-itemset: X = {x1, …, xk}
- Mining algorithms:
  - Apriori
  - FP-growth

Example transaction database:

Tid | Items bought
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Beer
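The definitions above can be checked directly against this table. A minimal sketch in Python (the database is the one shown; the helper name is mine):

```python
# Support counting over the slide's transaction database (a sketch).
transactions = [
    {"Beer", "Nuts", "Diaper"},                    # Tid 10
    {"Beer", "Coffee", "Diaper"},                  # Tid 20
    {"Beer", "Diaper", "Eggs"},                    # Tid 30
    {"Nuts", "Eggs", "Milk"},                      # Tid 40
    {"Nuts", "Coffee", "Diaper", "Eggs", "Beer"},  # Tid 50
]

def support_count(itemset, db):
    """Absolute support: the number of transactions containing the itemset."""
    return sum(1 for t in db if itemset <= t)

print(support_count({"Beer", "Diaper"}, transactions))  # 4 of the 5 transactions
```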
Support & Confidence

- Support
  - (Absolute) support, or support count, of X: frequency of occurrence of an itemset X
  - (Relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
  - An itemset X is frequent if X's support is no less than a minsup threshold
- Confidence (association rule X → Y)
  - conf = sup(X ∪ Y)/sup(X) (conditional probability: Pr(Y|X) = Pr(X ∧ Y)/Pr(X))
  - Confidence, c: the conditional probability that a transaction containing X also contains Y
- Goal: find all rules X → Y with minimum support and confidence
  - sup(X ∪ Y) ≥ minsup
  - sup(X ∪ Y)/sup(X) ≥ minconf
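A sketch of these two measures on the same five-transaction database (function names are mine, not from the slides):

```python
# Relative support and confidence for association rules (a sketch).
transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Beer"},
]

def support(itemset, db):
    """Relative support: fraction of transactions containing the itemset."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(X, Y, db):
    """conf(X -> Y) = sup(X u Y) / sup(X)."""
    return support(X | Y, db) / support(X, db)

# Rule {Beer} -> {Diaper}: sup(X u Y) = 4/5 = 0.8, conf = 0.8 / 0.8 = 1.0
print(support({"Beer", "Diaper"}, transactions))
print(confidence({"Beer"}, {"Diaper"}, transactions))
```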
Apriori Principle

- If an itemset is frequent, then all of its subsets must also be frequent
- Conversely, if an itemset is infrequent, then all of its supersets must be infrequent too; this is the contrapositive: (X → Y) ⇔ (¬Y → ¬X)

[Figure: the itemset lattice from null through A..E, AB..DE, up to ABCDE, with the border between frequent and infrequent itemsets marked]
Apriori: A Candidate Generation & Test Approach

- Initially, scan the DB once to get the frequent 1-itemsets
- Loop:
  - Generate length-(k+1) candidate itemsets from length-k frequent itemsets
  - Test the candidates against the DB
- Terminate when no frequent or candidate set can be generated
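The loop above can be sketched as follows; this is a minimal illustration, not an optimized implementation (the hash-tree counting and candidate layout of real Apriori are omitted):

```python
from itertools import combinations

def apriori(db, minsup):
    """Generate-and-test loop (a sketch): start from frequent 1-itemsets,
    then repeatedly join, prune, and count against the DB."""
    def count(itemset):
        return sum(1 for t in db if itemset <= t)

    freq = {}
    Lk = {frozenset([i]) for t in db for i in t}
    Lk = {c for c in Lk if count(c) >= minsup}
    k = 1
    while Lk:
        freq.update({c: count(c) for c in Lk})
        # Join: merge frequent k-itemsets that differ in one item.
        cands = {a | b for a in Lk for b in Lk if len(a | b) == k + 1}
        # Prune: every k-subset of a surviving candidate must be frequent.
        cands = {c for c in cands
                 if all(frozenset(s) in Lk for s in combinations(c, k))}
        # Test: one DB scan over the surviving candidates.
        Lk = {c for c in cands if count(c) >= minsup}
        k += 1
    return freq

db = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Beer"},
]
print(apriori(db, 3))
```

On the example database with minsup = 3, this finds the four frequent items plus the single frequent pair {Beer, Diaper}.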
Generate Candidate Itemsets: Example

Frequent 3-itemsets:
{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4}, {2, 3, 5} and {3, 4, 5}

- Candidate 4-itemsets:
{1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 4, 5}, {1, 3, 4, 5}, {2, 3, 4, 5}
- Which need not be counted?
{1, 2, 4, 5}, {1, 3, 4, 5} and {2, 3, 4, 5}: each has an infrequent 3-subset (e.g. {2, 4, 5} or {1, 4, 5}), so the Apriori principle prunes them before any DB scan.
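The join-and-prune reasoning of this example can be reproduced mechanically (a sketch over the slide's 3-itemsets):

```python
from itertools import combinations

# The frequent 3-itemsets from the slide.
L3 = [frozenset(s) for s in
      [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4},
       {1, 3, 5}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}]]

# Join step: unite pairs of frequent 3-itemsets into 4-itemset candidates.
C4 = {a | b for a in L3 for b in L3 if len(a | b) == 4}

# Prune step: keep a candidate only if all of its 3-subsets are frequent.
survivors = {c for c in C4
             if all(frozenset(s) in L3 for s in combinations(c, 3))}

print(sorted(map(sorted, C4)))         # the five candidates from the slide
print(sorted(map(sorted, survivors)))  # [[1, 2, 3, 4], [1, 2, 3, 5]]
```

Only {1, 2, 3, 4} and {1, 2, 3, 5} survive to be counted against the database.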
Maximal vs Closed Frequent Itemsets

- An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X
- An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X
- Maximal Frequent Itemsets ⊆ Closed Frequent Itemsets ⊆ Frequent Itemsets
- Closed frequent itemsets are lossless: the support of any frequent itemset can be deduced from the closed frequent itemsets

[Figure: nested sets showing the maximal frequent itemsets inside the closed frequent itemsets inside all frequent itemsets]
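The two definitions translate directly into a check over a support table; a sketch (the toy input and function name are mine, not the slide's lattice):

```python
def closed_and_maximal(freq):
    """Given freq: dict mapping frequent itemset (frozenset) -> support,
    return the closed and the maximal frequent itemsets (a sketch)."""
    closed, maximal = set(), set()
    for X, sup_x in freq.items():
        frequent_supersets = [Y for Y in freq if X < Y]
        # Closed: no frequent super-pattern has the same support.
        if all(freq[Y] < sup_x for Y in frequent_supersets):
            closed.add(X)
        # Maximal: no frequent super-pattern at all.
        if not frequent_supersets:
            maximal.add(X)
    return closed, maximal

f = frozenset
toy = {f("A"): 4, f("B"): 3, f("AB"): 3}  # toy input: AB absorbs B but not A
print(closed_and_maximal(toy))
```

Here {A, B} is closed and maximal; {A} is closed but not maximal (support 4 ≠ 3); {B} is neither, since {A, B} has the same support.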
Maximal vs Closed Frequent Itemsets (Example)

With minsup = 2 on the lattice below: # Closed = 9, # Maximal = 4.

[Figure: the itemset lattice from null to ABCDE, each node annotated with the TIDs of the transactions containing it; closed-and-maximal and closed-but-not-maximal nodes are marked]
Algorithms to Find Frequent Patterns

- Apriori: uses a generate-and-test approach: generates candidate itemsets and tests whether they are frequent
  - Generation of candidate itemsets is expensive (in both space and time)
  - Support counting is expensive
    - Subset checking (computationally expensive)
    - Multiple database scans (I/O)
- FP-Growth: allows frequent itemset discovery without candidate generation. Two steps:
  1. Build a compact data structure called the FP-tree (2 passes over the database)
  2. Extract frequent itemsets directly from the FP-tree (by traversing the FP-tree)
Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation

- The FP-Growth approach:
  - Depth-first search (Apriori: breadth-first search)
  - Avoids explicit candidate generation

FP-tree construction:
- Scan the DB once, find the frequent 1-itemsets (single-item patterns)
- Sort the frequent items in frequency-descending order: the f-list
- Scan the DB again, construct the FP-tree

FP-Growth approach:
- For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
- Repeat the process on each newly created conditional FP-tree
- Stop when the resulting FP-tree is empty, or contains only one path: a single path generates all the combinations of its sub-paths, each of which is a frequent pattern
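The two-scan construction can be sketched as follows; this is a minimal illustration (class and variable names are mine, node links for the header table are omitted, and ties among equal-count items in the f-list may be ordered differently than on the slides):

```python
from collections import Counter

class Node:
    """One FP-tree node: an item, a count, a parent link, and children."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_fp_tree(db, minsup):
    """Two DB scans: build the f-list, then insert sorted transactions."""
    # Scan 1: count items; the f-list is the frequent items in
    # frequency-descending order.
    counts = Counter(i for t in db for i in t)
    flist = [i for i, c in counts.most_common() if c >= minsup]
    rank = {i: r for r, i in enumerate(flist)}
    # Scan 2: insert each transaction's frequent items in f-list order,
    # sharing prefixes with previously inserted transactions.
    root = Node(None, None)
    for t in db:
        node = root
        for item in sorted((i for i in t if i in rank), key=rank.get):
            node = node.children.setdefault(item, Node(item, node))
            node.count += 1
    return root, flist

# The running example from the later slides (items per transaction).
db = [list("fcamp"), list("fcabm"), list("fb"), list("cbp"), list("fcamp")]
root, flist = build_fp_tree(db, 3)
print(flist)
```

On this database the root has two children, f:4 and c:1, with the shared prefix path f → c → a carrying counts 4, 3, 3, matching the tree on the later slides.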
FP-tree Size

- The size of an FP-tree is typically smaller than the size of the uncompressed data, because many transactions often share items in common
- Best-case scenario: all transactions have the same set of items, and the FP-tree contains only a single branch of nodes
- Worst-case scenario: every transaction has a unique set of items; since no transactions share any items, the size of the FP-tree is effectively the same as the size of the original data
- The size of an FP-tree also depends on how the items are ordered
Example

- FP-tree with item-descending ordering
- FP-tree with item-ascending ordering
Find Patterns Having p From p's Conditional Database

- Start at the frequent-item header table of the FP-tree
- Traverse the FP-tree by following the links of each frequent item p
- Accumulate all the transformed prefix paths of item p to form p's conditional pattern base

Header table:

Item | frequency
f    | 4
c    | 4
a    | 3
b    | 3
m    | 3
p    | 3

Conditional pattern bases:

item | conditional pattern base
c    | f:3
a    | fc:3
b    | fca:1, f:1, c:1
m    | fca:2, fcab:1
p    | fcam:2, cb:1

[Figure: the FP-tree; root {} with branches f:4 → c:3 → a:3 → (m:2 → p:2 and b:1 → m:1), f:4 → b:1, and c:1 → b:1 → p:1, with header-table links into the nodes]
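The table of conditional pattern bases can be reproduced by collecting, for each item, the prefix of every sorted transaction in which it appears; this is equivalent to accumulating the transformed prefix paths found via the header-table links (a sketch; the f-list order f, c, a, b, m, p is taken from the slide):

```python
from collections import Counter

flist = ["f", "c", "a", "b", "m", "p"]        # the slide's f-list
rank = {i: r for r, i in enumerate(flist)}
db = [list("fcamp"), list("fcabm"), list("fb"), list("cbp"), list("fcamp")]

def conditional_pattern_base(item, db):
    """Count, per distinct prefix path, how often it precedes `item`."""
    base = Counter()
    for t in db:
        path = sorted((i for i in t if i in rank), key=rank.get)
        if item in path:
            prefix = tuple(path[: path.index(item)])
            if prefix:
                base[prefix] += 1
    return base

print(conditional_pattern_base("m", db))  # fca:2, fcab:1 as in the m-row
```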
FP-Growth

Sorted transaction database (TID: items in f-list order):

1: f, c, a, m, p
2: f, c, a, b, m
3: f, b
4: c, b, p
5: f, c, a, m, p

Projecting the database on each suffix item gives the conditional (projected) databases:

+ p: 1: f, c, a, m | 4: c, b | 5: f, c, a, m
+ m: 1: f, c, a | 2: f, c, a, b | 5: f, c, a
+ b: 2: f, c, a | 3: f | 4: c
+ a: 1: f, c | 2: f, c | 5: f, c
FP-Growth (continued)

Recursing on each conditional database, labelled (1)-(6):

(1) + p: 1: f, c, a, m | 4: c, b | 5: f, c, a, m
(2) + m: 1: f, c, a | 2: f, c, a, b | 5: f, c, a
(3) + b: 2: f, c, a | 3: f | 4: c
(4) + a: 1: f, c | 2: f, c | 5: f, c
(5) + c: 1: f | 2: f | 5: f
(6) f: 1, 2, 3, 5
The corresponding conditional FP-trees, labelled (1)-(6) as before:

(1) Full tree: {} → f:4 → c:3 → a:3 → (m:2 → p:2; b:1 → m:1); f:4 → b:1; {} → c:1 → b:1 → p:1
(2) + p: {} → f:2 → c:2 → a:2 → m:2; {} → c:1 → b:1
(3) + m: {} → f:3 → c:3 → a:3 (→ b:1)
(4) + b: {} → f:2 (→ c:1 → a:1); {} → c:1
(5) + a: {} → f:3 → c:3
(6) + c: {} → f:3 (and f itself: f:4)
FP-Growth: Resulting Frequent Patterns (min_sup = 3)

Mining each conditional database (same labels as before) yields the complete set of frequent patterns. For example, in the p-conditional database only c survives min_sup = 3 (TIDs 1, 4, 5), giving cp: 3; in the m-conditional database f, c, a all appear 3 times, so every sub-combination with suffix m is frequent.

+ p: p: 3, cp: 3
+ m: m: 3, fm: 3, cm: 3, am: 3, fcm: 3, fam: 3, cam: 3, fcam: 3
+ b: b: 3
+ a: a: 3, fa: 3, ca: 3, fca: 3
+ c: c: 4, fc: 3
f: 4
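The pattern list above can be cross-checked by brute-force enumeration (a sketch, not FP-growth itself; it enumerates every itemset over the f-list order f, c, a, b, m, p and keeps those meeting min_sup = 3):

```python
from itertools import combinations

db = [set("fcamp"), set("fcabm"), set("fb"), set("cbp"), set("fcamp")]
min_sup = 3

frequent = {}
for k in range(1, 7):
    for cand in combinations("fcabmp", k):  # subsequences keep f-list order
        sup = sum(1 for t in db if set(cand) <= t)
        if sup >= min_sup:
            frequent["".join(cand)] = sup

print(len(frequent))  # 18 patterns, matching the slide's list
print(frequent["fcam"], frequent["cp"], frequent["f"])
```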