Ch6: Knowledge Representation Using Rules
Procedural vs. Declarative Knowledge
Logic Programming
Forward vs. backward reasoning
Matching
Control knowledge
Slide 1
Procedural vs. Declarative Knowledge
Declarative representation
– Knowledge is specified, but how that knowledge is to be used is not.
– Needs a program that specifies what is to be done with the knowledge and how.
– Example:
• Logical assertions and a resolution theorem prover
– A different way: logical assertions can be viewed as a program, rather than as data to a program.
=> Logical assertions = procedural representation of knowledge
Slide 2
Procedural vs. Declarative Knowledge
Procedural representation
– The control information necessary to use the knowledge is considered to be embedded in the knowledge itself.
– Needs an interpreter that follows the instructions given in the knowledge.
– The real difference between the declarative and the procedural views of knowledge lies in where control information resides.
• Kowalski's equation: Algorithm = Logic + Control
Slide 3
Slide 4
Procedural knowledge
– Knowledge about "how to do something"; e.g., to determine whether Peter or Robert is older, first find their ages.
– Focuses on the tasks that must be performed to reach a particular objective or goal.
– Examples: procedures, rules, strategies, agendas, models.

Declarative knowledge
– Knowledge "that something is true or false"; e.g., a car has four tyres; Peter is older than Robert.
– Refers to representations of objects and events; knowledge about facts and relationships.
– Examples: concepts, objects, facts, propositions, assertions, semantic nets, logic, and descriptive models.
Procedural vs. Declarative Knowledge
The real difference between the declarative and the procedural views of knowledge lies in where control information resides. Example:
man(Marcus)
man(Caesar)
person(Cleopatra)
∀x: man(x) → person(x)
?- person(y)
y is to be bound to a particular value for which person is true. Our knowledge base justifies any of the following answers:
y = Marcus
y = Caesar
y = Cleopatra
Slide 5
Procedural vs. Declarative Knowledge
• Because more than one value satisfies the predicate, but only one value is needed, the answer depends on the order in which the assertions are examined.
• Declarative assertions do not themselves say how they will be examined.
• Viewed declaratively, y = Cleopatra is as good an answer as any.
• Viewed procedurally, the answer is Marcus. This happens because the first statement that matches the person goal is the inference rule ∀x: man(x) → person(x).
Slide 6
Procedural vs. Declarative Knowledge
• This rule sets up a subgoal to find a man. The statements are again examined from the beginning, and now Marcus is found to satisfy the subgoal and thus also the goal.
• So Marcus is reported as the answer.
• There is no clear-cut answer as to whether declarative or procedural knowledge representation frameworks are better.
Slide 7
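Written as a Prolog program, the same assertions carry their control explicitly. The following is a minimal sketch (SWI-Prolog syntax); the clause order is an assumption chosen so that the procedural reading returns Marcus first, i.e. the rule precedes the Cleopatra fact:

man(marcus).
man(caesar).
person(X) :- man(X).   % the rule is examined before the fact below
person(cleopatra).

% ?- person(Y).
% Y = marcus ;         % the rule fires first and finds man(marcus)
% Y = caesar ;
% Y = cleopatra.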
Logic Programming
• Logic programming is a programming language paradigm in which logical assertions are viewed as programs.
• A PROLOG program is described as a series of logical assertions, each of which is a Horn clause.
Prolog program = {Horn clauses}
– Horn clause: a disjunction of literals of which at most one is a positive literal.
p, ¬p ∨ q, and p → q are Horn clauses.
=> A Prolog program is decidable.
– Control structure: Prolog interpreter = backward reasoning + depth-first search with backtracking
Slide 8
Logic Programming
Logic:
∀X: pet(X) ∧ small(X) → apartmentpet(X)
∀X: cat(X) ∨ dog(X) → pet(X)
∀X: poodle(X) → dog(X) ∧ small(X)
poodle(fluffy)

Prolog:
apartmentpet(X) :- pet(X), small(X).
pet(X) :- cat(X).
pet(X) :- dog(X).
dog(X) :- poodle(X).
small(X) :- poodle(X).
poodle(fluffy).
Slide 9
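Loading the Prolog program above and querying it (a sketch of an SWI-Prolog session):

% ?- apartmentpet(X).
% X = fluffy.   % poodle(fluffy) gives dog and small, hence pet and small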
Logic Programming
Prolog vs. Logic
– Quantification is provided implicitly by the way the variables are interpreted.
• Variables: begin with an UPPERCASE letter
• Constants: begin with a lowercase letter or a number
– There is an explicit symbol for AND (,), but there is none for OR. Instead, a disjunction must be represented as a list of alternative statements.
– "p implies q" is written as q :- p.
Slide 10
Logic Programming
Logical negation cannot be represented explicitly in pure Prolog.
– Example: ∀x: ¬dog(x) → cat(x)
=> problem-solving strategy: NEGATION AS FAILURE
?- cat(fluffy). => false, because Prolog is unable to prove that Fluffy is a cat.
Negation as failure requires the CLOSED WORLD ASSUMPTION, which states that all relevant, true assertions are contained in our knowledge base or are derivable from assertions that are so contained.
Slide 11
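In Prolog, negation as failure is written \+ Goal, which succeeds exactly when Goal cannot be proved. A minimal sketch against the fluffy knowledge base from the earlier slide:

dog(X) :- poodle(X).
poodle(fluffy).

% Anything that is not provably a dog is a cat (negation as failure).
cat(X) :- \+ dog(X).

% ?- cat(fluffy).
% false.   % dog(fluffy) is provable, so \+ dog(fluffy) fails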
Forward vs. Backward Reasoning
• Reason backward from the goal states: Begin building a tree of move sequences that might be solutions by starting with the goal configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose right sides match the root node. Use the left sides of the rules to generate the nodes at this second level of the tree. Generate the next level of the tree by taking each node at the previous level and finding all the rules whose right sides match it. Then use the corresponding left sides to generate the new nodes. Continue until a node that matches the initial state is generated. This method of reasoning backward from the desired final state is often called goal-directed reasoning.
Slide 12
Forward vs. Backward Reasoning
Forward: from the start states.
Backward: from the goal states.
• Reason forward from the initial states: Begin building a tree of move sequences that might be solutions by starting with the initial configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose left sides match the root node, and use their right sides to create the new configurations. Continue until a configuration that matches the goal state is generated.
• Reason backward from the goal states: Begin building a tree of move sequences that might be solutions by starting with the goal configurations at the root of the tree. Generate the next level of the tree by finding all the rules whose right sides match the root node.
Slide 13
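Forward chaining is simple enough to sketch directly. The following minimal forward chainer (SWI-Prolog assumed; rule/2 and forward/2 are names invented here) uses the rules that also appear in Fig. 6 later in this chapter: A → D, C & D → F, F & B → Z.

:- use_module(library(lists)).

% Rules as BodyList-Head pairs.
rule([a], d).
rule([c, d], f).
rule([f, b], z).

% forward(+Facts, -Closure): fire any rule whose body is satisfied
% and whose head is new, until nothing more can be added.
forward(Facts, Facts) :-
    \+ ( rule(Body, Head),
         subset(Body, Facts),
         \+ member(Head, Facts) ), !.
forward(Facts, Closure) :-
    rule(Body, Head),
    subset(Body, Facts),
    \+ member(Head, Facts),
    forward([Head|Facts], Closure).

% ?- forward([a, b, c, e, g, h], F).
% F = [z, f, d, a, b, c, e, g, h].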
Forward vs. Backward Reasoning
Four factors influence whether to reason forward or backward:
– Move from the smaller set of states to the larger set of states.
– Proceed in the direction with the lower branching factor.
– Proceed in the direction that corresponds more closely with the way the user will think.
– Proceed in the direction that corresponds more closely with the way the problem-solving episodes will be triggered.
Slide 14
Forward vs. Backward Reasoning
To encode the knowledge for reasoning, we need two kinds of rules:
– Forward rules: to encode knowledge about how to respond to certain inputs.
– Backward rules: to encode knowledge about how to achieve particular goals.
Slide 15
KR Using Rules
IF . . . THEN rules
ECA (Event–Condition–Action) rules
APPLICATIONS / EXAMPLES:
1. If a flammable liquid was spilled, call the fire department.
2. If the pH of the spill is less than 6, the spill material is an acid.
3. If the spill material is an acid, and the spill smells like vinegar, the spill material is acetic acid.
(arrows (→) are used to represent rules)
[Fig. 1: The rule interpreter cycles through a match–execute sequence: facts are matched against rules, and matched rules are executed.]
[Fig. 2: Rule execution can modify the facts in the knowledge base. The fact "the pH of the spill is < 6" matches the rule "if the pH of the spill is less than 6, the spill material is an acid"; executing it adds the new fact "the spill material is an acid" to the KB.]
[Fig. 3: Facts added by rules can match rules. The derived fact "the spill material is an acid", together with "the spill smells like vinegar", matches the rule "if the spill material is an acid and the spill smells like vinegar, the spill material is acetic acid".]
Fig.3 Facts added by rules can match rules
FACTS
A flammable
liquid was sp
illed
The pH of the
spill is < 6
Spill smells l
ike vinegar
MATCH
EXECUTE
If a flammable liquid was spilled, call the fi
re department
RULES
Fig.4 Rule execution can affect the real world
Fire d
ept is
called
[Fig. 5: Inference chain for inferring the spill material: "the pH of the spill is < 6" → "the spill material is an acid"; that fact plus "the spill smells like vinegar" → "the spill material is acetic acid".]
[Fig. 6: An example of forward chaining with the rules F & B → Z, C & D → F, A → D and initial facts {A, B, C, E, G, H}. Three match–execute cycles fire A → D (adding D), then C & D → F (adding F), then F & B → Z (adding Z).]
[Fig. 7: Inference chain produced by Fig. 6: A → D; C, D → F; F, B → Z.]
[Fig. 8: An example of backward chaining with the same rules and facts over eight steps. Want Z: Z is not among the facts, so by F & B → Z we need F and B; B is present but F is not, so by C & D → F we need C and D; C is present but D is not, so by A → D we need A; A is present, so A → D executes, then C & D → F, then F & B → Z, establishing Z.]
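Backward chaining as traced in Fig. 8 is exactly what a Prolog interpreter does natively; writing the same rules as clauses makes the goal-to-subgoal descent automatic (a sketch):

% The Fig. 6/8 rules as ordinary clauses; propositions become atoms.
d :- a.
f :- c, d.
z :- f, b.

a.  b.  c.  e.  g.  h.

% ?- z.
% true.   % z needs f and b; f needs c and d; d needs a; a is a fact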
Matching
How do we extract, from the entire collection of rules, those that can be applied at a given point?
=> Match the current state against the preconditions of the rules.
Indexing
• One way to select applicable rules is to do a simple search through all the rules, comparing each one's preconditions to the current state and extracting all the ones that match. But there are two problems with this simple solution:
• It will be necessary to use a large number of rules, and scanning through all of them at every step of the search would be hopelessly inefficient.
• It is not always immediately obvious whether a rule's preconditions are satisfied by a particular state.
Slide 25
Matching
Indexing
– A large number of rules => too slow to find a matching rule.
– Indexing: use the current state as an index into the rules and select the matching ones immediately.
– There is a trade-off between the ease of writing rules (high-level descriptions) and the simplicity of the matching process.
Slide 26
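The indexing idea can be sketched in Prolog (rule/3, candidates/2, and the fact names are invented for illustration): key each rule by the kind of fact it consumes, so candidate rules are retrieved by one indexed lookup instead of a scan. Prolog itself indexes clauses on the functor and first argument, so the findall below does not examine unrelated rules:

% rule(IndexKey, Body, Head): the key is the type of triggering fact.
rule(ph_reading,  [ph_less_than_6], spill_is_acid).
rule(spill_smell, [spill_is_acid, smells_of_vinegar], spill_is_acetic_acid).

% candidates(+FactType, -Rules): indexed retrieval of applicable rules.
candidates(FactType, Rules) :-
    findall(Body-Head, rule(FactType, Body, Head), Rules).

% ?- candidates(ph_reading, Rs).
% Rs = [[ph_less_than_6]-spill_is_acid].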
Matching
– RETE gains efficiency from three major sources:
– The temporal nature of data. Rules usually do not alter the state description radically; instead, a rule adds one or two elements, or deletes one or two, while the rest of the state remains the same. RETE maintains a network of rule conditions, and it uses changes in the state description to determine which new rules might apply.
– Structural similarity in rules. E.g., one rule concludes jaguar(x) if mammal(x), feline(x), carnivorous(x), and has-spots(x). Another rule concludes tiger(x) and is identical to the first rule except that it replaces has-spots with has-stripes. If the two rules are matched independently, a lot of work is repeated unnecessarily. RETE stores the rules so that they share structures in memory; sets of conditions that appear in several rules are matched once per cycle.
Slide 27
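The structure-sharing point can be sketched by factoring the shared condition prefix into a single predicate, so it is written once and, in a RETE network, would be matched once. This shows only the sharing idea, not RETE's incremental caching of partial matches; the animal names are invented:

% Shared prefix of the jaguar and tiger rules, factored out once.
feline_carnivore(X) :-
    mammal(X), feline(X), carnivorous(X).

jaguar(X) :- feline_carnivore(X), has_spots(X).
tiger(X)  :- feline_carnivore(X), has_stripes(X).

mammal(rajah).  feline(rajah).  carnivorous(rajah).  has_stripes(rajah).
mammal(jag).    feline(jag).    carnivorous(jag).    has_spots(jag).

% ?- tiger(rajah).
% true.
% ?- jaguar(jag).
% true.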
Matching
– Persistence of variable binding consistency: while all the individual preconditions of a rule might be met, there may be variable binding conflicts that prevent the rule from firing. Example: son(Mary, Joe) and son(Bill, Bob).
The individual preconditions of the rule
son(x, y) ∧ son(y, z) → grandparent(x, z)
can be matched, but not in a manner that satisfies the constraint imposed by the variable y.
Slide 28
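The binding conflict is easy to reproduce (a sketch; son(P, C) is read as "the son of P is C"):

son(mary, joe).
son(bill, bob).

% grandparent(X, Z) holds if X has a son Y who in turn has a son Z.
grandparent(X, Z) :- son(X, Y), son(Y, Z).

% ?- grandparent(X, Z).
% false.   % each precondition matches on its own, but no single
%          % binding of Y satisfies both: joe and bob have no sons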
Matching
Approximate matching
– Rules should be applied if their preconditions approximately match the current situation.
– Example: a speech-understanding program
• Rules: map a description of a physical waveform to phones (a, e, …)
• Physical signal: varies with differences in the way individuals speak, background noise, …
Slide 29
Matching
ELIZA (Weizenbaum, 1966):
Person: Men are all alike.
ELIZA: In what way?
Person: They're always bugging us about something or other.
ELIZA: Can you think of a specific example?
Person: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here.
Person: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Slide 30
Matching
ELIZA (Weizenbaum, 1966):
Person: It's true. I am unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
Person: I need some help, that much seems certain.
ELIZA: What would it mean to you if you got some help?
Person: Perhaps I could learn to get along with my brother.
ELIZA: Tell me more about your family.
....
Slide 31
Matching
ELIZA (Weizenbaum, 1966):
(X me Y) → (X you Y)
(I remember X) → (Why do you remember X just now?)
(My {family-member} is Y) → (Who else in your family is Y?)
(X {family-member} Y) → (Tell me more about your family)
....
Person: My brother is mean to me.
ELIZA: Who else in your family is mean to you?
....
Slide 32
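A minimal sketch of one ELIZA rule in Prolog (sentences as lists of atoms; respond/2, reflect/2, and the family_member/1 table are names invented here). It matches "my {family-member} is Y" and swaps first-person words in Y, so "mean to me" comes back as "mean to you":

family_member(brother).
family_member(sister).
family_member(father).
family_member(mother).

% First-person words are mirrored back in the second person.
swap(me, you).
swap(i, you).
swap(my, your).

reflect([], []).
reflect([W|Ws], [V|Vs]) :-
    ( swap(W, V) -> true ; V = W ),
    reflect(Ws, Vs).

% (My {family-member} is Y) --> (Who else in your family is Y?)
respond([my, F, is | Y], [who, else, in, your, family, is | R]) :-
    family_member(F),
    reflect(Y, R).

% ?- respond([my, brother, is, mean, to, me], Out).
% Out = [who, else, in, your, family, is, mean, to, you].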
Matching
Conflict resolution:
The result of the matching process is a list of rules whose antecedents are satisfied; conflict resolution decides which of them to fire, using:
– Preferences based on rules:
• Specificity of rules
• Physical order of rules
– Preferences based on objects:
• Importance of objects
• Position of objects
– Preferences based on actions:
• Evaluation of states
Slide 33
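One of these preferences, specificity, can be sketched on top of the rule/2 representation from the forward-chaining sketch earlier (most_specific/2 is a name invented here): among the applicable rules, prefer one with the most conditions:

:- use_module(library(lists)).

rule([a], d).
rule([c, d], f).
rule([f, b], z).

% most_specific(+Facts, -Rule): of the rules whose bodies are
% satisfied, return one with the greatest number of conditions.
most_specific(Facts, Body-Head) :-
    findall(N-(B-H),
            ( rule(B, H), subset(B, Facts), length(B, N) ),
            Applicable),
    sort(1, @>=, Applicable, [_-(Body-Head)|_]).   % longest body first

% ?- most_specific([a, c, d], R).
% R = [c, d]-f.   % preferred over [a]-d, which has fewer conditions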
Control Knowledge
Knowledge about which paths are most likely to lead quickly to a goal state is often called search control knowledge. It includes:
– which states are preferable to others;
– which rule to apply in a given situation;
– the order in which to pursue subgoals;
– useful sequences of rules to apply.
Search control knowledge = meta-knowledge
Slide 34
Control Knowledge
A number of AI systems represent their control knowledge with rules, for example SOAR and PRODIGY.
SOAR is a general architecture for building intelligent systems.
Slide 35
Control Knowledge
PRODIGY is a general-purpose problem-solving system that incorporates several different learning mechanisms.
It can acquire control rules in a number of ways:
– through hand coding by programmers;
– through a static analysis of the domain's operators;
– through looking at traces of its own problem-solving behavior.
PRODIGY learns control rules from its experience, but unlike SOAR it learns from its failures.
When PRODIGY pursues an unfruitful path, it tries to come up with an explanation of why that path failed. It then uses that explanation to build control knowledge that will help it avoid fruitless search paths in the future.
Slide 36
Control Knowledge
Two issues concerning control rules:
• The first issue is called the utility problem. As we add more and more control knowledge to a system, the system is able to search more judiciously; but if there are many control rules, simply matching them all can be very time consuming.
• The second issue concerns the complexity of the production system interpreter.
Slide 37