Inference in First-Order Logic
Artificial Intelligence: Inference in First-Order Logic
A Brief History of Reasoning
450B.C. Stoics propositional logic, inference (maybe)
322B.C. Aristotle “syllogisms” (inference rules), quantifiers
1565 Cardano probability theory (propositional logic + uncertainty)
1847 Boole propositional logic (again)
1879 Frege first-order logic
1922 Wittgenstein proof by truth tables
1930 Gödel ∃ complete algorithm for FOL
1930 Herbrand complete algorithm for FOL (reduce to propositional)
1931 Gödel ¬∃ complete algorithm for arithmetic systems
1960 Davis/Putnam “practical” algorithm for propositional logic
1965 Robinson “practical” algorithm for FOL—resolution
Artificial Intelligence: Inference in First-Order Logic 5 March 2024
The Story So Far
●Propositional logic
●Subset of propositional logic: Horn clauses
●Inference algorithms
– forward chaining
– backward chaining
– resolution (for full propositional logic)
●First order logic (FOL)
– variables
– functions
– quantifiers
– etc.
●Today: inference for first order logic
Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
Outline
●Reducing first-order inference to propositional inference
●Unification
●Generalized Modus Ponens
●Forward and backward chaining
●Logic programming
●Resolution
reduction to propositional inference
Universal Instantiation
●Every instantiation of a universally quantified sentence is entailed by it:
∀ v α
SUBST({v/g}, α)
for any variable v and ground term g
●E.g., ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x) yields
King(John) ∧ Greedy(John) =⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) =⇒ Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) =⇒ Evil(Father(John))
⋮
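As a sketch, universal instantiation is just a substitution applied recursively through a sentence. The nested-tuple representation below is illustrative, not part of the slides:

```python
def subst(theta, sentence):
    """Apply a substitution {v: g} recursively through a nested-tuple sentence."""
    if isinstance(sentence, tuple):
        return tuple(subst(theta, part) for part in sentence)
    return theta.get(sentence, sentence)

# ∀x King(x) ∧ Greedy(x) =⇒ Evil(x), with the quantifier stripped:
rule = ('=>', ('and', ('King', 'x'), ('Greedy', 'x')), ('Evil', 'x'))

print(subst({'x': 'John'}, rule))
# ('=>', ('and', ('King', 'John'), ('Greedy', 'John')), ('Evil', 'John'))

# Any ground term may be substituted for the variable, e.g. Father(John):
print(subst({'x': ('Father', 'John')}, ('Greedy', 'x')))
# ('Greedy', ('Father', 'John'))
```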
Existential Instantiation
●For any sentence α, variable v, and constant symbol k
that does not appear elsewhere in the knowledge base:
∃ v α
SUBST({v/k}, α)
●E.g., ∃ x Crown(x) ∧ OnHead(x, John) yields
Crown(C1) ∧ OnHead(C1, John)
provided C1 is a new constant symbol, called a Skolem constant
Instantiation
●Universal Instantiation
– can be applied several times to add new sentences
– the new KB is logically equivalent to the old
●Existential Instantiation
– can be applied once to replace the existential sentence
– the new KB is not equivalent to the old
– but is satisfiable iff the old KB was satisfiable
Reduction to Propositional Inference
●Suppose the KB contains just the following:
∀ x King(x) ∧ Greedy(x) =⇒ Evil(x)
King(John)
Greedy(John)
Brother(Richard, John)
●Instantiating the universal sentence in all
possible ways, we have
King(John) ∧ Greedy(John) =⇒ Evil(John)
King(Richard) ∧ Greedy(Richard) =⇒ Evil(Richard)
King(John)
Greedy(John)
Brother(Richard, John)
●The new KB is propositionalized: proposition symbols are King(John),
Greedy(John), Evil(John), Brother(Richard, John), etc.
Reduction to Propositional Inference
●Claim: a ground sentence is entailed by new KB iff entailed by original KB
●Claim: every FOL KB can be propositionalized so as to preserve entailment
●Idea: propositionalize KB and query, apply resolution, return result
●Problem: with function symbols, there are infinitely many ground terms,
e.g., Father(Father(Father(John)))
●Theorem: Herbrand (1930). If a sentence α is entailed by an FOL KB,
it is entailed by a finite subset of the propositional KB
●Idea: For n = 0 to ∞ do
create a propositional KB by instantiating with depth-n terms
see if α is entailed by this KB
●Problem: works if α is entailed, loops if α is not entailed
●Theorem: Turing (1936), Church (1936), entailment in FOL is
semidecidable
Practical Problems with Propositionalization
●Propositionalization seems to generate lots of irrelevant sentences.
●E.g., from ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x)
King(John)
∀ y Greedy(y)
Brother(Richard, John)
it seems obvious that Evil(John), but propositionalization produces lots of
facts such as Greedy(Richard) that are irrelevant
●With p k-ary predicates and n constants, there are p ⋅ n^k instantiations
●With function symbols, it gets much, much worse!
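The p ⋅ n^k blow-up can be made concrete by enumerating the instantiations; the predicate and constant names below are illustrative:

```python
from itertools import product

def ground_instances(predicates, constants, k):
    """Every way to instantiate k-ary predicates over the given constants."""
    return [(p, *args) for p in predicates for args in product(constants, repeat=k)]

# p = 2 binary predicates, n = 3 constants: p * n**k = 2 * 3**2 = 18
insts = ground_instances(['Knows', 'Sells'], ['John', 'Mary', 'Nono'], 2)
print(len(insts))    # 18
```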
unification
Plan
●We have the inference rule
– ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x)
●We have facts that (partially) match the precondition
– King(John)
– ∀ y Greedy(y)
●We need to match them up with substitutions: θ = {x/John, y/John} works
– unification
– generalized modus ponens
Unification
●UNIFY(α, β) = θ if αθ = βθ

p                 q                     θ
Knows(John, x)    Knows(John, Jane)     {x/Jane}
Knows(John, x)    Knows(y, Mary)        {x/Mary, y/John}
Knows(John, x)    Knows(y, Mother(y))   {y/John, x/Mother(John)}
Knows(John, x)    Knows(x, Mary)        fail

●Standardizing apart eliminates overlap of variables, e.g., Knows(z17, Mary)
Knows(John, x)    Knows(z17, Mary)      {z17/John, x/Mary}
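The table above can be reproduced with a compact unifier. This is an illustrative sketch, not the book's pseudocode: compound terms are tuples, lowercase strings are variables, and the occurs check is omitted:

```python
def is_var(t):
    """Variables are lowercase strings (x, y, z17); constants are capitalized."""
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta):
    """Return a substitution extending theta that makes x and y equal, or None."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify_var(x, y, theta)
    if is_var(y):
        return unify_var(y, x, theta)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
        return theta
    return None

def unify_var(v, t, theta):
    if v in theta:
        return unify(theta[v], t, theta)
    if is_var(t) and t in theta:
        return unify(v, theta[t], theta)
    return {**theta, v: t}

def walk(t, theta):
    """Apply theta repeatedly so chained bindings become fully resolved terms."""
    if is_var(t) and t in theta:
        return walk(theta[t], theta)
    if isinstance(t, tuple):
        return tuple(walk(a, theta) for a in t)
    return t

# The rows of the table:
print(unify(('Knows', 'John', 'x'), ('Knows', 'John', 'Jane'), {}))   # {'x': 'Jane'}
print(unify(('Knows', 'John', 'x'), ('Knows', 'y', 'Mary'), {}))      # {'y': 'John', 'x': 'Mary'}
theta = unify(('Knows', 'John', 'x'), ('Knows', 'y', ('Mother', 'y')), {})
print(walk('x', theta))                                               # ('Mother', 'John')
print(unify(('Knows', 'John', 'x'), ('Knows', 'x', 'Mary'), {}))      # None — fail
# Standardizing apart:
print(unify(('Knows', 'John', 'x'), ('Knows', 'z17', 'Mary'), {}))    # {'z17': 'John', 'x': 'Mary'}
```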
generalized modus ponens
Generalized Modus Ponens
●Generalized modus ponens used with KB of definite clauses
(exactly one positive literal)
●All variables assumed universally quantified
p1′, p2′, . . . , pn′, (p1 ∧ p2 ∧ . . . ∧ pn =⇒ q)
qθ
where pi′θ = piθ for all i
●Rule: King(x) ∧ Greedy(x) =⇒ Evil(x)
●Precondition of rule: p1 is King(x), p2 is Greedy(x)
●Implication: q is Evil(x)
●Facts: p1′ is King(John), p2′ is Greedy(y)
●Substitution: θ is {x/John, y/John}
⇒ Result of modus ponens: qθ is Evil(John)
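The example above can be sketched directly. This is an illustrative simplification: each premise is unified with its fact in a fixed order, and the compact unifier (tuples for compound terms, lowercase = variable, no occurs check) is not the book's pseudocode:

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def unify(x, y, theta):
    """Compact unifier; compound terms are tuples, occurs check omitted."""
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify(theta[x], y, theta) if x in theta else {**theta, x: y}
    if is_var(y):
        return unify(x, theta[y], theta) if y in theta else {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
        return theta
    return None

def subst(theta, t):
    if is_var(t):
        return subst(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return t

def gmp(facts, premises, conclusion):
    """Generalized Modus Ponens: unify each premise p_i with the fact p_i',
    accumulating theta, and return SUBST(theta, q)."""
    theta = {}
    for p, fact in zip(premises, facts):
        theta = unify(p, fact, theta)
        if theta is None:
            return None
    return subst(theta, conclusion)

premises = [('King', 'x'), ('Greedy', 'x')]
conclusion = ('Evil', 'x')
facts = [('King', 'John'), ('Greedy', 'y')]   # ∀y Greedy(y)

print(gmp(facts, premises, conclusion))       # ('Evil', 'John')
```

Note how unifying Greedy(x) with the universally quantified fact Greedy(y) forces y/John through the earlier binding x/John, giving the slide's θ = {x/John, y/John}.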
forward chaining
Example Knowledge
●The law says that it is a crime for an American to sell weapons to hostile
nations. The country Nono, an enemy of America, has some
missiles, and all of its missiles were sold to it by Colonel West, who is
American.
●Prove that Col. West is a criminal
Example Knowledge Base
●. . . it is a crime for an American to sell weapons to hostile nations:
American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) =⇒ Criminal(x)
●Nono . . . has some missiles, i.e., ∃ x Owns(Nono, x) ∧
Missile(x): Owns(Nono, M1) and Missile(M1)
●. . . all of its missiles were sold to it by Colonel West
Missile(x) ∧ Owns(Nono, x) =⇒ Sells(West, x, Nono)
●Missiles are weapons:
Missile(x) ⇒ Weapon(x)
●An enemy of America counts as “hostile”:
Enemy(x, America) =⇒ Hostile(x)
●West, who is American . . .
American(West)
●The country Nono, an enemy of America . . .
Enemy(Nono, America)
Forward Chaining Proof
(Step-by-step proof-tree figures omitted.)
(Note: ∀ x Missile(x) ∧ Owns(Nono, x) =⇒ Sells(West, x, Nono))
(Note: American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) =⇒ Criminal(x))
Properties of Forward Chaining
●Sound and complete for first-order definite clauses
(proof similar to propositional proof)
●Datalog (1977) = first-order definite clauses + no functions (e.g., crime example)
Forward chaining terminates for Datalog in poly iterations: at most p ⋅ n^k literals
●May not terminate in general if α is not entailed
●This is unavoidable: entailment with definite clauses is semidecidable
Efficiency of Forward Chaining
●Simple observation: no need to match a rule on iteration k
if a premise wasn’t added on iteration k − 1
=⇒ match each rule whose premise contains a newly
added literal
●Matching itself can be expensive
●Database indexing allows O(1) retrieval of known facts
e.g., query Missile(x) retrieves Missile(M1)
●Matching conjunctive premises against known facts is
NP-hard
●Forward chaining is widely used in deductive
databases
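The indexing idea can be sketched with a dict keyed on the predicate symbol, so a query like Missile(x) retrieves its candidate facts in O(1) average time instead of scanning the whole KB. The facts below are from the crime example; the structure is illustrative:

```python
from collections import defaultdict

facts = [('Missile', 'M1'), ('Owns', 'Nono', 'M1'), ('American', 'West')]

# Index every fact under its predicate symbol.
index = defaultdict(list)
for fact in facts:
    index[fact[0]].append(fact)

print(index['Missile'])   # [('Missile', 'M1')]
```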
Hard Matching Example
Diff(wa, nt) ∧ Diff(wa, sa) ∧ Diff(nt, q) ∧ Diff(nt, sa) ∧
Diff(q, nsw) ∧ Diff(q, sa) ∧ Diff(nsw, v) ∧ Diff(nsw, sa) ∧
Diff(v, sa) =⇒ Colorable()
Diff(Red, Blue)
Diff(Green, Red)
Diff(Blue, Red)
Diff(Red, Green)
Diff(Green, Blue)
Diff(Blue, Green)
●Colorable() is inferred iff the constraint satisfaction problem has a solution
●CSPs include 3SAT as a special case, hence matching is NP-hard
Forward Chaining Algorithm
function FOL-FC-ASK(KB, α) returns a substitution or false
  repeat until new is empty
    new ← { }
    for each sentence r in KB do
      (p1 ∧ . . . ∧ pn =⇒ q) ← STANDARDIZE-APART(r)
      for each θ such that (p1 ∧ . . . ∧ pn)θ = (p1′ ∧ . . . ∧ pn′)θ
          for some p1′, . . . , pn′ in KB
        q′ ← SUBST(θ, q)
        if q′ is not a renaming of a sentence already in KB or new then do
          add q′ to new
          φ ← UNIFY(q′, α)
          if φ is not fail then return φ
    add new to KB
  return false
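The algorithm can be sketched in runnable form, specialized to Datalog-style KBs (ground facts, no function symbols) so that matching a premise is simple pattern matching rather than full unification. This version saturates to a fixed point instead of stopping at the query; the representation is illustrative:

```python
# rules: (premises, conclusion) pairs; lowercase arguments are variables.
rules = [
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
     ('Criminal', 'x')),
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),
    ([('Missile', 'x')], ('Weapon', 'x')),
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),
]
facts = {('American', 'West'), ('Missile', 'M1'), ('Owns', 'Nono', 'M1'),
         ('Enemy', 'Nono', 'America')}

def match(pattern, fact, theta):
    """Match one premise against a ground fact, extending theta or failing."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    for p, f in zip(pattern[1:], fact[1:]):
        if p[0].islower():              # variable: bind, or check consistency
            if theta.get(p, f) != f:
                return None
            theta = {**theta, p: f}
        elif p != f:                    # constant: must match exactly
            return None
    return theta

def match_all(premises, kb, theta):
    """Yield every substitution satisfying all premises against the KB."""
    if not premises:
        yield theta
        return
    for fact in kb:
        t = match(premises[0], fact, theta)
        if t is not None:
            yield from match_all(premises[1:], kb, t)

def fc(facts, rules):
    """Naive forward chaining to a fixed point (terminates: no functions)."""
    kb = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            for theta in match_all(premises, kb, {}):
                q = tuple(theta.get(a, a) for a in conclusion)
                if q not in kb:
                    new.add(q)
        if not new:
            return kb
        kb |= new

derived = fc(facts, rules)
print(('Criminal', 'West') in derived)   # True
```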
backward chaining
Backward Chaining
●Start with query
●Check if it can be derived by given rules and facts
– apply rules that infer the query
– recurse over pre-conditions
Backward Chaining Example
(Step-by-step proof-tree figures omitted.)
Properties of Backward Chaining
●Depth-first recursive proof search: space is linear in size of proof
●Incomplete due to infinite loops
=⇒ fix by checking current goal against every goal on
stack
●Inefficient due to repeated subgoals (both success and failure)
=⇒ fix using caching of previous results (extra space!)
●Widely used (without improvements!) for logic programming
Backward Chaining Algorithm
function FOL-BC-ASK(KB, goals, θ) returns a set of substitutions
  inputs: KB, a knowledge base
          goals, a list of conjuncts forming a query (θ already applied)
          θ, the current substitution, initially the empty substitution { }
  local variables: answers, a set of substitutions, initially empty
  if goals is empty then return {θ}
  q′ ← SUBST(θ, FIRST(goals))
  for each sentence r in KB
      where STANDARDIZE-APART(r) = (p1 ∧ . . . ∧ pn =⇒ q)
      and θ′ ← UNIFY(q, q′) succeeds
    new goals ← [p1, . . . , pn|REST(goals)]
    answers ← FOL-BC-ASK(KB, new goals, COMPOSE(θ′, θ)) ∪ answers
  return answers
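A runnable sketch of the same idea on the crime KB, with facts encoded as clauses with empty premise lists. The renaming scheme and the compact unifier (which resolves bound variables before comparing) are illustrative simplifications, not the book's pseudocode:

```python
from itertools import count

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, t):
    if is_var(t):
        return subst(theta, theta[t]) if t in theta else t
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return t

def unify(x, y, theta):
    """Resolve both terms under theta first, so binding chains are handled."""
    if theta is None:
        return None
    x, y = subst(theta, x), subst(theta, y)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
        return theta
    return None

# The crime KB: (premises, conclusion); facts have no premises.
kb = [
    ([], ('American', 'West')),
    ([], ('Missile', 'M1')),
    ([], ('Owns', 'Nono', 'M1')),
    ([], ('Enemy', 'Nono', 'America')),
    ([('Missile', 'x')], ('Weapon', 'x')),
    ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono')),
    ([('Enemy', 'x', 'America')], ('Hostile', 'x')),
    ([('American', 'x'), ('Weapon', 'y'), ('Sells', 'x', 'y', 'z'), ('Hostile', 'z')],
     ('Criminal', 'x')),
]

fresh = count()

def std(t, i):
    """STANDARDIZE-APART: give a clause's variables a unique suffix per use."""
    if isinstance(t, tuple):
        return tuple(std(a, i) for a in t)
    return f'{t}#{i}' if is_var(t) else t

def bc_ask(kb, goals, theta):
    """Yield every substitution under which all goals can be proved."""
    if not goals:
        yield theta
        return
    first, rest = goals[0], goals[1:]
    for premises, conclusion in kb:
        i = next(fresh)
        t = unify(std(conclusion, i), first, theta)
        if t is not None:
            yield from bc_ask(kb, [std(p, i) for p in premises] + rest, t)

print(bool(list(bc_ask(kb, [('Criminal', 'West')], {}))))   # True
```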
logic programming
Logic Programming
●Computation as inference on logical KBs
Logic programming:
1. Identify problem
2. Assemble information
3. Tea break
4. Encode information in KB
5. Encode problem instance as facts
6. Ask queries
7. Find false facts

Ordinary programming:
1. Identify problem
2. Assemble information
3. Figure out solution
4. Program solution
5. Encode problem instance as data
6. Apply program to data
7. Debug procedural errors
●Should be easier to debug Capital(NewYork, US) than x := x + 2 !
Prolog
●Basis: backward chaining with Horn clauses + bells & whistles
●Widely used in Europe, Japan (basis of 5th Generation project)
●Compilation techniques ⇒ approaching a billion logical inferences per second
●Program = set of clauses = head :- literal1, . . . literaln.
criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z).
missile(m1).
owns(nono,m1).
sells(west,X,nono) :- missile(X), owns(nono,X).
weapon(X) :- missile(X).
hostile(X) :- enemy(X,america).
american(west).
enemy(nono,america).
(Note: in Prolog, capitalized names are variables, so constants like Nono and M1 must be written in lowercase.)
Prolog Systems
●Depth-first, left-to-right backward chaining
●Built-in predicates for arithmetic etc., e.g., X is Y*Z+3
●Closed-world assumption (“negation as failure”)
e.g., given alive(X) :- not dead(X).
the query alive(joe) succeeds if dead(joe) fails
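Negation as failure can be sketched for ground goals: the negated goal succeeds exactly when the goal cannot be proved from the KB (closed-world assumption). The KB below is illustrative:

```python
facts = {('person', 'joe')}          # note: dead(joe) appears nowhere in the KB

def not_provable(goal, facts):
    """Prolog-style negation as failure for a ground goal:
    succeed iff the goal is not derivable (here: not present)."""
    return goal not in facts

def alive(x, facts):
    return not_provable(('dead', x), facts)

print(alive('joe', facts))   # True: dead(joe) fails, so alive(joe) succeeds
```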
resolution
Resolution: Brief Summary
●Full first-order version:
l1 ∨ ⋯ ∨ lk,   m1 ∨ ⋯ ∨ mn
(l1 ∨ ⋯ ∨ li−1 ∨ li+1 ∨ ⋯ ∨ lk ∨ m1 ∨ ⋯ ∨ mj−1 ∨ mj+1 ∨ ⋯ ∨ mn)θ
where UNIFY(li, ¬mj) = θ.
●For example, resolving ¬Rich(x) ∨ Unhappy(x) with Rich(Ken)
yields Unhappy(Ken), with θ = {x/Ken}
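The example above can be sketched as a single binary resolution step. Clauses are lists of literals, a literal is an atom tuple or ('not', atom), and the compact unifier (no occurs check) is an illustrative simplification:

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, t):
    if is_var(t):
        return theta.get(t, t)
    if isinstance(t, tuple):
        return tuple(subst(theta, a) for a in t)
    return t

def unify(x, y, theta):
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify(theta[x], y, theta) if x in theta else {**theta, x: y}
    if is_var(y):
        return unify(x, theta[y], theta) if y in theta else {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            theta = unify(a, b, theta)
        return theta
    return None

def binary_resolve(c1, c2):
    """All binary resolvents of two clauses: for each complementary pair
    l_i, ¬m_j that unifies, drop the pair and apply theta to the rest."""
    resolvents = []
    for l1 in c1:
        for l2 in c2:
            if l1[0] == 'not' and l2[0] != 'not':
                theta = unify(l1[1], l2, {})
            elif l2[0] == 'not' and l1[0] != 'not':
                theta = unify(l1, l2[1], {})
            else:
                continue
            if theta is not None:
                rest = [l for l in c1 if l is not l1] + [l for l in c2 if l is not l2]
                resolvents.append([subst(theta, l) for l in rest])
    return resolvents

c1 = [('not', ('Rich', 'x')), ('Unhappy', 'x')]
c2 = [('Rich', 'Ken')]
print(binary_resolve(c1, c2))   # [[('Unhappy', 'Ken')]]
```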
●Apply resolution steps to CNF (KB ∧ ¬α); complete for FOL
Conversion to CNF
Everyone who loves all animals is loved by someone:
∀ x [∀ y Animal(y) =⇒ Loves(x, y)] =⇒ [∃ y Loves(y, x)]
1. Eliminate biconditionals and implications
∀ x [¬∀ y (¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)]
2. Move ¬ inwards: ¬∀ x p ≡ ∃ x ¬p, ¬∃ x p ≡ ∀ x ¬p:
∀ x [∃ y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)]
∀ x [∃ y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
∀ x [∃ y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
Conversion to CNF
3. Standardize variables: each quantifier should use a different one
∀ x [∃ y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ z Loves(z, x)]
4. Skolemize: a more general form of existential instantiation.
Each existential variable is replaced by a Skolem
function of the enclosing universally quantified
variables:
∀ x [Animal(F (x)) ∧ ¬Loves(x, F (x))] ∨ Loves(G(x), x)
5. Drop universal quantifiers:
[Animal(F (x)) ∧ ¬Loves(x, F (x))] ∨ Loves(G(x), x)
6. Distribute ∧ over ∨:
[Animal(F (x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F (x)) ∨
Loves(G(x), x)]
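Step 2 of the conversion (moving ¬ inwards) can be sketched as a recursive rewrite over formulas represented as nested tuples; atoms are kept as opaque strings, and the representation is illustrative:

```python
def nnf(f):
    """Push negations inward (negation normal form) using ¬¬p ≡ p,
    De Morgan's laws, ¬∀x p ≡ ∃x ¬p, and ¬∃x p ≡ ∀x ¬p."""
    if isinstance(f, str):                  # atom
        return f
    op = f[0]
    if op == 'not':
        g = f[1]
        if isinstance(g, str):
            return f                        # negated atom: already in NNF
        if g[0] == 'not':
            return nnf(g[1])
        if g[0] == 'and':
            return ('or', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'or':
            return ('and', nnf(('not', g[1])), nnf(('not', g[2])))
        if g[0] == 'forall':
            return ('exists', g[1], nnf(('not', g[2])))
        if g[0] == 'exists':
            return ('forall', g[1], nnf(('not', g[2])))
    if op in ('and', 'or'):
        return (op, nnf(f[1]), nnf(f[2]))
    return (op, f[1], nnf(f[2]))            # forall / exists

# ¬∀y (¬Animal(y) ∨ Loves(x,y)) — the negated subformula from step 2:
f = ('not', ('forall', 'y', ('or', ('not', 'Animal(y)'), 'Loves(x,y)')))
print(nnf(f))
# ('exists', 'y', ('and', 'Animal(y)', ('not', 'Loves(x,y)')))
```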
Our Previous Example
●Rules
– American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) =⇒ Criminal(x)
– Missile(M1) and Owns(Nono, M1)
– Missile(x) ∧ Owns(Nono, x) =⇒ Sells(West, x, Nono)
– Missile(x) ⇒ Weapon(x)
– Enemy(x, America) =⇒ Hostile(x)
– American(West)
– Enemy(Nono, America)
●Converted to CNF
– ¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x)
– Missile(M1) and Owns(Nono, M1)
– ¬Missile(x) ∨ ¬Owns(Nono, x) ∨ Sells(West, x, Nono)
– ¬Missile(x) ∨ Weapon(x)
– ¬Enemy(x, America) ∨ Hostile(x)
– American(West)
– Enemy(Nono, America)
●Query: ¬Criminal(West)
Resolution Proof
(Resolution proof-tree figure omitted.)

More Related Content

PDF
lecture-inference-in-first-order-logic.pdf
PDF
First Order Logic resolution
PPTX
Inference in First-Order Logic
PPT
Inference in first-order logic Reducing first-order
PDF
Ai lecture 10(unit03)
PPT
9.class-notesr9.ppt
PPTX
Knowledge Representation & Reasoning AI UNIT 3
PPT
Unit III Knowledge Representation in AI K.Sundar,AP/CSE,VEC
lecture-inference-in-first-order-logic.pdf
First Order Logic resolution
Inference in First-Order Logic
Inference in first-order logic Reducing first-order
Ai lecture 10(unit03)
9.class-notesr9.ppt
Knowledge Representation & Reasoning AI UNIT 3
Unit III Knowledge Representation in AI K.Sundar,AP/CSE,VEC

Similar to lecture-inference-in-first-order-logic.pptx (20)

PPT
Propositional and first-order logic different chapters
PPTX
Knowledge Representation and Reasoning.pptx
PPT
Logic.ppt
PDF
16_FirstOrderLogic.p_4_moduleModuleNotespdf
PPTX
Propositional logic(part 2)
PPTX
Module4_AI 4th semester engineering.pptx
PPT
10a.ppt
PPTX
First order logic
PPT
Propositional and first order logic - AI
PPTX
Jarrar: First Order Logic- Inference Methods
PPTX
Foundations of Knowledge Representation in Artificial Intelligence.pptx
PPTX
AI3391 Artificial intelligence Session 28 Resolution.pptx
PPT
chapter9.ppt
PPTX
Knowledge representation and Predicate logic
PPTX
Untitled presentation in first order logic .pptx
PPTX
logic part of where the ai takes place p
PPTX
AI_05_First Order Logic.pptx
PPTX
Unification and Lifting
PDF
Knowledge base artificial intelligence.pdf
PDF
first_order_logic.pdf
Propositional and first-order logic different chapters
Knowledge Representation and Reasoning.pptx
Logic.ppt
16_FirstOrderLogic.p_4_moduleModuleNotespdf
Propositional logic(part 2)
Module4_AI 4th semester engineering.pptx
10a.ppt
First order logic
Propositional and first order logic - AI
Jarrar: First Order Logic- Inference Methods
Foundations of Knowledge Representation in Artificial Intelligence.pptx
AI3391 Artificial intelligence Session 28 Resolution.pptx
chapter9.ppt
Knowledge representation and Predicate logic
Untitled presentation in first order logic .pptx
logic part of where the ai takes place p
AI_05_First Order Logic.pptx
Unification and Lifting
Knowledge base artificial intelligence.pdf
first_order_logic.pdf
Ad

More from RaghavendraPrasad179187 (10)

PPT
gmatrix distro_gmatrix distro_gmatrix distro
PPT
Project Planning and control in Software Engineering
PPT
spatial surveillance techniques in artificial intelligence
PPT
hillclimb algorithm for heuristic search
PPT
bayessian structures and its role in artificial intelligence
PPT
bayesian in artificial intelligence and search methods
PPT
HEURISTIC SEARCH IN ARTIFICIAL INTELLEGENCE
PPT
Dijkstra_Algorithm with illustarted example
PPTX
Linked list data structures and algorithms
PPT
Heuristic Search Algorithm in AI and its Techniques
gmatrix distro_gmatrix distro_gmatrix distro
Project Planning and control in Software Engineering
spatial surveillance techniques in artificial intelligence
hillclimb algorithm for heuristic search
bayessian structures and its role in artificial intelligence
bayesian in artificial intelligence and search methods
HEURISTIC SEARCH IN ARTIFICIAL INTELLEGENCE
Dijkstra_Algorithm with illustarted example
Linked list data structures and algorithms
Heuristic Search Algorithm in AI and its Techniques
Ad

Recently uploaded (20)

PDF
Sciences of Europe No 170 (2025)
PPTX
ECG_Course_Presentation د.محمد صقران ppt
PDF
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
PPTX
2. Earth - The Living Planet earth and life
PPTX
Pharmacology of Autonomic nervous system
PDF
Formation of Supersonic Turbulence in the Primordial Star-forming Cloud
PPTX
neck nodes and dissection types and lymph nodes levels
PDF
Looking into the jet cone of the neutrino-associated very high-energy blazar ...
PDF
Assessment of environmental effects of quarrying in Kitengela subcountyof Kaj...
PPTX
Introduction to Fisheries Biotechnology_Lesson 1.pptx
PPTX
Overview of calcium in human muscles.pptx
PPTX
Taita Taveta Laboratory Technician Workshop Presentation.pptx
PDF
CAPERS-LRD-z9:AGas-enshroudedLittleRedDotHostingaBroad-lineActive GalacticNuc...
PDF
The scientific heritage No 166 (166) (2025)
PPTX
Protein & Amino Acid Structures Levels of protein structure (primary, seconda...
PPTX
famous lake in india and its disturibution and importance
PDF
Placing the Near-Earth Object Impact Probability in Context
PDF
Warm, water-depleted rocky exoplanets with surfaceionic liquids: A proposed c...
PDF
Cosmic Outliers: Low-spin Halos Explain the Abundance, Compactness, and Redsh...
PPTX
Introduction to Cardiovascular system_structure and functions-1
Sciences of Europe No 170 (2025)
ECG_Course_Presentation د.محمد صقران ppt
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
2. Earth - The Living Planet earth and life
Pharmacology of Autonomic nervous system
Formation of Supersonic Turbulence in the Primordial Star-forming Cloud
neck nodes and dissection types and lymph nodes levels
Looking into the jet cone of the neutrino-associated very high-energy blazar ...
Assessment of environmental effects of quarrying in Kitengela subcountyof Kaj...
Introduction to Fisheries Biotechnology_Lesson 1.pptx
Overview of calcium in human muscles.pptx
Taita Taveta Laboratory Technician Workshop Presentation.pptx
CAPERS-LRD-z9:AGas-enshroudedLittleRedDotHostingaBroad-lineActive GalacticNuc...
The scientific heritage No 166 (166) (2025)
Protein & Amino Acid Structures Levels of protein structure (primary, seconda...
famous lake in india and its disturibution and importance
Placing the Near-Earth Object Impact Probability in Context
Warm, water-depleted rocky exoplanets with surfaceionic liquids: A proposed c...
Cosmic Outliers: Low-spin Halos Explain the Abundance, Compactness, and Redsh...
Introduction to Cardiovascular system_structure and functions-1

lecture-inference-in-first-order-logic.pptx

  • 1. Inference in First-Order Logic Artificial Intelligence: Inference in First-Order Logic
  • 2. 1 A Brief History of Reasoning 450B.C. Stoics propositional logic, inference (maybe) 322B.C. Aristotle “syllogisms” (inference rules), quantifiers 1565 Cardano probability theory (propositional logic + uncertainty) 1847 Boole propositional logic (again) 1879 Frege first-order logic 1922 Wittgenstein proof by truth tables 1930 Go¨ del ∃ complete algorithm for FOL 1930 Herbrand complete algorithm for FOL (reduce to propositional) 1931 Go¨ del ¬∃ complete algorithm for arithmetic systems 1960 Davis/Putnam “practical” algorithm for propositional logic 1965 Robinson “practical” algorithm for FOL—resolution Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 3. 2 The Story So Far ●Propositional logic ●Subset of propositional logic: horn clauses ●Inference algorithms – forward chaining – backward chaining – resolution (for full propositional logic) ●First order logic (FOL) – variables – functions – quantifiers – etc. ●Today: inference for first order logic Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 4. 3 Outline ●Reducing first-order inference to propositional inference ●Unification ●Generalized Modus Ponens ●Forward and backward chaining ●Logic programming ●Resolution Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 5. 4 reduction to propositional inference Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 6. 5 Universal Instantiation ●Every instantiation of a universally quantified sentence is entailed by it: ∀ v α SUBST({v/g}, α) for any variable v and ground term g ●E.g., ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x) yields King(John) ∧ Greedy(John) =⇒ Evil(John) King(Richard) ∧ Greedy(Richard) =⇒ Evil(Richard) King(Father(John)) ∧ Greedy(Father(John)) =⇒ Evil(Father(John)) ⋮ Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 7. 6 Existential Instantiation ●For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base: ∃ v α SUBST(Iv/k}, α) ●E.g., ∃ x Crown(x) ∧ OnHead(x, John) yields Crown(C1) ∧ OnHead(C1, John) provided C1 is a new constant symbol, called a Skolem constant Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 8. 7 Instantiation ●Universal Instantiation – can be applied several times to add new sentences – the new KB is logically equivalent to the old ●Existential Instantiation – can be applied once to replace the existential sentence – the new KB is not equivalent to the old – but is satisfiable iff the old KB was satisfiable Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 9. 8 Reduction to Propositional Inference ●Suppose the KB contains just the following: ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x) King(John) Greedy(John) Brother(Richard, John) ●Instantiating the universal sentence in all possible ways, we have King(John) ∧ Greedy(John) =⇒ Evil(John) King(Richard) ∧ Greedy(Richard) =⇒ Evil(Richard) King(John) Greedy(John) Brother(Richard, John) ●The new KB is propositionalized: proposition symbols are King(John), Greedy(John), Evil(John), Brother(Richard, John), etc. Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 10. 9 Reduction to Propositional Inference ●Claim: a ground sentence is entailed by new KB iff entailed by original KB ●Claim: every FOL KB can be propositionalized so as to preserve entailment ●Idea: propositionalize KB and query, apply resolution, return result ●Problem: with function symbols, there are infinitely many ground terms, e.g., Father(Father(Father(John))) ●Theorem: Herbrand (1930). If a sentence α is entailed by an FOL KB, it is entailed by a finite subset of the propositional KB ●Idea: For n = 0 to ∞ do create a propositional KB by instantiating with depth-n terms see if α is entailed by this KB ●Problem: works if α is entailed, loops if α is not entailed ●Theorem: Turing (1936), Church (1936), entailment in FOL is semidecidable Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 11. Practical Problems with Propositionalization10 ●Propositionalization seems to generate lots of irrelevant sentences. ●E.g., from ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x) King(John) ∀ y Greedy(y) Brother(Richard, John) it seems obvious that Evil(John), but propositionalization produces lots of facts such as Greedy(Richard) that are irrelevant ●With p k-ary predicates and n constants, there are p ⋅ nk instantiations ●With function symbols, it gets nuch much worse! Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 12. 11 unification Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 13. 12 Plan ●We have the inference rule – ∀ x King(x) ∧ Greedy(x) =⇒ Evil(x) ●We have facts that (partially) match the precondition – King(John) – ∀ y Greedy(y) ●We need to match them up with substitutions: θ = Ix/John, y/John} works – unification – generalized modus ponens Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 14. 13 Unification ●UNIFY(α, β) = θ if αθ = βθ p q θ Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, Jane) Knows(y, Mary) Knows(y, Mother(y)) Knows(x, Mary) Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 15. 14 Unification ●UNIFY(α, β) = θ if αθ = βθ p q θ Ix/Jane} Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, Jane) Knows(y, Mary) Knows(y, Mother(y)) Knows(x, Mary) Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 16. 15 Unification ●UNIFY(α, β) = θ if αθ = βθ p q θ Ix/Jane} Ix/Mary, y/John} Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, Jane) Knows(y, Mary) Knows(y, Mother(y)) Knows(x, Mary) Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 17. 16 Unification ●UNIFY(α, β) = θ if αθ = βθ p q θ Ix/Jane} Ix/Mary, y/John} Iy/John, x/Mother(John)} Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, Jane) Knows(y, Mary) Knows(y, Mother(y)) Knows(x, Mary) Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 18. 17 Unification ●UNIFY(α, β) = θ if αθ = βθ p q θ Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, x) Knows(John, Jane) Knows(y, Mary) Knows(y, Mother(y)) Knows(x, Mary) Ix/Jane} Ix/Mary, y/John} Iy/John, x/Mother(John)} fail ●Standardizing apart eliminates overlap of variables, e.g., Knows(z17, Mary) Knows(John, x) Knows(z17, Mary) Iz17/John, x/Mary} Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 19. 18 generalized modus ponens Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 20. 19 Generalized Modus Ponens ●Generalized modus ponens used with KB of definite clauses (exactly one positive literal) ●All variables assumed universally quantified p1 ′, p2 ′, . . . , pn ′, (p1 ∧ p2 ∧ . . . ∧ pn ⇒ q) q θ ′ where pi θ = piθ for all i ●Rule: ●Precondition of rule: ●Implication: ●Facts: ●Substitution: ⇒ Result of modus ponens: King(x) ∧ Greedy(x) =⇒ Evil(x) p2 is Greedy(x) p1 is King(x) q is Evil(x) p1 ′ is King(John) p2 ′ is Greedy(y) θ is Ix/John, y/John} qθ is Evil(John) Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 21. 20 forward chaining Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 22. 21 Example Knowledge ●The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American. ●Prove that Col. West is a criminal Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
  • 23. 22 Example Knowledge Base ●. . . it is a crime for an American to sell weapons to hostile nations: American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) =⇒ Criminal(x) ●Nono . . . has some missiles, i.e., ∃ x Owns(Nono, x) ∧ Missile(x): Owns(Nono, M1) and Missile(M1) ●. . . all of its missiles were sold to it by Colonel West Missile(x) ∧ Owns(Nono, x) =⇒ Sells(West, x, Nono) ●Missiles are weapons: Missile(x) ⇒ Weapon(x) ●An enemy of America counts as “hostile”: Enemy(x, America) =⇒ Hostile(x) ●West, who is American . . . American(West) ●The country Nono, an enemy of America . . . Enemy(Nono, America) Philipp Koehn Artificial Intelligence: Inference in First-Order Logic 5 March 2024
23 Forward Chaining Proof
24 Forward Chaining Proof
(Note: ∀ x Missile(x) ∧ Owns(Nono, x) =⇒ Sells(West, x, Nono))
25 Forward Chaining Proof
(Note: American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) =⇒ Criminal(x))
26 Properties of Forward Chaining
●Sound and complete for first-order definite clauses (proof similar to propositional proof)
●Datalog (1977) = first-order definite clauses + no functions (e.g., crime example)
Forward chaining terminates for Datalog in polynomially many iterations:
at most p ⋅ n^k ground literals, for p predicates of maximum arity k and n constants
●May not terminate in general if α is not entailed
●This is unavoidable: entailment with definite clauses is semidecidable
27 Efficiency of Forward Chaining
●Simple observation: no need to match a rule on iteration k if no premise
was added on iteration k − 1
=⇒ match each rule whose premise contains a newly added literal
●Matching itself can be expensive
●Database indexing allows O(1) retrieval of known facts,
e.g., query Missile(x) retrieves Missile(M1)
●Matching conjunctive premises against known facts is NP-hard
●Forward chaining is widely used in deductive databases
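The indexing idea above can be sketched in a few lines: store each fact under its predicate symbol so a query only scans one bucket. The dictionary-based index and the tuple fact format are illustrative assumptions, not from the slides:

```python
from collections import defaultdict

# index each fact under its predicate symbol, so a query such as
# Missile(x) only scans the 'Missile' bucket instead of the whole KB
index = defaultdict(list)
for fact in [('Missile', 'M1'), ('Owns', 'Nono', 'M1'),
             ('American', 'West'), ('Enemy', 'Nono', 'America')]:
    index[fact[0]].append(fact)

print(index['Missile'])  # [('Missile', 'M1')]
```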
28 Hard Matching Example
Diff(wa, nt) ∧ Diff(wa, sa) ∧ Diff(nt, q) ∧ Diff(nt, sa) ∧ Diff(q, nsw) ∧ Diff(q, sa) ∧
Diff(nsw, v) ∧ Diff(nsw, sa) ∧ Diff(v, sa) =⇒ Colorable()
Diff(Red, Blue) Diff(Red, Green)
Diff(Green, Red) Diff(Green, Blue)
Diff(Blue, Red) Diff(Blue, Green)
●Colorable() is inferred iff the constraint satisfaction problem (here, map coloring) has a solution
●CSPs include 3SAT as a special case, hence matching is NP-hard
29 Forward Chaining Algorithm
function FOL-FC-ASK(KB, α) returns a substitution or false
  repeat until new is empty
    new ← { }
    for each sentence r in KB do
      (p1 ∧ . . . ∧ pn =⇒ q) ← STANDARDIZE-APART(r)
      for each θ such that (p1 ∧ . . . ∧ pn)θ = (p1′ ∧ . . . ∧ pn′)θ
          for some p1′, . . . , pn′ in KB
        q′ ← SUBST(θ, q)
        if q′ is not a renaming of a sentence already in KB or new then do
          add q′ to new
          φ ← UNIFY(q′, α)
          if φ is not fail then return φ
    add new to KB
  return false
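The pseudocode above can be sketched as runnable Python on the crime example. This is a naive version that matches premises by brute-force enumeration of known facts (no indexing, boolean answer instead of a substitution); the tuple encoding of atoms is an illustrative assumption:

```python
from itertools import product

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, term):
    if isinstance(term, tuple):
        return tuple(subst(theta, t) for t in term)
    return theta.get(term, term)

def match(pattern, fact, theta):
    """Extend theta so pattern matches the ground fact, else None."""
    if theta is None:
        return None
    if is_var(pattern):
        if pattern in theta and theta[pattern] != fact:
            return None
        return {**theta, pattern: fact}
    if isinstance(pattern, tuple):
        if not (isinstance(fact, tuple) and len(fact) == len(pattern)):
            return None
        for p, f in zip(pattern, fact):
            theta = match(p, f, theta)
        return theta
    return theta if pattern == fact else None

def fol_fc_ask(rules, facts, query):
    """Forward chaining to a fixed point; True iff query becomes a known fact."""
    facts = set(facts)
    while True:
        new = set()
        for premises, conclusion in rules:
            # try every assignment of known facts to the rule's premises
            for combo in product(facts, repeat=len(premises)):
                theta = {}
                for p, f in zip(premises, combo):
                    theta = match(p, f, theta)
                if theta is not None:
                    q = subst(theta, conclusion)
                    if q not in facts:
                        new.add(q)
        if query in facts | new:
            return True
        if not new:
            return False  # fixed point reached, query not entailed
        facts |= new

# the crime knowledge base, with lowercase strings as variables
rules = [
    ((('American','x'), ('Weapon','y'), ('Sells','x','y','z'), ('Hostile','z')),
     ('Criminal','x')),
    ((('Missile','m'), ('Owns','Nono','m')), ('Sells','West','m','Nono')),
    ((('Missile','m'),), ('Weapon','m')),
    ((('Enemy','e','America'),), ('Hostile','e')),
]
facts = [('Owns','Nono','M1'), ('Missile','M1'),
         ('American','West'), ('Enemy','Nono','America')]
print(fol_fc_ask(rules, facts, ('Criminal','West')))  # True
```

The `product` over all fact tuples is exactly the expensive conjunctive matching the previous slide warns about; real systems prune it with indexing and the newly-added-literal observation.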
30 backward chaining
31 Backward Chaining
●Start with query
●Check whether it can be derived from the given rules and facts
– apply rules that infer the query
– recurse over preconditions
32–38 Backward Chaining Example (sequence of proof-tree figures)
39 Properties of Backward Chaining
●Depth-first recursive proof search: space is linear in size of proof
●Incomplete due to infinite loops
=⇒ fix by checking current goal against every goal on stack
●Inefficient due to repeated subgoals (both success and failure)
=⇒ fix using caching of previous results (extra space!)
●Widely used (without improvements!) for logic programming
40 Backward Chaining Algorithm
function FOL-BC-ASK(KB, goals, θ) returns a set of substitutions
  inputs: KB, a knowledge base
          goals, a list of conjuncts forming a query (θ already applied)
          θ, the current substitution, initially the empty substitution { }
  local variables: answers, a set of substitutions, initially empty
  if goals is empty then return {θ}
  q′ ← SUBST(θ, FIRST(goals))
  for each sentence r in KB
      where STANDARDIZE-APART(r) = (p1 ∧ . . . ∧ pn ⇒ q)
      and θ′ ← UNIFY(q, q′) succeeds
    new goals ← [p1, . . . , pn|REST(goals)]
    answers ← FOL-BC-ASK(KB, new goals, COMPOSE(θ′, θ)) ∪ answers
  return answers
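The pseudocode above can be sketched as a Python generator over the crime example. Facts are encoded as rules with empty premise lists, variables as lowercase strings, and variables are standardized apart with a fresh suffix; all of these encodings are illustrative assumptions, and (as the previous slide notes) this naive version can loop on recursive rules:

```python
import itertools

fresh = itertools.count()

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, x):
    while is_var(x) and x in theta:
        x = theta[x]          # chase bound variables through theta
    if isinstance(x, tuple):
        return tuple(subst(theta, t) for t in x)
    return x

def unify(x, y, theta):
    if theta is None:
        return None
    x, y = subst(theta, x), subst(theta, y)
    if x == y:
        return theta
    if is_var(x):
        return {**theta, x: y}
    if is_var(y):
        return {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
        return theta
    return None

def standardize(term, n):
    """Rename variables apart by suffixing a fresh index n."""
    if is_var(term):
        return term + '#' + str(n)
    if isinstance(term, tuple):
        return tuple(standardize(t, n) for t in term)
    return term

def fol_bc_ask(kb, goals, theta):
    """Yield every substitution proving the list of goals from kb."""
    if not goals:
        yield theta
        return
    q = subst(theta, goals[0])
    for premises, conclusion in kb:
        n = next(fresh)
        premises = [standardize(p, n) for p in premises]
        theta2 = unify(standardize(conclusion, n), q, dict(theta))
        if theta2 is not None:
            yield from fol_bc_ask(kb, premises + goals[1:], theta2)

# crime KB: facts are rules with an empty premise list
crime_kb = [
    ([('American','x'), ('Weapon','y'), ('Sells','x','y','z'), ('Hostile','z')],
     ('Criminal','x')),
    ([('Missile','m'), ('Owns','Nono','m')], ('Sells','West','m','Nono')),
    ([('Missile','m')], ('Weapon','m')),
    ([('Enemy','e','America')], ('Hostile','e')),
    ([], ('Owns','Nono','M1')),
    ([], ('Missile','M1')),
    ([], ('American','West')),
    ([], ('Enemy','Nono','America')),
]
answers = list(fol_bc_ask(crime_kb, [('Criminal','v')], {}))
print(subst(answers[0], 'v'))  # prints West
```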
41 logic programming
42 Logic Programming
●Computation as inference on logical KBs
Logic programming:              Ordinary programming:
1. Identify problem             1. Identify problem
2. Assemble information         2. Assemble information
3. Tea break                    3. Figure out solution
4. Encode information in KB     4. Program solution
5. Encode problem instance      5. Encode problem instance
   as facts                        as data
6. Ask queries                  6. Apply program to data
7. Find false facts             7. Debug procedural errors
●Should be easier to debug Capital(NewYork, US) than x := x + 2 !
43 Prolog
●Basis: backward chaining with Horn clauses + bells & whistles
●Widely used in Europe and Japan (basis of the 5th Generation project)
●Compilation techniques ⇒ approaching a billion logical inferences per second
●Program = set of clauses = head :- literal1, . . . , literaln.
criminal(X) :- american(X), weapon(Y), sells(X,Y,Z), hostile(Z).
missile(m1).
owns(nono,m1).
sells(west,X,nono) :- missile(X), owns(nono,X).
weapon(X) :- missile(X).
hostile(X) :- enemy(X,america).
american(west).
enemy(nono,america).
44 Prolog Systems
●Depth-first, left-to-right backward chaining
●Built-in predicates for arithmetic etc., e.g., X is Y*Z+3
●Closed-world assumption (“negation as failure”),
e.g., given alive(X) :- not dead(X).
alive(joe) succeeds if dead(joe) fails
45 resolution
46 Resolution: Brief Summary
●Full first-order version:
l1 ∨ ⋯ ∨ lk,    m1 ∨ ⋯ ∨ mn
(l1 ∨ ⋯ ∨ li−1 ∨ li+1 ∨ ⋯ ∨ lk ∨ m1 ∨ ⋯ ∨ mj−1 ∨ mj+1 ∨ ⋯ ∨ mn)θ
where UNIFY(li, ¬mj) = θ
●For example, resolving ¬Rich(x) ∨ Unhappy(x) with Rich(Ken)
yields Unhappy(Ken), with θ = {x/Ken}
●Apply resolution steps to CNF(KB ∧ ¬α); complete for FOL
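The binary resolution rule above can be sketched in Python, representing a clause as a frozenset of literals `(positive, atom)` and reusing a simplified unifier; this representation is an illustrative assumption, not from the slides:

```python
def is_var(t):
    return isinstance(t, str) and t[0].islower()

def subst(theta, x):
    if isinstance(x, tuple):
        return tuple(subst(theta, t) for t in x)
    return theta.get(x, x)

def unify(x, y, theta):
    if theta is None:
        return None
    if x == y:
        return theta
    if is_var(x):
        return unify(subst(theta, x), y, theta) if x in theta else {**theta, x: y}
    if is_var(y):
        return unify(x, subst(theta, y), theta) if y in theta else {**theta, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            theta = unify(xi, yi, theta)
        return theta
    return None

def resolve(c1, c2):
    """All binary resolvents of two clauses (frozensets of (positive, atom))."""
    out = []
    for s1, a1 in c1:
        for s2, a2 in c2:
            if s1 != s2:                  # one positive, one negative literal
                theta = unify(a1, a2, {})
                if theta is not None:     # the pair unifies: resolve it away
                    rest = (c1 - {(s1, a1)}) | (c2 - {(s2, a2)})
                    out.append(frozenset((s, subst(theta, a)) for s, a in rest))
    return out

# ¬Rich(x) ∨ Unhappy(x)  resolved with  Rich(Ken)
c1 = frozenset({(False, ('Rich', 'x')), (True, ('Unhappy', 'x'))})
c2 = frozenset({(True, ('Rich', 'Ken'))})
print(resolve(c1, c2))  # one resolvent: Unhappy(Ken)
```

A refutation prover repeats this step on CNF(KB ∧ ¬α) until it derives the empty clause.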
47 Conversion to CNF
Everyone who loves all animals is loved by someone:
∀ x [∀ y Animal(y) =⇒ Loves(x, y)] =⇒ [∃ y Loves(y, x)]
1. Eliminate biconditionals and implications:
∀ x [¬∀ y (¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)]
2. Move ¬ inwards: ¬∀ x p ≡ ∃ x ¬p, ¬∃ x p ≡ ∀ x ¬p:
∀ x [∃ y ¬(¬Animal(y) ∨ Loves(x, y))] ∨ [∃ y Loves(y, x)]
∀ x [∃ y ¬¬Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
∀ x [∃ y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ y Loves(y, x)]
48 Conversion to CNF
3. Standardize variables: each quantifier should use a different one:
∀ x [∃ y Animal(y) ∧ ¬Loves(x, y)] ∨ [∃ z Loves(z, x)]
4. Skolemize: a more general form of existential instantiation.
Each existential variable is replaced by a Skolem function of the
enclosing universally quantified variables:
∀ x [Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x)
5. Drop universal quantifiers:
[Animal(F(x)) ∧ ¬Loves(x, F(x))] ∨ Loves(G(x), x)
6. Distribute ∧ over ∨:
[Animal(F(x)) ∨ Loves(G(x), x)] ∧ [¬Loves(x, F(x)) ∨ Loves(G(x), x)]
49 Our Previous Example
●Rules
– American(x) ∧ Weapon(y) ∧ Sells(x, y, z) ∧ Hostile(z) =⇒ Criminal(x)
– Missile(M1) and Owns(Nono, M1)
– Missile(x) ∧ Owns(Nono, x) =⇒ Sells(West, x, Nono)
– Missile(x) =⇒ Weapon(x)
– Enemy(x, America) =⇒ Hostile(x)
– American(West)
– Enemy(Nono, America)
●Converted to CNF
– ¬American(x) ∨ ¬Weapon(y) ∨ ¬Sells(x, y, z) ∨ ¬Hostile(z) ∨ Criminal(x)
– Missile(M1) and Owns(Nono, M1)
– ¬Missile(x) ∨ ¬Owns(Nono, x) ∨ Sells(West, x, Nono)
– ¬Missile(x) ∨ Weapon(x)
– ¬Enemy(x, America) ∨ Hostile(x)
– American(West)
– Enemy(Nono, America)
●Negated query: ¬Criminal(West)
50 Resolution Proof