Markov Logic
Pedro Domingos
Dept. of Computer Science & Eng., University of Washington

Desiderata
A language for cognitive modeling should:
- Handle uncertainty: noise, incomplete information, ambiguity
- Handle complexity: many objects, relations among them, IsA and IsPart hierarchies, etc.

Solution
- Probability handles uncertainty
- Logic handles complexity
What is the simplest way to combine the two?

Markov Logic
- Assign weights to logical formulas
- Treat formulas as templates for features of Markov networks

Overview
- Representation
- Inference
- Learning
- Applications

Propositional Logic
- Atoms: symbols representing propositions
- Logical connectives: ¬, ∧, ∨, etc.
- Knowledge base: set of formulas
- World: truth assignment to all atoms
- Every KB can be converted to CNF
  - CNF: conjunction of clauses
  - Clause: disjunction of literals
  - Literal: an atom or its negation
- Entailment: does the KB entail the query?

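The entailment check above can be brute-forced over worlds. A minimal sketch, with CNF encoded as clauses of signed integers (a convention chosen here for compactness, not from the slides):

```python
# Entailment: KB entails Query iff every world satisfying the KB also
# satisfies the Query. Clauses are sets of signed integers; a negative
# integer is a negated atom.
from itertools import product

def satisfies(world, cnf):
    # world: dict atom -> bool; a clause is satisfied if any literal holds
    return all(any(world[abs(l)] == (l > 0) for l in clause)
               for clause in cnf)

def entails(kb, query, atoms):
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if satisfies(world, kb) and not satisfies(world, query):
            return False  # counterexample world found
    return True

# KB = (A) ∧ (¬A ∨ B) entails B
print(entails([{1}, {-1, 2}], [{2}], [1, 2]))  # → True
```

This is exponential in the number of atoms, which is why the deck later turns to DPLL-style search.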
First-Order Logic
- Atom: Predicate(variables, constants)
- Ground atom: all arguments are constants
- Quantifiers: ∀, ∃
- This tutorial: finite, Herbrand interpretations

Markov Networks
- Undirected graphical models (e.g., over Smoking, Cancer, Asthma, Cough)
- Potential functions defined over cliques:

  Smoking  Cancer  Φ(S,C)
  False    False   4.5
  False    True    4.5
  True     False   2.7
  True     True    4.5

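For concreteness, the table above can be turned into a probability: a world's probability in a Markov network is the normalized product of its clique potentials. A minimal sketch using just the single (Smoking, Cancer) clique from the table:

```python
# P(world) = (1/Z) * product of clique potentials; here one clique.
from itertools import product

phi = {  # potential over the (Smoking, Cancer) clique, from the table
    (False, False): 4.5,
    (False, True): 4.5,
    (True, False): 2.7,
    (True, True): 4.5,
}

def unnormalized(s, c):
    return phi[(s, c)]

# Partition function Z sums the unnormalized score over all worlds
Z = sum(unnormalized(s, c) for s, c in product([False, True], repeat=2))
p = unnormalized(True, False) / Z
# P(Smoking=True, Cancer=False) = 2.7 / 16.2 = 1/6
```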
Markov Networks
- Undirected graphical models (e.g., over Smoking, Cancer, Asthma, Cough)
- Log-linear model: P(x) = (1/Z) exp(Σ_i w_i f_i(x)), where w_i is the weight of feature i

Probabilistic Knowledge Bases
PKB = set of formulas and their probabilities
    + consistency + maximum entropy
    = set of formulas and their weights
    = set of formulas and their potentials (e^w if the formula is true, 1 if false)

Markov Logic
- A Markov Logic Network (MLN) is a set of pairs (F, w) where
  - F is a formula in first-order logic
  - w is a real number
- An MLN defines a Markov network with
  - One node for each grounding of each predicate in the MLN
  - One feature for each grounding of each formula F in the MLN, with the corresponding weight w

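A minimal sketch of the templating idea, using a hypothetical weighted formula w: Smokes(x) => Cancer(x) and a two-constant domain (both invented for illustration). Each grounding of the formula contributes one feature, and a world's unnormalized probability is exp(w × number of satisfied groundings):

```python
# Ground a single weighted first-order formula over a finite domain and
# score a world. The formula, constants, and world are hypothetical.
import math

constants = ["Anna", "Bob"]
w = 1.5  # weight of: Smokes(x) => Cancer(x)

def features(world):
    # One binary feature per grounding of the formula (one per constant)
    return [(not world[("Smokes", c)]) or world[("Cancer", c)]
            for c in constants]

world = {("Smokes", "Anna"): True, ("Cancer", "Anna"): True,
         ("Smokes", "Bob"): True, ("Cancer", "Bob"): False}

score = math.exp(w * sum(features(world)))  # unnormalized probability
# One grounding satisfied (Anna), one violated (Bob): score = e^1.5
```

Normalizing over all worlds (as in the Markov-network sketch earlier) would give the actual probability.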
Relation to Statistical Models
- Special cases: Markov networks, Markov random fields, Bayesian networks, log-linear models, exponential models, max-entropy models, Gibbs distributions, Boltzmann machines, logistic regression, hidden Markov models, conditional random fields
- Obtained by making all predicates zero-arity
- Markov logic allows objects to be interdependent (non-i.i.d.)
- Markov logic facilitates composition

Relation to First-Order Logic
- Infinite weights ⇒ first-order logic
- Satisfiable KB, positive weights ⇒ satisfying assignments = modes of the distribution
- Markov logic allows contradictions between formulas

Example
(A run of worked-example slides; their formulas were not extracted.)

Overview
- Representation
- Inference
- Learning
- Applications

Theorem Proving

TP(KB, Query)
  KBQ ← KB ∪ {¬Query}
  return ¬SAT(CNF(KBQ))

Satisfiability (DPLL)

SAT(CNF)
  if CNF is empty return True
  if CNF contains an empty clause return False
  choose an atom A
  return SAT(CNF(A)) ∨ SAT(CNF(¬A))

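A minimal sketch of this recursion in Python (clauses as sets of signed integers, negative meaning negated; unit propagation and pure-literal elimination omitted, as in the slide):

```python
# DPLL: condition the CNF on a literal, then recurse on both branches.
def condition(cnf, lit):
    out = []
    for clause in cnf:
        if lit in clause:
            continue              # clause satisfied: drop it
        out.append(clause - {-lit})  # falsified literal: remove it
    return out

def sat(cnf):
    if not cnf:
        return True               # no clauses left: satisfiable
    if any(len(c) == 0 for c in cnf):
        return False              # empty clause: this branch fails
    atom = abs(next(iter(cnf[0])))  # choose an atom
    return sat(condition(cnf, atom)) or sat(condition(cnf, -atom))

print(sat([{1, 2}, {-1}]))            # → True  ((A ∨ B) ∧ ¬A)
print(sat([{1, 2}, {-1, 2}, {-2}]))   # → False (forces B, forbids B)
```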
First-Order Theorem Proving
- Propositionalization
  1. Form all possible ground atoms
  2. Apply a propositional theorem prover
- Lifted inference: resolution
  - Resolve pairs of clauses until the empty clause is derived
  - Unify literals by substitution

Probabilistic Theorem Proving

Given: probabilistic knowledge base K, query formula Q
Output: P(Q | K)

Weighted Model Counting
- ModelCount(CNF) = number of worlds that satisfy the CNF
- Assign a weight to each literal; Weight(world) = Π weights(true literals)
- Weighted model counting:
  Given: CNF C and literal weights W
  Output: Σ weights(worlds that satisfy C)
- PTP is reducible to lifted WMC

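By definition, WMC can be brute-forced by enumerating worlds; a minimal sketch (clauses encoded as sets of signed integers, negative meaning negated):

```python
# Weighted model counting by enumeration: sum, over satisfying worlds,
# the product of the weights of that world's true literals.
from itertools import product

def wmc(cnf, n_atoms, weight):
    # weight: dict literal -> weight, for both signs of every atom
    total = 0.0
    for bits in product([False, True], repeat=n_atoms):
        lits = [(i + 1) if b else -(i + 1) for i, b in enumerate(bits)]
        if all(any(l in clause for l in lits) for clause in cnf):
            w = 1.0
            for l in lits:
                w *= weight[l]
            total += w
    return total

# All weights 1 recovers plain model counting: (x1 ∨ x2) has 3 models
w1 = {1: 1.0, -1: 1.0, 2: 1.0, -2: 1.0}
count = wmc([{1, 2}], 2, w1)   # → 3.0
```

With literal weights chosen as probabilities (w_A + w_¬A = 1), the result is the probability that the CNF is satisfied.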
Example
(A run of worked-example slides, including an If/Then step; their formulas were not extracted.)

Inference Problems

Propositional Case
- All conditional probabilities are ratios of partition functions: P(Q | K) = Z(K ∧ Q) / Z(K)
- All partition functions can be computed by weighted model counting

Conversion to CNF + Weights

WCNF(PKB)
  for all (Fi, Φi) ∈ PKB s.t. Φi > 0 do
    PKB ← PKB ∪ {(Fi ⇔ Ai, 0)} \ {(Fi, Φi)}
  CNF ← CNF(PKB)
  for all ¬Ai literals do W¬Ai ← Φi
  for all other literals L do WL ← 1
  return (CNF, weights)

Probabilistic Theorem Proving

PTP(PKB, Query)
  PKBQ ← PKB ∪ {(Query, 0)}
  return WMC(WCNF(PKBQ)) / WMC(WCNF(PKB))

Compare:

TP(KB, Query)
  KBQ ← KB ∪ {¬Query}
  return ¬SAT(CNF(KBQ))

Weighted Model Counting

WMC(CNF, weights)
  if all clauses in CNF are satisfied                              // base case
    return Π_A (wA + w¬A) over the remaining atoms A
  if CNF has an empty unsatisfied clause
    return 0
  if CNF can be partitioned into CNFs C1, …, Ck sharing no atoms   // decomposition step
    return Π_i WMC(Ci, weights)
  choose an atom A                                                 // splitting step
  return wA · WMC(CNF | A, weights) + w¬A · WMC(CNF | ¬A, weights)

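A minimal sketch of the full recursion in Python (clauses as sets of signed integers; satisfied clauses are dropped when conditioning, so an empty clause list means all clauses are satisfied):

```python
# Recursive WMC with base case, decomposition step, and splitting step.
def condition(cnf, lit):
    return [c - {-lit} for c in cnf if lit not in c]

def components(cnf):
    # Partition the clauses into groups sharing no atoms
    comps, pending = [], list(cnf)
    while pending:
        comp = [pending.pop()]
        atoms = {abs(l) for l in comp[0]}
        grew = True
        while grew:
            grew = False
            for c in pending[:]:
                if atoms & {abs(l) for l in c}:
                    comp.append(c)
                    pending.remove(c)
                    atoms |= {abs(l) for l in c}
                    grew = True
        comps.append(comp)
    return comps

def free_weight(atoms, weight):
    # Base-case product over unconstrained atoms
    w = 1.0
    for a in atoms:
        w *= weight[a] + weight[-a]
    return w

def wmc(cnf, atoms, weight):
    if not cnf:                              # base case: all satisfied
        return free_weight(atoms, weight)
    if any(len(c) == 0 for c in cnf):        # empty unsatisfied clause
        return 0.0
    comps = components(cnf)
    if len(comps) > 1:                       # decomposition step
        used = {abs(l) for c in cnf for l in c}
        total = free_weight(atoms - used, weight)
        for comp in comps:
            comp_atoms = {abs(l) for c in comp for l in c}
            total *= wmc(comp, comp_atoms, weight)
        return total
    a = abs(next(iter(cnf[0])))              # splitting step
    rest = atoms - {a}
    return (weight[a] * wmc(condition(cnf, a), rest, weight) +
            weight[-a] * wmc(condition(cnf, -a), rest, weight))

# With literal weights as probabilities, wmc gives P(CNF satisfied):
w = {1: 0.3, -1: 0.7, 2: 0.6, -2: 0.4}
p = wmc([{1, 2}], {1, 2}, w)   # → 0.72, i.e. 1 − P(¬x1)·P(¬x2)
```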
First-Order Case
- The PTP schema remains the same
- Conversion of PKB to hard CNF and weights: the new atom in Fi ⇔ Ai is now Predicatei(variables in Fi, constants in Fi)
- New argument in WMC: a set of substitution constraints of the form x = A, x ≠ A, x = y, x ≠ y
- Lift each step of WMC

Lifted Weighted Model Counting

LWMC(CNF, substs, weights)
  if all clauses in CNF are satisfied              // base case
    return …
  if CNF has an empty unsatisfied clause
    return 0
  if there exists a lifted decomposition of CNF    // decomposition step
    return …
  choose an atom A                                  // splitting step
  return …

(The lifted return expressions were not extracted from the slides.)

Extensions
- Unit propagation, etc.
- Caching / memoization
- Knowledge-based model construction

Approximate Inference
- Same recursion as WMC, but the splitting step is sampled: choose an atom A and recurse on a single branch, selected with some probability, instead of summing over both (importance sampling); etc.

MPE Inference
- Replace sums by maxes
- Use branch-and-bound for efficiency
- Do traceback

Overview
- Representation
- Inference
- Learning
- Applications

Learning
- Data is a relational database
- Closed world assumption (if not: EM)
- Learning parameters (weights): generatively or discriminatively
- Learning structure (formulas)

Generative Weight Learning
- Maximize likelihood
- Use gradient ascent or L-BFGS
- No local maxima
- Gradient: ∂/∂wi log Pw(x) = ni(x) − Ew[ni(x)]
  (no. of true groundings of clause i in the data, minus the expected no. according to the model)
- Requires inference at each step (slow!)

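A minimal sketch of the gradient ascent, on a hypothetical two-atom model where n(x) counts true atoms and the "data" count is 1.5 (all invented for illustration); the expectation is computed by brute-force enumeration, which is exactly the expensive inference step the slide warns about:

```python
# Gradient ascent on log-likelihood of P(x) = exp(w * n(x)) / Z:
# the gradient is n(data) - E_w[n(x)].
import math
from itertools import product

worlds = list(product([0, 1], repeat=2))

def n(x):
    # hypothetical clause count: number of true atoms in the world
    return sum(x)

n_data = 1.5   # average observed count in the (hypothetical) data

w, eta = 0.0, 0.1
for _ in range(500):
    Z = sum(math.exp(w * n(x)) for x in worlds)
    expected = sum(n(x) * math.exp(w * n(x)) for x in worlds) / Z
    w += eta * (n_data - expected)   # gradient step
# Converges to w = ln 3 ≈ 1.10, where E_w[n(x)] = 1.5 matches the data
```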
Pseudo-Likelihood
- Likelihood of each variable given its neighbors in the data [Besag, 1975]
- Does not require inference at each step
- Consistent estimator
- Widely used in vision, spatial statistics, etc.
- But PL parameters may not work well for long inference chains

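A minimal sketch of the pseudo-likelihood computation, for a log-linear model with a single hypothetical clause count n(x): each atom's conditional given the rest of the data needs only local counts, so no partition function over worlds is required:

```python
# Pseudo-log-likelihood: sum over atoms of log P(x_i | rest of x), with
# P(x_i = v | rest) = exp(w*n(x[i<-v])) / sum_v' exp(w*n(x[i<-v'])).
import math

def n(x):
    # hypothetical clause count: number of true atoms
    return sum(x)

def pseudo_log_likelihood(w, x):
    pll = 0.0
    for i in range(len(x)):
        x1, x0 = list(x), list(x)
        x1[i], x0[i] = 1, 0
        s1 = math.exp(w * n(x1))   # score with atom i flipped on
        s0 = math.exp(w * n(x0))   # score with atom i flipped off
        pll += math.log((s1 if x[i] else s0) / (s1 + s0))
    return pll

x_data = [1, 0, 1]
value = pseudo_log_likelihood(0.5, x_data)
# Cost is linear in the number of atoms: no enumeration of worlds
```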
Discriminative Weight Learning
- Maximize conditional likelihood of query (y) given evidence (x)
- Gradient: ∂/∂wi log Pw(y | x) = ni(x, y) − Ew[ni(x, y)]
  (no. of true groundings of clause i in the data, minus the expected no. according to the model)
- Expected counts can be approximated by counts in the MAP state of y given x

Voted Perceptron
- Originally proposed for training HMMs discriminatively [Collins, 2002]
- Assumes the network is a linear chain

wi ← 0
for t ← 1 to T do
  yMAP ← Viterbi(x)
  wi ← wi + η [counti(yData) − counti(yMAP)]
return Σt wi / T

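A minimal sketch of the averaged update, with hypothetical toy features and a brute-forced MAP in place of Viterbi (the features, domain, and data here are all invented for illustration):

```python
# Voted/averaged perceptron: push weights toward data counts and away
# from MAP counts, then return the average over iterations.
from itertools import product

def counts(y, x):
    # hypothetical features: [positions where y agrees with x,
    #                         whether y's two labels agree with each other]
    return [sum(int(yi == xi) for yi, xi in zip(y, x)),
            int(y[0] == y[1])]

x = (1, 0)
y_data = (1, 0)

def map_state(w, x):
    # brute-force MAP over 2-bit label vectors (stands in for Viterbi)
    return max(product([0, 1], repeat=2),
               key=lambda y: sum(wi * ci for wi, ci in zip(w, counts(y, x))))

w = [0.0, 0.0]
avg = [0.0, 0.0]
eta, T = 1.0, 10
for t in range(T):
    y_map = map_state(w, x)
    c_data, c_map = counts(y_data, x), counts(y_map, x)
    w = [wi + eta * (cd - cm) for wi, cd, cm in zip(w, c_data, c_map)]
    avg = [a + wi for a, wi in zip(avg, w)]
w_avg = [a / T for a in avg]
# After training, the MAP state under w_avg matches the data
```

The averaging over iterations is what distinguishes this from a plain perceptron and is what the final `return Σt wi / T` line in the pseudocode computes.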
Voted Perceptron for MLNs
- HMMs are a special case of MLNs
- Replace Viterbi by probabilistic theorem proving
- The network can now be an arbitrary graph

wi ← 0
for t ← 1 to T do
  yMAP ← PTP(MLN ∪ {x}, y)
  wi ← wi + η [counti(yData) − counti(yMAP)]
return Σt wi / T

Structure Learning
- Generalizes feature induction in Markov nets
- Any inductive logic programming approach can be used, but...
- Goal is to induce any clauses, not just Horn clauses
- Evaluation function should be likelihood
- Requires learning weights for each candidate
  - Turns out not to be the bottleneck
  - Bottleneck is counting clause groundings
  - Solution: subsampling

Structure Learning
- Initial state: unit clauses or a hand-coded KB
- Operators: add/remove literal, flip sign
- Evaluation function: pseudo-likelihood + structure prior
- Search:
  - Beam, shortest-first [Kok & Domingos, 2005]
  - Bottom-up [Mihalkova & Mooney, 2007]
  - Relational pathfinding [Kok & Domingos, 2009, 2010]

Alchemy
Open-source software including:
- Full first-order logic syntax
- MAP and marginal/conditional inference
- Generative & discriminative weight learning
- Structure learning
- Programming language features
alchemy.cs.washington.edu

                Alchemy                     Prolog           BUGS
Representation  F.O. logic + Markov nets    Horn clauses     Bayes nets
Inference       Probabilistic thm. proving  Theorem proving  Gibbs sampling
Learning        Parameters & structure      No               Params.
Uncertainty     Yes                         No               Yes
Relational      Yes                         Yes              No

Overview
- Representation
- Inference
- Learning
- Applications

Applications to Date
- Natural language processing
- Information extraction
- Entity resolution
- Link prediction
- Collective classification
- Social network analysis
- Robot mapping
- Activity recognition
- Scene analysis
- Computational biology
- Probabilistic Cyc
- Personal assistants
- Etc.

Information Extraction
(Example input: four messy, inconsistent citations of the same work; the errors are part of the example.)

Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06).

Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press.

H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006.

P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

Segmentation (fields: Author / Title / Venue)

Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06).

Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press.

H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006.

P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

Entity Resolution

Parag Singla and Pedro Domingos, “Memory-Efficient Inference in Relational Domains” (AAAI-06).

Singla, P., & Domingos, P. (2006). Memory-efficent inference in relatonal domains. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (pp. 500-505). Boston, MA: AAAI Press.

H. Poon & P. Domingos, Sound and Efficient Inference with Probabilistic and Deterministic Dependencies”, in Proc. AAAI-06, Boston, MA, 2006.

P. Hoifung (2006). Efficent inference. In Proceedings of the Twenty-First National Conference on Artificial Intelligence.

State of the Art
- Segmentation: HMM (or CRF) to assign each token to a field
- Entity resolution: logistic regression to predict same field/citation, then transitive closure
- Alchemy implementation: seven formulas

Types and Predicates

token = {Parag, Singla, and, Pedro, ...}
field = {Author, Title, Venue}
citation = {C1, C2, ...}
position = {0, 1, 2, ...}

Token(token, position, citation)        // evidence
InField(position, field, citation)      // query
SameField(field, citation, citation)    // query (optional: joint matching)
SameCit(citation, citation)             // query (optional: joint matching)

Formulas

Token(+t, i, c) => InField(i, +f, c)
InField(i, +f, c) <=> InField(i+1, +f, c)
f != f’ => (!InField(i, +f, c) v !InField(i, +f’, c))
Token(+t, i, c) ^ InField(i, +f, c) ^ Token(+t, i’, c’) ^ InField(i’, +f, c’)
    => SameField(+f, c, c’)
SameField(+f, c, c’) <=> SameCit(c, c’)
SameField(f, c, c’) ^ SameField(f, c’, c”) => SameField(f, c, c”)
SameCit(c, c’) ^ SameCit(c’, c”) => SameCit(c, c”)

Refinement (a field does not continue across a period token):

InField(i, +f, c) ^ !Token(“.”, i, c) <=> InField(i+1, +f, c)

Results: Segmentation on Cora
(chart not extracted)

Results: Matching Venues on Cora
(chart not extracted)

Summary
- Cognitive modeling requires a combination of logical and statistical techniques
- We need to unify the two
- Markov logic:
  - Syntax: weighted logical formulas
  - Semantics: Markov network templates
  - Inference: probabilistic theorem proving
  - Learning: statistical inductive logic programming
- Many applications to date

Resources
- Open-source software / Web site: Alchemy
  - Learning and inference algorithms
  - Tutorials, manuals, etc.
  - MLNs, datasets, etc.
  - Publications
  alchemy.cs.washington.edu
- Book: Domingos & Lowd, Markov Logic, Morgan & Claypool, 2009.