
THE MATHEMATICS OF CAUSE AND EFFECT: Thinking Nature and Talking Counterfactuals
Judea Pearl
Departments of Computer Science and Statistics, UCLA

OUTLINE
1. From the Turing test to Bayes networks
2. From Bayes networks to do-calculus
3. From messy science to counterfactuals
4. From counterfactuals to practical victories
   a) policy evaluation
   b) attribution
   c) mediation
   d) generalizability (extending validity)
   e) new thrills: missing data

CAN MACHINES THINK? Alan M. Turing (1912–1954)
• The Turing Test: "Computing Machinery and Intelligence" (1950)
• Turing: Yes, if it acts like it thinks.
• Acts = it answers non-trivial questions about a story, a topic, or a situation.

HOW TURING ENVISIONED THE TEST CONVERSATION
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34,957 to 70,764.
A: (Pause about 30 seconds and then give an answer) 105,721.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8: mate!

A “MINI” TURING TEST IN CAUSAL CONVERSATION
The Story (image adapted from Saygin, 2000)
Input: the story. Questions: What is? What if? Why? Answers: I believe that...
Q1: If the season is dry and the pavement is slippery, did it rain?
A1: Unlikely; it is more likely that the sprinkler was ON, with a very slight possibility that it is not even wet.

Q2: But what if we SEE that the sprinkler is OFF?
A2: Then it is more likely that it rained.

Q3: Do you mean that if we actually turn the sprinkler ON, the rain will be less likely?
A3: No, the likelihood of rain would remain the same, but the pavement will surely get wet.

Q4: Suppose we SEE that the sprinkler is ON and the pavement is wet. What if the sprinkler were OFF?
A4: The pavement would be dry, because the season is likely dry.


SEARLE’S CHINESE ROOM ARGUMENT

WHAT’S IN SEARLE’S RULE BOOK?
Searle's oversight: there are not enough molecules in the universe to make the book, even for the sprinkler example. This is why causal conversation makes a revealing test.

IS PARSIMONY NECESSARY (SUFFICIENT) FOR UNDERSTANDING?
Understanding requires translating world constraints into a grammar (constraints over symbol strings) and harnessing it to answer queries swiftly and reliably. Parsimony can only be achieved by exploiting the constraints in the world to beat the combinatorial explosion.

THE PLURALITY OF MINI TURING TESTS
[Diagram: the Turing Test split into many mini tests: Poetry, Chess, Medicine, Stock market, Robotics, Scientific thinking, Human Cognition and Ethics, Causal Reasoning, data-intensive scientific applications, and thousands of hungry and aimless customers.]

THE PLURALITY OF MINI TURING TESTS
[Map slide repeated, with Causal Reasoning among the mini tests.]

Causal Explanation
“She handed me the fruit and I ate.”
“The serpent deceived me, and I ate.”

COUNTERFACTUALS AND OUR SENSE OF JUSTICE
Abraham: Are you about to smite the righteous with the wicked? What if there were fifty righteous men in the city?
And the Lord said, “If I find in the city of Sodom fifty good men, I will pardon the whole place for their sake.” Genesis 18:26

THE PLURALITY OF MINI TURING TESTS
[Map slide repeated.]

WHY PHYSICS IS COUNTERFACTUAL
Scientific equations (e.g., Hooke’s law) are non-algebraic. E.g., length (Y) equals a constant (2) times the weight (X).
Correct notation: Y := 2X, not Y = 2X.
Process information: X = 1. The solution: Y = 2.
Counterfactual: Had X been 3, Y would be 6. If we raise X to 3, Y will be 6. We must “wipe out” X = 1 and replace it with X = 3.
Alternative: the pair X = ½Y, Y = X + 1 has the same solution (X = 1, Y = 2), yet setting X = 3 in it yields Y = 4, not 6, which is why the assignment notation matters.


THE PLURALITY OF MINI TURING TESTS
[Map slide repeated.]

CAUSATION AS A PROGRAMMER'S NIGHTMARE
Input:
1. “If the grass is wet, then it rained.”
2. “If we break this bottle, the grass will get wet.”
Output: “If we break this bottle, then it rained.”
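The nightmare is easy to reproduce. A toy sketch (mine, not from the slides): store both sentences as undifferentiated if-then rules, and forward chaining happily "proves" that breaking the bottle makes it rain. The cure, developed next, is a syntax that separates seeing from doing.

```python
# Toy sketch (assumed): both rules stored as bare implications, with no
# distinction between evidential ("if wet, it rained") and causal rules.
rules = {
    "grass_wet": "it_rained",       # evidential: wet grass is evidence of rain
    "bottle_broken": "grass_wet",   # causal: breaking the bottle wets the grass
}

def forward_chain(fact):
    """Naively close a fact under the (undifferentiated) rules."""
    derived = {fact}
    while fact in rules:
        fact = rules[fact]
        derived.add(fact)
    return derived

# The absurd output of the slide: breaking the bottle "implies" it rained.
print(forward_chain("bottle_broken"))  # {'bottle_broken', 'grass_wet', 'it_rained'}
```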

WHAT KIND OF QUESTIONS SHOULD THE ROBOT ANSWER?
• Observational questions: “What if we see A?” (What is?) P(y | A)
• Action questions: “What if we do A?” (What if?) P(y | do(A))
• Counterfactual questions: “What if we did things differently?” “With what probability?” (Why?) P(y_{A'} | A)
THE CAUSAL HIERARCHY: a syntactic distinction

THE PLURALITY OF MINI TURING TESTS
[Map slide repeated.]

STRUCTURAL CAUSAL MODELS: THE WORLD AS A COLLECTION OF SPRINGS
Definition: A structural causal model is a 4-tuple ⟨V, U, F, P(u)⟩, where
• V = {V1, ..., Vn} are endogenous variables,
• U = {U1, ..., Um} are background variables,
• F = {f1, ..., fn} are functions determining V: vi = fi(v, u),
• P(u) is a distribution over U.
P(u) and F induce a distribution P(v) over the observable variables.
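As a concrete instance, here is the deck's sprinkler story coded as such a 4-tuple; a minimal sketch, with parameters that are illustrative assumptions rather than anything taken from the slides.

```python
import random

# <V, U, F, P(u)> for the sprinkler story (illustrative parameters).
# V = {C, S, R, W};  U = four independent background variables.

def sample_u():
    """P(u): independent uniform background variables."""
    return {k: random.random() for k in ("uc", "us", "ur", "uw")}

def f(u):
    """F: each endogenous variable is a deterministic function of its parents and U."""
    c = u["uc"] < 0.5                      # C: the season is dry
    s = u["us"] < (0.7 if c else 0.2)      # S: sprinkler ON (more likely when dry)
    r = u["ur"] < (0.1 if c else 0.8)      # R: rain (more likely when not dry)
    w = s or r or (u["uw"] < 0.01)         # W: pavement wet
    return {"C": c, "S": s, "R": r, "W": w}

# P(u) and F induce P(v) over the observables:
random.seed(0)
samples = [f(sample_u()) for _ in range(100_000)]
print(sum(v["W"] for v in samples) / len(samples))   # the induced P(W = wet)
```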

TRADITIONAL STATISTICAL INFERENCE PARADIGM
Data → Joint Distribution P → Inference → Q(P) (aspects of P)
E.g., infer whether customers who bought product A would also buy product B: Q = P(B | A).

THE STRUCTURAL MODEL PARADIGM
Data → Data-Generating Model M → Inference → Q(M) (aspects of M), with M inducing the joint distribution.
M = the invariant strategy (mechanism, recipe, law, protocol) by which Nature assigns values to the variables in the analysis.
• “A painful de-crowning of a beloved oracle!”

COUNTERFACTUALS ARE EMBARRASSINGLY SIMPLE
Definition: The sentence “Y would be y (in situation u), had X been x,” denoted Y_x(u) = y, means: the solution for Y in the mutilated model M_x (i.e., the model with the equation for X replaced by X = x), with input U = u, is equal to y.
The Fundamental Equation of Counterfactuals: Y_x(u) = Y_{M_x}(u).
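A minimal sketch of this definition, using the Y := 2X example from the physics slide: computing Y_x(u) is just solving the surgically altered model at the same u.

```python
# Model M: X := u, Y := 2X.  M_x replaces X's equation by the constant x.
def solve(u, do_x=None):
    x = u if do_x is None else do_x   # X's equation, unless wiped out by do(X = x)
    y = 2 * x                         # Y := 2X survives the surgery
    return x, y

u = 1
print(solve(u))           # (1, 2): the actual world, X = 1, Y = 2
print(solve(u, do_x=3))   # (3, 6): Y_{X=3}(u) = 6, "had X been 3, Y would be 6"
```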

READING COUNTERFACTUALS FROM AN SEM
Data show: a student named Joe measured X = 0.5, Z = 1.0, Y = 1.9.
Q1: What would Joe's score be had he doubled his study time?
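The slide's structural equations did not survive extraction, so the sketch below assumes a linear SEM of the kind Pearl uses for this example; the coefficients a, b, c are my assumptions, chosen so the counterfactual comes out to 2.3 (matching the stray "2.30" in the extracted slide title). It walks the standard abduction-action-prediction steps.

```python
# Assumed linear SEM (coefficients illustrative):
#   X := u1,  Z := a*X + u2,  Y := b*Z + c*X + u3
a, b, c = 0.5, 0.4, 0.7
x, z, y = 0.5, 1.0, 1.9                  # Joe's measurements

# 1) Abduction: recover Joe's background variables from his record.
u1, u2, u3 = x, z - a * x, y - b * z - c * x

# 2) Action: mutilate the model, replacing Z's equation by Z := 2*z (doubled study time).
z_new = 2 * z

# 3) Prediction: solve the mutilated model with Joe's u's.
y_new = b * z_new + c * x + u3
print(y_new)   # = y + b*(2z - z) = 1.9 + 0.4 = 2.3 under these assumed coefficients
```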

THE TWO FUNDAMENTAL LAWS OF CAUSAL INFERENCE
1. The Law of Counterfactuals: Y_x(u) = Y_{M_x}(u) (M generates and evaluates all counterfactuals).
2. The Law of Conditional Independence (d-separation): separation in the model ⇒ independence in the distribution.

THE LAW OF CONDITIONAL INDEPENDENCE
Graph: C (Climate) → S (Sprinkler) and C → R (Rain); S → W (Wetness) and R → W; background variables U1, ..., U4 feed C, S, R, and W.
Each function summarizes millions of micro-processes. Still, if the U's are independent, the observed distribution P(C, R, S, W) must satisfy certain constraints that are (1) independent of the f's and of P(U), and (2) readable off the structure of the graph.

D-SEPARATION: NATURE’S LANGUAGE FOR COMMUNICATING ITS STRUCTURE
Every missing arrow advertises an independency, conditional on a separating set; e.g., C ⊥ W | (S, R) and S ⊥ R | C (a numeric check follows).
Applications:
1. Structure learning
2. Model testing
3. Reducing “what if I do” questions to symbolic calculus
4. Reducing scientific questions to symbolic calculus
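A quick numeric check of the advertised independency S ⊥ R | C. The parameters are illustrative; the independence holds for any choice, which is exactly the law's claim that the constraint is independent of the f's and P(U).

```python
from itertools import product

# P(C), P(S|C), P(R|C): any numbers work; the graph C -> S, C -> R does the rest.
pC = 0.5
pS = {True: 0.7, False: 0.2}   # P(S = on | C)
pR = {True: 0.1, False: 0.8}   # P(R = rain | C)

def joint(c, s, r):
    """Joint P(c, s, r) as dictated by the graph."""
    return ((pC if c else 1 - pC)
            * (pS[c] if s else 1 - pS[c])
            * (pR[c] if r else 1 - pR[c]))

for c, s, r in product((True, False), repeat=3):
    pc = sum(joint(c, s2, r2) for s2, r2 in product((True, False), repeat=2))
    lhs = joint(c, s, r) / pc                                     # P(s, r | c)
    rhs = (sum(joint(c, s, r2) for r2 in (True, False)) / pc      # P(s | c)
           * sum(joint(c, s2, r) for s2 in (True, False)) / pc)   # P(r | c)
    assert abs(lhs - rhs) < 1e-12
print("S and R are independent given C, regardless of the parameters")
```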


SEEING VS. DOING Effect of turning the sprinkler ON
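Continuing the sprinkler sketch (same illustrative parameters): conditioning on S = on changes the probability of rain, while do(S = on) leaves it untouched, exactly as A1 and A3 claimed.

```python
import random

random.seed(0)
def world(do_s=None):
    """One sample from the sprinkler model; do_s overrides S's equation."""
    c = random.random() < 0.5                                         # dry season
    s = do_s if do_s is not None else random.random() < (0.7 if c else 0.2)
    r = random.random() < (0.1 if c else 0.8)                         # rain
    return s, r

obs = [world() for _ in range(200_000)]
p_see = sum(r for s, r in obs if s) / sum(1 for s, _ in obs if s)
p_do = sum(r for _, r in (world(True) for _ in range(200_000))) / 200_000
print(f"P(R | S=on)     ~ {p_see:.2f}")   # ~0.26: seeing the sprinkler ON argues against rain
print(f"P(R | do(S=on)) ~ {p_do:.2f}")    # ~0.45 = P(R): doing tells us nothing about rain
```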

THE FIVE NECESSARY STEPS FOR CAUSAL INFERENCE
The same five steps govern effect estimation, the average treatment effect, dynamic policy analysis, time-varying policy analysis, the effect of treatment on the treated, indirect effects, and the passage from definition to assumptions (a worked sketch follows the list):
Define: Express the target quantity Q as a property of the model M.
Assume: Express causal assumptions in structural or graphical form.
Identify: Determine whether Q is identifiable.
Estimate: Estimate Q if it is identifiable; approximate it if it is not.
Test: If M has testable implications, test them against the data.
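A compact sketch of the five steps on an assumed toy model (the graph and numbers are mine, not the slides'): Q = P(Y=1 | do(X=1)) under Z → X, Z → Y, X → Y, identified by backdoor adjustment.

```python
import random

random.seed(0)

# Assume: Z -> X, Z -> Y, X -> Y, with the confounder Z observed (toy parameters).
def draw():
    z = random.random() < 0.5
    x = random.random() < (0.8 if z else 0.2)
    y = random.random() < ((0.6 if x else 0.2) + (0.2 if z else 0.0))
    return z, x, y

data = [draw() for _ in range(200_000)]

# Identify: the backdoor criterion gives Q = sum_z P(Y=1 | X=1, z) P(z).
def pr(event, cond=lambda t: True):
    sel = [t for t in data if cond(t)]
    return sum(event(t) for t in sel) / len(sel)

# Estimate: plug in sample frequencies.
q = sum(pr(lambda t: t[2], lambda t: t[1] and t[0] == zv) * pr(lambda t: t[0] == zv)
        for zv in (True, False))
print(f"P(Y=1 | do(X=1)) ~ {q:.3f}  (analytic: 0.5*0.8 + 0.5*0.6 = 0.70)")

# Test: this graph is complete over {Z, X, Y}, so it implies no independencies to test.
```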

THE LOGIC OF CAUSAL ANALYSIS
A – causal assumptions → causal model M_A; A* – logical implications of A.
Q – queries of interest → (causal inference) → Q(P) – identified estimands.
Data (D) → (statistical inference) → Q̂ – estimates of Q(P) → provisional claims.
T(M_A) – testable implications → goodness of fit → model testing.

THE MACHINERY OF CAUSAL CALCULUS
Rule 1 (ignoring observations): P(y | do{x}, z, w) = P(y | do{x}, w) if (Y ⊥ Z | X, W) in the graph with arrows into X deleted.
Rule 2 (action/observation exchange): P(y | do{x}, do{z}, w) = P(y | do{x}, z, w) if (Y ⊥ Z | X, W) in the graph with arrows into X and arrows out of Z deleted.
Rule 3 (ignoring actions): P(y | do{x}, do{z}, w) = P(y | do{x}, w) if (Y ⊥ Z | X, W) in the graph with arrows into X and into Z(W) deleted, where Z(W) is the set of Z-nodes that are not ancestors of any W-node once arrows into X are removed.
Completeness theorem (Shpitser & Pearl, 2006)

DERIVATION IN CAUSAL CALCULUS
Graph: Smoking → Tar → Cancer, with an unobserved Genotype confounding Smoking and Cancer.
The slide derives P(cancer | do(smoking)) by alternating the probability axioms with Rules 2 and 3 of the calculus.
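The slide's equations were lost in extraction; the derivation it sketches is the standard front-door argument, reconstructed here (s' ranges over smoking levels):

```latex
\begin{align*}
P(c \mid \mathrm{do}(s))
 &= \textstyle\sum_t P(c \mid \mathrm{do}(s), t)\, P(t \mid \mathrm{do}(s))
    && \text{probability axioms} \\
 &= \textstyle\sum_t P(c \mid \mathrm{do}(s), \mathrm{do}(t))\, P(t \mid s)
    && \text{Rule 2 (twice)} \\
 &= \textstyle\sum_t P(c \mid \mathrm{do}(t))\, P(t \mid s)
    && \text{Rule 3} \\
 &= \textstyle\sum_t \sum_{s'} P(c \mid \mathrm{do}(t), s')\, P(s' \mid \mathrm{do}(t))\, P(t \mid s)
    && \text{probability axioms} \\
 &= \textstyle\sum_t \sum_{s'} P(c \mid t, s')\, P(s')\, P(t \mid s)
    && \text{Rules 2 and 3}
\end{align*}
```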


EFFECT OF WARM-UP ON INJURY (After Shrier & Platt, 2008) No, no!

DETERMINING CAUSES OF EFFECTS: A COUNTERFACTUAL VICTORY
• Your Honor! My client (Mr. A) died BECAUSE he used that drug.
• Court to decide if it is MORE PROBABLE THAN NOT that A would be alive BUT FOR the drug:
PN = P(A would be alive had he not taken the drug | A is dead and took the drug) > 0.50

THE ATTRIBUTION PROBLEM
Definition:
1. What is the meaning of PN(x, y): “the probability that event y would not have occurred if it were not for event x, given that x and y did in fact occur”?
Answer: PN(x, y) = P(Y_{x'} = y' | X = x, Y = y), computable from M.

Identification:
2. Under what conditions can PN(x, y) be learned from statistical data, i.e., observational, experimental, or combined?

ATTRIBUTION MATHEMATIZED (Tian and Pearl, 2000)
• Bounds, given combined nonexperimental and experimental data (P(y, x) and P(y_x) for all y and x)
• Identifiability under monotonicity (combined data)
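The bounds the slide refers to are the Tian-Pearl (2000) results, restated here from the literature:

```latex
% Bounds from combined data:
\max\!\left\{0,\; \frac{P(y) - P(y_{x'})}{P(x,y)}\right\}
\;\le\; PN \;\le\;
\min\!\left\{1,\; \frac{P(y'_{x'}) - P(x',y')}{P(x,y)}\right\}
% Point identification under monotonicity:
PN = \frac{P(y \mid x) - P(y \mid x')}{P(y \mid x)}
   + \frac{P(y \mid x') - P(y_{x'})}{P(x, y)}
```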

CAN FREQUENCY DATA DECIDE LEGAL RESPONSIBILITY?

                 Experimental          Nonexperimental
                 do(x)    do(x')       x        x'
Deaths (y)          16        14       2        28
Survivals (y')     984       986     998       972
Total            1,000     1,000   1,000     1,000

• Nonexperimental data: drug usage predicts longer life.
• Experimental data: the drug has a negligible effect on survival.
• Plaintiff: Mr. A is special. (1) He actually died. (2) He used the drug by choice.
• Court to decide (given both kinds of data): Is it more probable than not that A would be alive but for the drug?

SOLUTION TO THE ATTRIBUTION PROBLEM
• PN = 1: WITH PROBABILITY ONE, Mr. A would be alive but for the drug.
• Combined data tell more than each study alone.
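Plugging the courtroom table into the Tian-Pearl lower bound reproduces the slide's claim (a worked check, not on the slide):

```latex
P(y) = \tfrac{2+28}{2000} = 0.015,\quad
P(y_{x'}) = \tfrac{14}{1000} = 0.014,\quad
P(x,y) = \tfrac{2}{2000} = 0.001
\;\Longrightarrow\;
PN \;\ge\; \frac{0.015 - 0.014}{0.001} = 1 .
```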

MEDIATION: ANOTHER COUNTERFACTUAL TRIUMPH
Why decompose effects?
1. To understand how Nature works.
2. To comply with legal requirements.
3. To predict the effects of new types of interventions: signal re-routing and mechanism deactivation, rather than variable fixing.

COUNTERFACTUAL DEFINITION OF INDIRECT EFFECTS
Model: X → Z → Y with a direct X → Y path; z = f(x, u), y = g(x, z, u). No controlled indirect effect exists.
Indirect effect of X on Y: the expected change in Y when we keep X constant, say at x0, and let Z change to whatever value it would have attained had X changed to x1.
In linear models, IE = TE − DE.
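In standard counterfactual notation, that definition reads:

```latex
IE_{x_0, x_1}(Y) \;=\; E\big[\,Y_{x_0,\, Z_{x_1}}\big] \;-\; E\big[\,Y_{x_0}\big]
```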

POLICY IMPLICATIONS OF INDIRECT EFFECTS
What is the indirect effect of X on Y? The effect of Gender on Hiring if sex discrimination is eliminated.
Graph: X (GENDER) → Z (QUALIFICATION) → Y (HIRING), with the direct X → Y link ignored (deactivated).
Deactivating a link: a new type of intervention.

THE MEDIATION FORMULAS IN UNCONFOUNDED MODELS
Model: X → Z → Y with a direct X → Y path; z = f(x, u1), y = g(x, z, u2), u1 independent of u2.
The formulas yield the fraction of responses explained by mediation (sufficiency) and the fraction of responses owed to mediation (necessity).
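The formulas the slide displays are Pearl's Mediation Formulas; restated from the literature for the unconfounded case:

```latex
NDE = \sum_z \big[E(Y \mid x_1, z) - E(Y \mid x_0, z)\big]\, P(z \mid x_0)
\qquad
NIE = \sum_z E(Y \mid x_0, z)\, \big[P(z \mid x_1) - P(z \mid x_0)\big]
```

The fraction owed to mediation is then commonly taken as NIE/TE (necessity), and the fraction explained by mediation as (TE − NDE)/TE (sufficiency).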

Complete identification conditions are also available for confounded models with multiple mediators.

TRANSPORTABILITY OF KNOWLEDGE ACROSS DOMAINS (with E. Bareinboim)
1. A theory of causal transportability: When can causal relations learned from experiments be transferred to a different environment in which no experiment can be conducted?
2. A theory of statistical transportability: When can statistical information learned in one domain be transferred to a different domain in which (a) only a subset of variables can be observed, or (b) only a few samples are available?

MOTIVATION: WHAT CAN EXPERIMENTS IN LA TELL US ABOUT NYC?
Experimental study in LA: measured P(y | do(x)) and P(y | do(x), z), with Z = age.
Observational study in NYC: measured P*(x, y, z).
Needed: R = P*(y | do(x)).
Transport formula (calibration): P*(y | do(x)) = Σ_z P(y | do(x), z) P*(z).


TRANSPORT FORMULAS DEPEND ON THE STORY
[Three selection diagrams, (a)–(c), differing in where the S node (factors producing differences) attaches:]
a) Z represents age
b) Z represents language skill
c) Z represents a bio-marker
Which transport formula is licensed depends on the story behind S.

GOAL: AN ALGORITHM TO DETERMINE IF AN EFFECT IS TRANSPORTABLE
INPUT: an annotated causal graph, with S-nodes marking the factors that create differences.
OUTPUT:
1. Transportable or not?
2. Measurements to be taken in the experimental study
3. Measurements to be taken in the target population
4. A transport formula

TRANSPORTABILITY REDUCED TO CALCULUS
Theorem: A causal relation R is transportable from ∏ to ∏* if and only if it is reducible, using the rules of do-calculus, to an expression in which S is separated from do(·).

RESULT: AN ALGORITHM TO DETERMINE IF AN EFFECT IS TRANSPORTABLE
INPUT: an annotated causal graph, with S-nodes marking the factors that create differences.
OUTPUT:
1. Transportable or not?
2. Measurements to be taken in the experimental study
3. Measurements to be taken in the target population
4. A transport formula
5. Completeness (Bareinboim, 2012)

WHICH MODEL LICENSES THE TRANSPORT OF THE CAUSAL EFFECT X → Y?
S = external factors creating disparities.
[Six selection diagrams, (a)–(f), differing in where S attaches; some license transport (Yes) and others do not (No).]

STATISTICAL TRANSPORTABILITY (TRANSFER LEARNING)
Why should we transport statistical information? I.e., why not re-learn things from scratch?
1. Measurements are costly. Limit measurements to a subset V* of variables, called the “scope”.
2. Samples are scarce. Pooling samples from diverse populations will improve precision, if the differences can be filtered out.

STATISTICAL TRANSPORTABILITY
Definition (statistical transportability): A statistical relation R(P) is transportable from ∏ to ∏* over V* if R(P*) is identified from P, P*(V*), and D, where P*(V*) is the marginal distribution of P* over the subset of variables V*.
Example (diagram shown: S pointing at X in the chain X → Z → Y): R = P*(y | x) is transportable over V* = {X, Z}, i.e., R is estimable without re-measuring Y, via R = Σ_z P*(z | x) P(y | z).
Transfer learning: if few samples (N2) are available from ∏* and many samples (N1) from ∏, then estimating R = P*(y | x) by Σ_z P̂*(z | x) P̂(y | z) achieves much higher precision.

META-ANALYSIS OR MULTI-SOURCE LEARNING
Target population ∏*: R = P*(y | do(x)).
[Nine studies, (a)–(i), each with its own selection diagram over X, Z, W, Y; S-nodes mark where each study differs from ∏*.]

CAN WE GET A BIAS-FREE ESTIMATE OF THE TARGET QUANTITY?
Target population: R = P*(y | do(x)). Is R identifiable from (d) and (h)?
• R(∏*) is identifiable from studies (d) and (h).
• R(∏*) is not identifiable from studies (d) and (i).

FROM META-ANALYSIS TO META-SYNTHESIS
The problem: How to combine the results of several experimental and observational studies, each conducted on a different population and under a different set of conditions, so as to construct an aggregate measure of effect size that is "better" than any one study in isolation.

META-SYNTHESIS REDUCED TO CALCULUS
Theorem: Let {∏1, ∏2, …, ∏K} be a set of studies and {D1, D2, …, DK} their selection diagrams (relative to ∏*). A relation R(∏*) is "meta-estimable" if it can be decomposed into terms Qk such that each Qk is transportable from Dk.
Open problem: systematic decomposition.

BIAS VS. PRECISION IN META-SYNTHESIS
Principle 1: Calibrate estimands before pooling (to minimize bias).
Principle 2: Decompose into sub-relations before calibrating (to improve precision).
[Diagrams (a), (g), (h), (i), (d): studies are first calibrated, then pooled.]

BIAS VS. PRECISION IN META-SYNTHESIS
[The same diagrams; here sub-relations are pooled across studies and then composed.]

MISSING DATA: A SEEMINGLY STATISTICAL PROBLEM (Mohan & Pearl, 2012)
• Pervasive in every experimental science.
• Huge literature, powerful software industry, deeply entrenched culture.
• Current practices are based on a statistical characterization (Rubin, 1976) of a problem that is inherently causal.
• Consequence: like alchemy before Boyle and Dalton, the field craves (1) theoretical guidance and (2) performance guarantees.

ESTIMATE P(X, Y, Z)
[Table: 11 samples; columns X*, Y*, Z* show the observed values (m = missing) and columns Rx, Ry, Rz the corresponding missingness indicators.]

WHAT CAUSAL THEORY CAN DO FOR MISSING DATA
Q-1. What should the world be like for a given statistical procedure to produce the expected result?
Q-2. Can we tell from the postulated world whether any method can produce a bias-free result? How?
Q-3. Can we tell from the data whether the world does not work as postulated?
• None of these questions can be answered by a statistical characterization of the problem.
• All can be answered using causal models (existence, guarantees, algorithms, testable implications).

MISSING DATA: TWO PERSPECTIVES
Causal inference is a missing-data problem (Rubin, 2012). Missing data is a causal inference problem (Pearl, 2012).
Why is missingness a causal problem?
• Which mechanism causes missingness makes a difference in whether and how we can recover information from the data.
• Mechanisms require causal language to be properly described; statistics is not sufficient.
• Different causal assumptions lead to different routines for recovering information from data, even when the assumptions are indistinguishable by any statistical means.


ESTIMATE P(X, Y, Z)
[The table beside its complete cases: listwise deletion keeps only the fully observed rows (samples 1, 2, and 11).]
• The line-deletion estimate is generally biased.

ESTIMATE P(X, Y, Z)
[The same table, now paired with a missingness graph over X, Y, Z and the indicators Rx, Ry, Rz.]

ESTIMATE P(X, Y, Z)
[Three sub-tables trace a graph-guided deletion order: compute P(Y | Ry = 0) from every row with Y observed, then P(X | Y, Rx = 0, Ry = 0), then P(Z | X, Y, Rx = 0, Ry = 0, Rz = 0), and multiply the factors.]


ESTIMATE P(X, Y, Z)
[Two missingness graphs, (a) and (b), over X, Y, Z and Rx, Ry, Rz.]
• Statistically indistinguishable graphs, yet (a) permits recoverability and (b) does not.
• Consulting the wrong graph leads to the wrong deletion order and biases the estimates. A toy demonstration follows.
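A toy demonstration of the point (the model is assumed, not the slides'): X causes both Y and Y's missingness, so Y ⊥ Ry | X. Complete-case analysis is then biased, while the graph-guided factorization P(y) = Σ_x P(x) P(y | x, Ry = 0) recovers the truth.

```python
import random

random.seed(1)
rows = []
for _ in range(200_000):
    x = random.random() < 0.3
    y = random.random() < (0.9 if x else 0.2)
    ry = random.random() < (0.7 if x else 0.1)   # X also drives Y's missingness
    rows.append((x, None if ry else y))          # Y is masked when Ry = 1

# Complete-case estimate of P(Y=1): biased, because deletion over-drops X=1 rows.
cc = [y for _, y in rows if y is not None]
naive = sum(cc) / len(cc)

# Graph-guided estimate, licensed by Y ⊥ Ry | X:  P(y) = sum_x P(x) P(y | x, Ry=0).
est = 0.0
for xv in (True, False):
    px = sum(1 for x, _ in rows if x == xv) / len(rows)     # X is fully observed
    sel = [y for x, y in rows if x == xv and y is not None]
    est += px * sum(sel) / len(sel)

print(f"complete-case: {naive:.3f}   graph-guided: {est:.3f}   "
      f"truth: {0.3 * 0.9 + 0.7 * 0.2:.3f}")
```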

CONCLUSIONS
• Counterfactuals are the building blocks of scientific thought, free will, and moral behavior.
• The algorithmization of counterfactuals has benefited several problem areas in the empirical sciences, including policy evaluation, mediation analysis, generalizability, and credit/blame determination.
• This brings us a step closer to achieving cooperative behavior among computers and humans.

Thank you