Probabilistic Reasoning over Time
CMSC 671 – Fall 2010, Class #19 – Monday, November 8
Russell and Norvig, Chapter 15.1–15.2
Material from Lise Getoor, Jean-Claude Latombe, and Daphne Koller
Temporal Probabilistic Agent
[Diagram: agent perceives the environment through sensors and acts through actuators, over time steps t1, t2, t3, …]
Time and Uncertainty
The world changes; we need to track and predict it.
Examples: diabetes management, traffic monitoring.
Basic idea:
- Copy state and evidence variables for each time step
- Model uncertainty in change over time, incorporating new observations as they arrive
- Reminiscent of situation calculus, but for stochastic domains
X_t – set of unobservable state variables at time t (e.g., BloodSugar_t, StomachContents_t)
E_t – set of evidence variables at time t (e.g., MeasuredBloodSugar_t, PulseRate_t, FoodEaten_t)
Assumes discrete time steps.
States and Observations
The process of change is viewed as a series of snapshots, each describing the state of the world at a particular time.
Each time slice is represented by a set of random variables indexed by t:
1. the set of unobservable state variables X_t
2. the set of observable evidence variables E_t
The observation at time t is E_t = e_t for some set of values e_t.
The notation X_{a:b} denotes the set of variables from X_a to X_b.
Stationary Process / Markov Assumption
Markov assumption: X_t depends on only a bounded number of previous states X_i.
First-order Markov process: P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})
kth-order Markov process: X_t depends on the previous k time steps.
Sensor Markov assumption: P(E_t | X_{0:t}, E_{0:t-1}) = P(E_t | X_t)
That is, the agent's observations depend only on the actual current state of the world.
Assume a stationary process:
- The transition model P(X_t | X_{t-1}) and sensor model P(E_t | X_t) are time-invariant (i.e., they are the same for all t).
- That is, the changes in the world state are governed by laws that do not themselves change over time.
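These assumptions can be made concrete with a small sketch. The snippet below (not from the slides; the helper name `sample_trajectory` is mine) samples a trajectory from a stationary first-order Markov process with a sensor model, using the CPT values of the umbrella example that appears later in the deck:

```python
import random

# Stationary models: the same CPTs are used at every time step.
P_TRANS = {True: 0.7, False: 0.3}   # P(X_t = true | X_{t-1})
P_SENSOR = {True: 0.9, False: 0.2}  # P(E_t = true | X_t)

def sample_trajectory(steps, p_x0=0.5, seed=0):
    """Sample states x_1..x_t and observations e_1..e_t.
    Each x_t depends only on x_{t-1} (first-order Markov);
    each e_t depends only on x_t (sensor Markov assumption)."""
    rng = random.Random(seed)
    x = rng.random() < p_x0                      # draw x_0 from the prior
    states, evidence = [], []
    for _ in range(steps):
        x = rng.random() < P_TRANS[x]            # first-order Markov transition
        evidence.append(rng.random() < P_SENSOR[x])
        states.append(x)
    return states, evidence
```

Note that the loop consults only the single previous state `x`, never the full history: that is exactly what the first-order assumption buys us.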
Complete Joint Distribution
Given:
- Transition model: P(X_t | X_{t-1})
- Sensor model: P(E_t | X_t)
- Prior probability: P(X_0)
Then we can specify the complete joint distribution:
P(X_{0:t}, E_{1:t}) = P(X_0) ∏_{i=1}^{t} P(X_i | X_{i-1}) P(E_i | X_i)
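As a minimal sketch (my own helper name `joint_prob`, with the umbrella CPTs from the next slide standing in for the models), the joint distribution is just the product of the prior, one transition term per step, and one sensor term per step:

```python
# Models from the umbrella example (transition and sensor CPTs)
P_TRANS = {True: 0.7, False: 0.3}   # P(Rain_t = true | Rain_{t-1})
P_SENSOR = {True: 0.9, False: 0.2}  # P(Umbrella_t = true | Rain_t)

def joint_prob(states, observations, p_rain0=0.5):
    """P(x_{0:t}, e_{1:t}) = P(x0) * prod_i P(x_i | x_{i-1}) * P(e_i | x_i).
    states = [x0, ..., xt]; observations = [e1, ..., et] (booleans)."""
    p = p_rain0 if states[0] else 1 - p_rain0    # prior term P(x0)
    for prev, x, e in zip(states, states[1:], observations):
        p_x = P_TRANS[prev] if x else 1 - P_TRANS[prev]   # transition term
        p_e = P_SENSOR[x] if e else 1 - P_SENSOR[x]       # sensor term
        p *= p_x * p_e
    return p

# e.g. P(R0=t, R1=t, U1=t) = 0.5 * 0.7 * 0.9 = 0.315
```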
Example
[Network: Rain_{t-1} → Rain_t → Rain_{t+1}; each Rain_t → Umbrella_t]

Transition model P(R_t = true | R_{t-1}):
  R_{t-1} = true:  0.7
  R_{t-1} = false: 0.3

Sensor model P(U_t = true | R_t):
  R_t = true:  0.9
  R_t = false: 0.2

This should look a lot like a finite state automaton (since it is one...).
Inference Tasks
Filtering or monitoring: P(X_t | e_1, …, e_t) — compute the current belief state, given all evidence to date.
Prediction: P(X_{t+k} | e_1, …, e_t) — compute the probability of a future state.
Smoothing: P(X_k | e_1, …, e_t) — compute the probability of a past state (hindsight).
Most likely explanation: argmax_{x_1..x_t} P(x_1, …, x_t | e_1, …, e_t) — given a sequence of observations, find the sequence of states that is most likely to have generated those observations.
Examples
Filtering: What is the probability that it is raining today, given all of the umbrella observations up through today?
Prediction: What is the probability that it will rain the day after tomorrow, given all of the umbrella observations up through today?
Smoothing: What is the probability that it rained yesterday, given all of the umbrella observations through today?
Most likely explanation: If the umbrella appeared the first three days but not on the fourth, what is the most likely weather sequence to produce these umbrella sightings?
Filtering
We use recursive estimation to compute P(X_{t+1} | e_{1:t+1}) as a function of e_{t+1} and P(X_t | e_{1:t}).
We can write this as follows:
P(X_{t+1} | e_{1:t+1}) = α P(e_{t+1} | X_{t+1}) Σ_{x_t} P(X_{t+1} | x_t) P(x_t | e_{1:t})
This leads to a recursive definition:
f_{1:t+1} = FORWARD(f_{1:t}, e_{t+1})
QUIZLET: What is α?
Filtering Example
Transition model P(R_t = true | R_{t-1}): R_{t-1} = true: 0.7; R_{t-1} = false: 0.3
Sensor model P(U_t = true | R_t): R_t = true: 0.9; R_t = false: 0.2
What is the probability of rain on Day 2, given a uniform prior of rain on Day 0, U_1 = true, and U_2 = true?
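The example above can be worked with a minimal FORWARD step in Python (a sketch; the function and variable names are mine, the CPT values are the slide's). Each step predicts through the transition model, weights by the sensor model, and normalizes — the normalization constant is exactly the α from the previous slide:

```python
# Transition model: P(Rain_t = true | Rain_{t-1})
P_TRANS = {True: 0.7, False: 0.3}
# Sensor model: P(Umbrella_t = true | Rain_t)
P_SENSOR = {True: 0.9, False: 0.2}

def forward(f, umbrella):
    """One FORWARD step:
    f_{1:t+1} = alpha * P(e_{t+1}|X_{t+1}) * sum_{x_t} P(X_{t+1}|x_t) f_{1:t}(x_t).
    f is a dict {True: p, False: p} over Rain_t; umbrella is the new evidence."""
    pred = {}
    for r in (True, False):          # prediction: push belief through the transition model
        p_r = P_TRANS if r else {k: 1 - v for k, v in P_TRANS.items()}
        pred[r] = sum(p_r[prev] * f[prev] for prev in (True, False))
    # update: weight by the sensor model, then normalize (alpha = 1 / sum)
    unnorm = {r: (P_SENSOR[r] if umbrella else 1 - P_SENSOR[r]) * pred[r]
              for r in (True, False)}
    alpha = 1.0 / sum(unnorm.values())
    return {r: alpha * p for r, p in unnorm.items()}

f = {True: 0.5, False: 0.5}   # uniform prior over Rain_0
f = forward(f, True)          # Day 1, U1 = true -> f[True] ≈ 0.818
f = forward(f, True)          # Day 2, U2 = true -> f[True] ≈ 0.883
```

So the answer to the slide's question is about 0.883: the second umbrella observation pushes the belief in rain above the Day 1 value of 0.818.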