Lecture 2.2: Models for Motion Tracking
CMSC 818W, Spring 2019, Tu-Th 2:00-3:15 pm, CSI 2118
Nirupam Roy, Feb. 12th, 2019
Happy or sad?
Happy or sad? We judge from past experience: P(the dolphin is happy | experience)
Inference from sensor data: tracking arm motion. Knowledge about the arm's motion (probabilistic models) + partial information from sensors yields the arm's gesture and movement tracking.
Probability refresher
A few basic probability rules
Joint probability: P(A, B) = P(B, A). Likelihood of multiple events occurring simultaneously.
Conditional probability: P(A|B) = P(A, B)/P(B). Probability of an event, given that another event has occurred.
Marginal probability: P(B) = sum over i of P(B, A_i). Probability of the occurrence of an event, unconditioned on the others.
Chain rule: P(A, B, C) = P(A|B, C) P(B|C) P(C). Factorizes a joint probability into a product of conditionals.
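The rules above can be checked numerically on a toy joint distribution. The numbers below are hypothetical, chosen only to illustrate marginalization, conditioning, and the chain rule:

```python
# Toy joint distribution over two binary events A and B
# (hypothetical numbers, for illustration only).
P = {  # P[(a, b)] = P(A=a, B=b)
    (0, 0): 0.30, (0, 1): 0.10,
    (1, 0): 0.20, (1, 1): 0.40,
}

# Marginal probability: P(B=1) = sum over a of P(A=a, B=1)
P_B1 = sum(p for (a, b), p in P.items() if b == 1)   # 0.10 + 0.40 = 0.5

# Conditional probability: P(A=1 | B=1) = P(A=1, B=1) / P(B=1)
P_A1_given_B1 = P[(1, 1)] / P_B1                     # 0.40 / 0.50 = 0.8

# Chain rule check: P(A, B) = P(A | B) P(B)
assert abs(P[(1, 1)] - P_A1_given_B1 * P_B1) < 1e-12
print(P_B1, P_A1_given_B1)
```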
Bayes rule: P(A|B) = P(B|A) P(A) / P(B), where P(A|B) is the posterior, P(B|A) the likelihood, and P(A) the prior. It relates inverse representations of the probabilities concerning two events. Example: Healthy (A1), Diabetes (A2), Cancer (A3), Smoker (B).
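The smoker/disease example can be worked through with Bayes rule directly. All the priors and likelihoods below are hypothetical numbers invented for illustration, not values from the lecture:

```python
# Bayes rule on the slide's disease example.
# All probabilities here are hypothetical, for illustration only.
priors = {"healthy": 0.90, "diabetes": 0.07, "cancer": 0.03}      # P(A_i)
likelihood = {"healthy": 0.20, "diabetes": 0.25, "cancer": 0.60}  # P(smoker | A_i)

# P(B): law of total probability over the disjoint conditions A_i
p_smoker = sum(priors[a] * likelihood[a] for a in priors)

# Posterior: P(A_i | smoker) = P(smoker | A_i) P(A_i) / P(smoker)
posterior = {a: likelihood[a] * priors[a] / p_smoker for a in priors}
print(posterior)
```

Observing "smoker" raises the posterior probability of cancer above its prior, exactly the inverse-representation relation the slide points at.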
Markov Model: a sequence of states (Sunny, Rainy) over time t1, t2, ..., t6, with transition probabilities P(sunny after sunny), P(rainy after sunny), P(sunny after rainy), and P(rainy after rainy). Markov assumption: the future depends on the present only, not on the past. Each day also carries observable quantities such as temperature, humidity, and wind.
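The weather chain can be sketched as a transition matrix and iterated forward in time. The transition probabilities below are hypothetical placeholders for the P(sunny after sunny) etc. on the slide:

```python
import numpy as np

# Two-state weather Markov chain; transition probabilities are hypothetical.
# Row = current state, column = next state; each row sums to 1.
states = ["sunny", "rainy"]
T = np.array([[0.8, 0.2],    # P(sunny -> sunny), P(sunny -> rainy)
              [0.4, 0.6]])   # P(rainy -> sunny), P(rainy -> rainy)

# Markov property: tomorrow's distribution depends only on today's.
p = np.array([1.0, 0.0])     # start from a sunny day
for _ in range(50):          # repeated transitions converge to the
    p = p @ T                # stationary distribution of the chain
print(dict(zip(states, p.round(3))))
```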
Hidden Markov Model
Hidden Markov Model: toy robot localization example. Find the location of the robot (hidden information) among grid cells S1, S2, S3, .... Observations = sensor measurements.
(Figure: probability distribution over grid locations, from 0 to 1, sharpening from t=0 through t=5 as sensor measurements arrive.)
Hidden Markov Model: toy robot localization example. State = location on the grid = S_i. Observation = sensor measurement = M_i, which depends on the current state S_i only (emission). States change over time (transition): S_i, S_{i+1}, S_{i+2}, ....
Hidden Markov Model: definition. Hidden states S1, S2, S3, S4 emit observations M1, M2, M3, M4.
Hidden Markov Model: definition. Transition: S1 to S2; emission: S1 to M1. Markov assumption: the transition probability depends on the current state only. Output independence assumption: the output/emission probability depends on the current state only.
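An HMM is fully specified by a transition matrix, an emission matrix, and an initial distribution. A minimal sketch for a two-state version of the robot example, with all probabilities hypothetical, generates a sequence using exactly the two assumptions above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny HMM sketch; every probability below is hypothetical.
A = np.array([[0.7, 0.3],      # transition: P(S_{t+1} = j | S_t = i)
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],      # emission:   P(M_t = k | S_t = i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution

# Sample a state/observation sequence. The emission at time t uses
# only S_t (output independence); the next state uses only S_t (Markov).
s = rng.choice(2, p=pi)
states, obs = [], []
for _ in range(5):
    states.append(int(s))
    obs.append(int(rng.choice(2, p=B[s])))
    s = rng.choice(2, p=A[s])
print(states, obs)
```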
Hidden Markov Model: definition. Probability of a sequence of hidden states, given a sequence of observations: P(S_1, ..., S_T | M_1, ..., M_T) = P(M_1, ..., M_T | S_1, ..., S_T) P(S_1, ..., S_T) / P(M_1, ..., M_T).
By the chain rule, the joint probability factorizes as P(M_1, ..., M_T, S_1, ..., S_T) = product over t of P(M_t | M_1, ..., M_{t-1}, S_1, ..., S_t) P(S_t | M_1, ..., M_{t-1}, S_1, ..., S_{t-1}).
Applying the two assumptions (the observation depends on the current state only; the future state depends on the current state only), this simplifies to P(M_1, ..., M_T, S_1, ..., S_T) = product over t of P(M_t | S_t) P(S_t | S_{t-1}).
Probability of a given observation/measurement sequence, by the marginal probability rule: P(M_1, ..., M_T) = sum over all hidden-state sequences S_1, ..., S_T of P(M_1, ..., M_T, S_1, ..., S_T). The summation runs over all possible combinations of hidden states: for N hidden states and a sequence of T observations, there are N^T different combinations.
Hidden Markov Model: efficient algorithms avoid this exponential sum.
• Forward-backward algorithm
• Viterbi algorithm
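The forward algorithm illustrates how the N^T sum collapses to O(N^2 T) work: it keeps one running vector over states instead of enumerating sequences. A sketch for a two-state HMM with hypothetical parameters, checked against brute-force enumeration:

```python
import numpy as np
from itertools import product

# Hypothetical 2-state HMM parameters.
A = np.array([[0.7, 0.3], [0.3, 0.7]])   # transition P(S_{t+1} | S_t)
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission   P(M_t | S_t)
pi = np.array([0.5, 0.5])                # initial distribution

def forward(obs):
    """P(M_1..M_T) in O(N^2 T): recurse over states, not sequences."""
    alpha = pi * B[:, obs[0]]            # alpha_1(i) = pi_i P(M_1 | S_1 = i)
    for m in obs[1:]:
        alpha = (alpha @ A) * B[:, m]    # alpha_t(j) = sum_i alpha_{t-1}(i) A_ij * P(M_t | j)
    return alpha.sum()

def brute_force(obs):
    """P(M_1..M_T) by summing over all N^T hidden-state sequences."""
    total = 0.0
    for seq in product(range(2), repeat=len(obs)):
        p = pi[seq[0]] * B[seq[0], obs[0]]
        for t in range(1, len(obs)):
            p *= A[seq[t - 1], seq[t]] * B[seq[t], obs[t]]
        total += p
    return total

obs = [0, 1, 1, 0]
print(forward(obs), brute_force(obs))    # the two agree
```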