Al-Imam Mohammad Ibn Saud University, CS 433 Modeling and Simulation
Lecture 11: Continuous-Time Markov Chains
01 May 2009, Dr. Anis Koubâa
Goals for Today
§ Understand the Markov property in the continuous case
§ Understand the difference between continuous-time and discrete-time Markov Chains
§ Learn how to use Continuous Markov Chains for modelling stochastic processes
“Discrete Time” versus “Continuous Time”
§ Discrete time: fixed time step (t = 1); events occur at known points in time (t = 0, 1, 2, 3, 4, ...).
§ Continuous time: variable time steps (e.g., t1 = u − s, t2 = v − u, t3 = t − v for instants s < u < v < t); events can occur at any point in time.
Definition (Wikipedia): Continuous-Time Markov Chains
In probability theory, a Continuous-Time Markov Chain (CTMC) is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values in a set called the state space. The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state at time t.
Definition 1: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a Continuous-Time Markov Chain (CTMC) if for all 0 ≤ s ≤ t and non-negative integers i, j, x(u), such that 0 ≤ u < s,
P[X(t) = j | X(s) = i, X(u) = x(u) for 0 ≤ u < s] = P[X(t) = j | X(s) = i]
In addition, if this probability depends only on the duration t − s (and not on s and t separately), then the CTMC has stationary transition probabilities:
P[X(t) = j | X(s) = i] = Pij(t − s)
Timeline: X(u) = x(u) for u < s is the past, X(s) = i is the present, and X(t) = j is the future, reached after a time duration t − s.
Differences between Continuous-Time and Discrete-Time Markov Chains
§ Time: DTMC uses discrete instants tk, k ∈ ℕ; CTMC uses real-valued times s, t ∈ ℝ+.
§ Transient transition probability: DTMC: Pij(k) for the time interval [k, k+1]; CTMC: Pij(s, t) for the time interval [s, t].
§ Stationary transition probability: DTMC: Pij(1) = Pij, for a fixed time unit equal to 1; CTMC: Pij(t) for a time duration t = t − s, dependent on the duration t.
§ Transition probability to the same state: DTMC: Pii can be different from 0; CTMC: the jump probabilities satisfy pii = 0 (each transition moves the chain to a different state).
§ Event times: in discrete time, events occur at known points in time; in continuous time, events can occur at any point in time.
Definition 2: Continuous-Time Markov Chains
A stochastic process {X(t), t ≥ 0} is a Continuous-Time Markov Chain (CTMC) if
§ The amount of time spent in state i before making a transition to a different state is exponentially distributed with a rate parameter νi,
§ When the process leaves state i, it enters state j with probability pij, where pii = 0 and Σj pij = 1,
§ All transitions and times are independent (in particular, the transition probability out of a state is independent of the time spent in the state).
Summary: the CTMC process moves from state to state according to a discrete-time Markov Chain (the embedded jump chain with probabilities pij), and the time spent in each state is exponentially distributed.
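Definition 2 translates directly into a simulation recipe: draw an exponential holding time, then draw the next state from the jump probabilities. The sketch below is not from the slides; the 3-state chain, its rates νi, and the jump probabilities pij are arbitrary illustrative values.

```python
import random

# Illustrative 3-state CTMC (assumed values, not from the lecture):
# nu[i] is the exponential rate out of state i, p[i][j] the jump probability.
nu = {0: 1.0, 1: 2.0, 2: 0.5}
p = {0: {1: 0.7, 2: 0.3},
     1: {0: 0.5, 2: 0.5},
     2: {0: 1.0}}            # note: p[i][i] = 0, so every jump changes state

def simulate_ctmc(start, t_end, rng):
    """Sample one path [(jump_time, state), ...] of the CTMC up to t_end."""
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        t += rng.expovariate(nu[state])          # exponential holding time
        if t >= t_end:
            break
        targets, probs = zip(*p[state].items())  # jump per the embedded chain
        state = rng.choices(targets, weights=probs)[0]
        path.append((t, state))
    return path

path = simulate_ctmc(0, 50.0, random.Random(42))
```

The path alternates random holding times with jumps of the embedded discrete-time chain, which is exactly the two-layer structure stated in the summary above.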
Differences between DISCRETE and CONTINUOUS
§ DTMC process: the state changes at fixed, known time steps.
§ CTMC process: the state changes after random, exponentially distributed holding times.
Summary: the CTMC process moves from state to state according to an embedded discrete-time Markov Chain, and the time spent in each state is exponentially distributed.
Five-Minute Break
You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.
Chapman-Kolmogorov: Transition Function
§ Define the Transition Function (analogous to the Transition Probability in a DTMC):
pij(s, t) = P[X(t) = j | X(s) = i], for s ≤ t
§ Using the Markov (memoryless) property, the Chapman-Kolmogorov equation holds for any intermediate time u with s ≤ u ≤ t:
pij(s, t) = Σk pik(s, u) pkj(u, t)
Time-Homogeneous Case: Outline
§ Transition Matrix
§ State Holding Time
§ Transition Rate
§ Transition Probability
Homogeneous Case: Transition Rates
In the homogeneous case, the transition rate matrix Q = [qij] is defined by:
§ qij (for i ≠ j) is the transition rate at which the chain enters state j from state i,
§ νi = −qii is the transition rate at which the chain leaves state i.
Comparison:
§ Discrete Markov Chain: the transition from i to j is described by the probability Pij, and the transition time is deterministic (one transition per slot).
§ Continuous Markov Chain: the transition from i to j is described by the rate qij = νi·pij, where pij is the jump probability, qij is the input rate into state j from state i, and νi is the total output rate from state i toward all its neighbor states; the transition time is random (exponentially distributed).
Transition Probability Matrix in the Homogeneous Case
§ In the homogeneous case, the transition function depends only on the elapsed time: pij(s, t) = pij(t − s). Thus, if P(t) is the transition matrix after a time period t, the Chapman-Kolmogorov equation becomes P(t + s) = P(t) P(s).
§ At t = 0 the matrix is the identity: pij(0) = 1 if i = j and 0 otherwise; the transition rates qij give the instantaneous behavior of the transition function at t = 0.
Next: State Holding Time
Two-Minute Break
You are free to discuss the previous slides with your classmates, to take a short rest, or to ask questions.
State Holding and Transition Time
In a CTMC, the process makes a transition from one state to another after it has spent an amount of time in the state it starts from. This amount of time is defined as the state holding time.

Theorem (State Holding Time of a CTMC): the state holding time Ti := inf {t > 0 : X(t) ≠ i | X(0) = i} in a state i of a Continuous-Time Markov Chain
§ satisfies the memoryless property,
§ is exponentially distributed with parameter νi.

Theorem (Transition Time in a CTMC): the time Tij := inf {t > 0 : X(t) = j | X(0) = i} spent in state i before a transition to state j is exponentially distributed with parameter qij.
State Holding Time: Proofs
§ Suppose our continuous-time Markov Chain has just arrived in state i. Define the random variable Ti to be the length of time the process spends in state i before moving to a different state. We call Ti the holding time in state i.
§ The Markov property implies that the distribution of how much longer you will be in a given state i is independent of how long you have already been there:
P[Ti > s + t | Ti > s] = P[Ti > t]
§ Proof (1) (by contradiction): suppose it is time s, you are in state i, and P[Ti > s + t | Ti > s] ≠ P[Ti > t], i.e., the amount of time you have already been in state i is relevant in predicting how much longer you will be there. Then for any time r < s, whether or not you were in state i at time r is relevant in predicting whether you will be in state i or a different state j at some future time s + t. Thus
P[X(s + t) = j | X(s) = i, X(r) = i] ≠ P[X(s + t) = j | X(s) = i],
which violates the Markov property.
§ Proof (2): the only continuous distribution satisfying the memoryless property is the exponential distribution. Thus the result follows, with parameter νi, the total rate out of state i.
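The memoryless identity used in the proof can be checked numerically for the exponential distribution. This small sketch is not from the slides; the rate ν = 1.5 and the times s, t are arbitrary illustrative values.

```python
import math

def survival(nu, t):
    """P[T > t] for an exponential holding time with rate nu: exp(-nu*t)."""
    return math.exp(-nu * t)

nu, s, t = 1.5, 0.8, 2.0  # illustrative rate and times (assumed values)

# Memoryless property: P[T > s + t | T > s] = P[T > s + t] / P[T > s]
conditional = survival(nu, s + t) / survival(nu, s)
unconditional = survival(nu, t)
```

Both quantities reduce to exp(−νt), which is exactly the statement that the time already spent in the state is irrelevant.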
Example: Computer System
Assume a computer system where jobs arrive according to a Poisson process with rate λ. Each job is processed using a First-In First-Out (FIFO) policy. The processing time of each job is exponential with rate μ. The computer has a buffer that can store up to two jobs waiting for processing. Jobs that find the buffer full are lost.
Example: Computer System (Questions)
§ Draw the state transition diagram.
§ Find the rate transition matrix Q.
§ Find the state transition matrix P.
Example
The system has four states n = 0, 1, 2, 3 (the number of jobs in the system: one in service plus up to two in the buffer). Arrivals (rate λ) move the chain from n to n + 1, and departures (rate μ) move it from n to n − 1.

The rate transition matrix is given by
Q = [ −λ      λ       0       0
       μ   −(λ+μ)     λ       0
       0      μ    −(λ+μ)     λ
       0      0       μ      −μ ]

The state transition matrix is given by
P = [ 0         1        0         0
      μ/(λ+μ)   0        λ/(λ+μ)   0
      0         μ/(λ+μ)  0         λ/(λ+μ)
      0         0        1         0 ]
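The jump-chain matrix P can be derived mechanically from Q via pij = qij / νi with νi = −qii. A small sketch, not part of the original slides; the values λ = 2.0 and μ = 3.0 are arbitrary illustrative rates.

```python
lam, mu = 2.0, 3.0  # illustrative arrival and service rates (assumed)

# Rate transition matrix Q for states 0..3 (number of jobs in the system)
Q = [[-lam,        lam,         0.0,  0.0],
     [  mu, -(lam + mu),        lam,  0.0],
     [ 0.0,          mu, -(lam + mu), lam],
     [ 0.0,         0.0,          mu, -mu]]

# Jump-chain probabilities: p[i][j] = q[i][j] / nu_i, with nu_i = -q[i][i]
P = [[(0.0 if i == j else Q[i][j] / -Q[i][i]) for j in range(4)]
     for i in range(4)]
```

Each row of Q sums to 0 (rates out balance the diagonal) while each row of P sums to 1, which is a quick sanity check on both matrices.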
Transient State Probabilities
State Probabilities and Transient Analysis
§ Similar to the discrete-time case, we define the state probabilities
πj(t) = P[X(t) = j]
§ In vector form: π(t) = [π0(t), π1(t), ...]
§ With initial probabilities π(0) = [π0(0), π1(0), ...]
§ Using our previous notation (for a homogeneous MC), the state probabilities satisfy
dπ(t)/dt = π(t) Q, with solution π(t) = π(0) e^{Qt}
Obtaining a general closed-form solution is not easy!
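Even when a closed form is out of reach, π(t) can be approximated numerically by taking many small steps π(t + h) ≈ π(t)(I + hQ). A minimal sketch, not from the slides, using the computer-system rate matrix with illustrative rates λ = 2.0 and μ = 3.0.

```python
lam, mu = 2.0, 3.0  # illustrative rates (assumed values)
Q = [[-lam,        lam,         0.0,  0.0],
     [  mu, -(lam + mu),        lam,  0.0],
     [ 0.0,          mu, -(lam + mu), lam],
     [ 0.0,         0.0,          mu, -mu]]

def transient(pi0, Q, t, steps=100_000):
    """Approximate pi(t) = pi(0) exp(Qt) with Euler steps pi <- pi + h*(pi Q)."""
    h = t / steps
    n = len(pi0)
    pi = list(pi0)
    for _ in range(steps):
        pi = [pi[j] + h * sum(pi[i] * Q[i][j] for i in range(n))
              for j in range(n)]
    return pi

# Start empty (state 0 with probability 1) and evolve for 5 time units
pi_t = transient([1.0, 0.0, 0.0, 0.0], Q, t=5.0)
```

Because each row of Q sums to zero, every Euler step preserves the total probability mass, so the result remains a valid distribution.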
Steady State Probabilities
Steady-State Analysis
§ Often we are interested in the “long-run” probabilistic behavior of the Markov Chain, i.e., the limits
πj = lim_{t→∞} P[X(t) = j]
§ These are referred to as steady-state probabilities, equilibrium state probabilities, or stationary state probabilities.
§ As with the discrete-time case, we need to address the following questions:
§ Under what conditions do the limits exist?
§ If they exist, do they form legitimate probabilities?
§ How can we evaluate these limits?
Steady-State Analysis
§ Theorem: in an irreducible Continuous-Time Markov Chain consisting of positive recurrent states, a unique stationary state probability vector π with
πj = lim_{t→∞} P[X(t) = j]
exists.
§ These probabilities are independent of the initial state probabilities and can be obtained by solving
πQ = 0 and Σj πj = 1
Example
For the previous computer-system example (states 0, 1, 2, 3, arrival rate λ, service rate μ), what are the steady-state probabilities?
Solve πQ = 0 together with Σj πj = 1, i.e.:
−λπ0 + μπ1 = 0
λπ0 − (λ + μ)π1 + μπ2 = 0
λπ1 − (λ + μ)π2 + μπ3 = 0
λπ2 − μπ3 = 0
π0 + π1 + π2 + π3 = 1
Example
The solution is obtained with ρ = λ/μ:
π1 = ρπ0, π2 = ρ²π0, π3 = ρ³π0
and, after normalization,
π0 = 1 / (1 + ρ + ρ² + ρ³), πi = ρ^i π0 for i = 1, 2, 3
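The closed form πi = ρ^i π0 can be verified against the balance equations πQ = 0. A quick numerical check, not from the slides, using illustrative rates λ = 2.0 and μ = 3.0.

```python
lam, mu = 2.0, 3.0  # illustrative rates (assumed values)
rho = lam / mu

Q = [[-lam,        lam,         0.0,  0.0],
     [  mu, -(lam + mu),        lam,  0.0],
     [ 0.0,          mu, -(lam + mu), lam],
     [ 0.0,         0.0,          mu, -mu]]

# Closed-form steady state: pi_i = rho**i * pi_0, normalized to sum to 1
pi0 = 1.0 / sum(rho**i for i in range(4))
pi = [pi0 * rho**i for i in range(4)]

# Balance check: every component of the row vector pi * Q should be ~0
residual = [sum(pi[i] * Q[i][j] for i in range(4)) for j in range(4)]
```

With λ = 2 and μ = 3 this gives ρ = 2/3 and π0 = 27/65, and the residual vector confirms that the geometric solution solves πQ = 0.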
Uniformization of Markov Chains
Uniformization of Markov Chains
§ In general, discrete-time models are easier to work with, and the computers needed to solve such models operate in discrete time.
§ Thus, we need a way to turn a continuous-time Markov Chain into a discrete-time one.
§ Uniformization procedure: recall that the total rate out of state i is νi = −qii. Pick a uniform rate γ such that γ ≥ νi for all states i. The difference γ − νi corresponds to a “fictitious” event that returns the MC back to state i (a self-loop).
Uniformization of Markov Chains
§ Uniformization procedure (continued): let PUij be the transition probability from state i to state j in the discrete-time uniformized Markov Chain. Then
PUij = qij / γ for i ≠ j, and PUii = 1 − νi / γ
§ Uniformization replaces each transition rate qij by the probability qij / γ and adds a self-loop at state i carrying the leftover probability 1 − νi / γ.
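In matrix form the uniformized chain is P^U = I + Q/γ. A minimal sketch, not from the slides, applied to the computer-system rate matrix with illustrative rates λ = 2.0, μ = 3.0 and γ chosen as the largest exit rate.

```python
lam, mu = 2.0, 3.0  # illustrative rates (assumed values)
Q = [[-lam,        lam,         0.0,  0.0],
     [  mu, -(lam + mu),        lam,  0.0],
     [ 0.0,          mu, -(lam + mu), lam],
     [ 0.0,         0.0,          mu, -mu]]

gamma = max(-Q[i][i] for i in range(4))  # uniform rate >= every nu_i

# Uniformized DTMC: off-diagonal q_ij/gamma, diagonal 1 - nu_i/gamma
PU = [[(1.0 + Q[i][j] / gamma) if i == j else (Q[i][j] / gamma)
       for j in range(4)]
      for i in range(4)]
```

States with νi = γ get a zero self-loop probability, while slower states keep the fictitious self-transition, so every row of PU is a valid probability distribution.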
End of Chapter