
Game Theory in Wireless and Communication Networks: Theory, Models, and Applications
Lecture 10: Stochastic Games
Zhu Han, Dusit Niyato, Walid Saad, and Tamer Basar

Overview of Lecture Notes
- Introduction to Game Theory: Lecture 1, book 1
- Non-cooperative Games: Lecture 1, Chapter 3, book 1
- Bayesian Games: Lecture 2, Chapter 4, book 1
- Differential Games: Lecture 3, Chapter 5, book 1
- Evolutionary Games: Lecture 4, Chapter 6, book 1
- Cooperative Games: Lecture 5, Chapter 7, book 1
- Auction Theory: Lecture 6, Chapter 8, book 1
- Matching Games: Lecture 7, Chapter 2, book 2
- Contract Theory: Lecture 8, Chapter 3, book 2
- Learning in Games: Lecture 9, Chapter 6, book 2
- Stochastic Games: Lecture 10, Chapter 4, book 2
- Games with Bounded Rationality: Lecture 11, Chapter 5, book 2
- Equilibrium Programming with Equilibrium Constraints: Lecture 12, Chapter 7, book 2
- Zero-Determinant Strategies: Lecture 13, Chapter 8, book 2
- Mean Field Games: Lecture 14, book 2
- Network Economy: Lecture 15, book 2

Overview of Stochastic Games
- Stochastic games capture dynamic interactions among players whose decisions affect not only one another, as in conventional static games, but also the so-called state of the game, which determines the individual payoffs reaped by the players.
- They model many engineering situations in which the system is governed by stochastic, dynamic states, such as a wireless channel or the dynamics of a power system.
- This lecture gives a brief overview of the basics of stochastic games, providing the fundamental tools needed to address such games.
- For details: T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, SIAM Classics in Applied Mathematics, Philadelphia, PA, USA, Jan. 1999.

Definition
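The formal definition on this slide did not survive extraction. As a standard reference point (Shapley's formulation; the symbols below are a reconstruction, not recovered slide text), a stochastic game is a tuple:

```latex
% A stochastic game (standard form; reconstructed, not taken from the slide):
\[
  \Gamma = \bigl( \mathcal{N},\; \mathcal{S},\; \{\mathcal{A}_i\}_{i \in \mathcal{N}},\; P,\; \{u_i\}_{i \in \mathcal{N}} \bigr)
\]
% \mathcal{N} : finite set of players
% \mathcal{S} : set of states
% \mathcal{A}_i : action set of player i
% P(s' \mid s, a) : state-transition probability under joint action a
% u_i(s, a) : stage payoff of player i in state s under joint action a
```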

Stochastic Game Procedure
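The procedure diagram on this slide was not recoverable. The standard play loop can be sketched in Python; every state, payoff, and transition probability below is an illustrative assumption, not taken from the slides:

```python
import random

# Illustrative two-player stochastic game; all states, payoffs, and transition
# probabilities below are made-up assumptions, not taken from the slides.
# In each state the players play a small matrix game, then the state transitions.

# Stage payoffs: PAYOFF[state][(a1, a2)] -> (payoff to player 1, payoff to player 2)
PAYOFF = {
    "good": {(0, 0): (3, 3), (0, 1): (0, 4), (1, 0): (4, 0), (1, 1): (1, 1)},
    "bad":  {(0, 0): (1, 1), (0, 1): (0, 2), (1, 0): (2, 0), (1, 1): (0, 0)},
}

# Probability of moving to "good", as a function of state and joint action
# (the transition depends on BOTH players' actions, per the general definition).
P_GOOD = {
    "good": {(0, 0): 0.9, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.2},
    "bad":  {(0, 0): 0.6, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.1},
}

def play(stages, policy1, policy2, seed=0):
    """Standard play loop: observe state, choose actions, collect payoffs, transition."""
    rng = random.Random(seed)
    state = "good"
    totals = [0.0, 0.0]
    for _ in range(stages):
        joint = (policy1(state), policy2(state))
        r1, r2 = PAYOFF[state][joint]
        totals[0] += r1
        totals[1] += r2
        state = "good" if rng.random() < P_GOOD[state][joint] else "bad"
    return totals

# Example: both players always play action 0 (stationary strategies).
print(play(100, lambda s: 0, lambda s: 0))
```

The key point the loop makes concrete: unlike a repeated game, the matrix being played changes with the state, and the state evolves as a function of the joint action.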

Notes
1. For the special case in which there is only a single stage, the game reduces to a conventional static game in strategic form.
2. In general, a stochastic game can have an infinite number of stages; the case of finitely many stages can also be captured in the presence of an absorbing state.
3. In a stochastic game, the payoffs at every stage depend on the state and change from one state to another. This is in stark contrast to repeated games, in which the same matrix game is played at every stage (i.e., a repeated game has only one state).
4. For the case in which there is only one player (i.e., a centralized approach), the stochastic game reduces to a Markov decision process.
5. The transition probability depends on both the state and the actions of the players. When it depends on the action of only one player, the game is known as a single-controller stochastic game.
6. The evolution of the state can follow a Markov process or differential equations.
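Note 4's single-player reduction can be made concrete with textbook value iteration on a small Markov decision process; all numbers below are illustrative assumptions, not from the slides:

```python
# Value iteration for the single-player special case (an MDP), per note 4.
# Toy two-state, two-action MDP; all numbers are illustrative assumptions.
STATES = [0, 1]
ACTIONS = [0, 1]
GAMMA = 0.9  # discount factor

# REWARD[s][a] and TRANS[s][a][s'] = P(s' | s, a)
REWARD = [[1.0, 0.0], [0.0, 2.0]]
TRANS = [
    [[0.8, 0.2], [0.1, 0.9]],  # from state 0
    [[0.5, 0.5], [0.2, 0.8]],  # from state 1
]

def value_iteration(tol=1e-8):
    """Iterate the Bellman optimality operator until the values stop changing."""
    v = [0.0, 0.0]
    while True:
        new_v = [
            max(
                REWARD[s][a] + GAMMA * sum(TRANS[s][a][t] * v[t] for t in STATES)
                for a in ACTIONS
            )
            for s in STATES
        ]
        if max(abs(new_v[s] - v[s]) for s in STATES) < tol:
            return new_v
        v = new_v

print(value_iteration())
```

With more than one player the single max over actions is no longer well defined, which is exactly why stochastic games need equilibrium notions instead of a single optimal policy.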

Definitions

Payoff
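The payoff expressions on this slide were lost in extraction. The standard discounted criterion, given here as a reconstruction in the usual notation, is:

```latex
% Discounted expected payoff of player i under strategy profile \pi:
\[
  J_i(\pi) = \mathbb{E}^{\pi}\!\left[ \sum_{t=0}^{\infty} \beta^{t}\, u_i(s_t, a_t) \right],
  \qquad 0 < \beta < 1,
\]
% where s_t is the state at stage t, a_t the joint action, and \beta the discount factor.
```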

ε-Nash equilibrium
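The slide's statement did not survive extraction. The standard definition, written as a reconstruction with J_i denoting player i's expected payoff, is:

```latex
% \pi^* is an \varepsilon-Nash equilibrium if no unilateral deviation gains more than \varepsilon:
\[
  J_i\bigl(\pi_i^{*}, \pi_{-i}^{*}\bigr) \;\ge\; J_i\bigl(\pi_i, \pi_{-i}^{*}\bigr) - \varepsilon
  \qquad \forall\, \pi_i,\; \forall\, i \in \mathcal{N},
\]
% with \varepsilon \ge 0; the case \varepsilon = 0 recovers the Nash equilibrium.
```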

Properties
- Two-player, zero-sum stochastic games
- Theorems

Stochastic Markov Game
- A stochastic game is a collection of normal-form games that the agents play repeatedly.
- The particular game played at any time depends probabilistically on:
  - the previous game played
  - the actions of the agents in that game
- The states are the games; the transition labels are joint action-payoff pairs.
(Slides adapted from Dana Nau.)

Stochastic Markov Game

Histories and Rewards
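The slide body was lost in extraction. A standard reconstruction of the history and reward objects, in the usual notation, is:

```latex
% History up to stage t: the sequence of visited states and joint actions.
\[
  h_t = (s_0, a_0, s_1, a_1, \ldots, s_{t-1}, a_{t-1}, s_t)
\]
% Each player i evaluates a play by its discounted sum of stage rewards:
\[
  J_i = \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \beta^{t}\, u_i(s_t, a_t) \right],
  \qquad 0 < \beta < 1.
\]
```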

Strategies
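The classification that typically appears here was lost in extraction; the standard strategy classes, given as a reconstruction, are:

```latex
% Standard strategy classes (\Delta(\mathcal{A}_i) = mixed actions of player i):
\begin{align*}
  \text{behavioral: } & \pi_i : h_t \mapsto \Delta(\mathcal{A}_i) && \text{(may depend on the full history)} \\
  \text{Markov: }     & \pi_i : (s_t, t) \mapsto \Delta(\mathcal{A}_i) && \text{(current state and stage only)} \\
  \text{stationary: } & \pi_i : s \mapsto \Delta(\mathcal{A}_i) && \text{(current state only)}
\end{align*}
```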

Equilibria

Two-Player Zero-Sum Stochastic Games
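The slide content here did not survive extraction. For the two-player zero-sum case, the value can be computed by Shapley's value iteration; the sketch below uses an illustrative toy game (all payoffs and transition probabilities are assumptions), while the operator itself is the standard one:

```python
# Shapley's value iteration for a two-player zero-sum stochastic game:
#   V(s) <- value of the matrix game [ u(s, a) + beta * E[V(s') | s, a] ].
# The game data below (states, row-player payoffs, transitions) are
# illustrative assumptions, not taken from the slides.

STATES = [0, 1]
BETA = 0.8  # discount factor

# U[s][(i, j)]: stage payoff to the row (maximizing) player.
U = {
    0: {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 1): 1},   # matching pennies
    1: {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1},
}
# P[s][(i, j)]: distribution over next states under joint action (i, j).
P = {
    0: {(0, 0): [0.7, 0.3], (0, 1): [0.4, 0.6], (1, 0): [0.4, 0.6], (1, 1): [0.7, 0.3]},
    1: {(0, 0): [0.5, 0.5], (0, 1): [0.9, 0.1], (1, 0): [0.9, 0.1], (1, 1): [0.3, 0.7]},
}

def val2x2(M):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    lower = max(min(M[0]), min(M[1]))                          # maximin
    upper = min(max(M[0][0], M[1][0]), max(M[0][1], M[1][1]))  # minimax
    if upper - lower < 1e-12:                                  # pure saddle point
        return lower
    # mixed-strategy value when no saddle point exists
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return det / (M[0][0] + M[1][1] - M[0][1] - M[1][0])

def shapley(tol=1e-9):
    """Iterate the Shapley operator to its fixed point (a beta-contraction)."""
    v = {s: 0.0 for s in STATES}
    while True:
        new_v = {}
        for s in STATES:
            M = [[U[s][(i, j)] + BETA * sum(P[s][(i, j)][t] * v[t] for t in STATES)
                  for j in (0, 1)] for i in (0, 1)]
            new_v[s] = val2x2(M)
        if max(abs(new_v[s] - v[s]) for s in STATES) < tol:
            return new_v
        v = new_v

print(shapley())
```

Because the operator is a beta-contraction, the iteration converges to the unique value of the discounted game from any starting point.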

Backgammon

The Expectiminimax Algorithm
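The algorithm body on this slide was lost in extraction. A generic sketch of expectiminimax over a game tree with chance nodes follows; the node encoding is an illustrative assumption, not from the slides:

```python
# Expectiminimax: minimax extended with chance nodes (e.g., dice rolls in Backgammon).
# Node encoding (an illustrative assumption): a numeric leaf utility,
# ("max", [children]), ("min", [children]), or ("chance", [(prob, child), ...]).

def expectiminimax(node):
    if isinstance(node, (int, float)):  # leaf: terminal utility
        return node
    kind, children = node
    if kind == "max":                   # maximizing player's turn
        return max(expectiminimax(c) for c in children)
    if kind == "min":                   # minimizing player's turn
        return min(expectiminimax(c) for c in children)
    if kind == "chance":                # dice roll: probability-weighted average
        return sum(p * expectiminimax(c) for p, c in children)
    raise ValueError(kind)

# Tiny example: MAX moves, then a fair coin decides, then MIN moves.
tree = ("max", [
    ("chance", [(0.5, ("min", [3, 5])), (0.5, ("min", [1, 9]))]),  # option A: 2.0
    ("chance", [(0.5, ("min", [4, 6])), (0.5, ("min", [2, 2]))]),  # option B: 3.0
])
print(expectiminimax(tree))  # -> 3.0
```

In Backgammon the chance nodes are the dice rolls, with one branch per roll weighted by its probability.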

In practice

Summary
- General definition
  - Procedure
  - Payoffs
  - ε-Nash equilibrium
- Stochastic Markov games
  - Reward functions, equilibria
  - Expectiminimax
  - Example: Backgammon
- Major references
  - T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, SIAM Classics in Applied Mathematics, Philadelphia, PA, USA, Jan. 1999.
  - Eitan Altman's work