Texas Holdem Poker With Q-Learning

First Round (pre-flop): Player, Opponent

Second Round (flop): Player, Community cards, Opponent

Third Round (turn): Player, Community cards, Opponent

Final Round (river): Player, Community cards, Opponent

End (We Win): Player, Community cards, Opponent

End Round (Note how initially low hands can win later when more community cards are added): Player, Community cards, Opponent

The Problem • The state space is too big • Over 2 million states would be needed just to represent the possible combinations for a single 5-card poker hand
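A quick sanity check of that figure (a minimal sketch, assuming a standard 52-card deck with no abstraction): the number of distinct 5-card hands is C(52, 5).

```python
import math

# Number of distinct 5-card hands that can be drawn from a standard 52-card deck.
print(math.comb(52, 5))  # 2598960 -- "over 2 million" states for a single hand
```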

Our Solution • It doesn’t matter what your exact cards are, only how they relate to your opponent's possible hands. • The most important piece of information the agent needs is how many possible two-card combinations could make a better hand

Our State Representation [Round] [Opponent’s Last Bet] [# Possible Better Hands] [Best Obtainable Hand] • Example: [4] [3] [10] [3]
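As a rough illustration of how such a four-field state could be encoded, here is a minimal sketch in Python. The field names and integer encodings below are our own illustrative assumptions, not taken from the original implementation.

```python
from typing import NamedTuple

class PokerState(NamedTuple):
    round: int                 # betting round index, pre-flop through river (exact numbering assumed)
    opponent_last_bet: int     # encoded last bet, e.g. 0 = not yet, 1 = check, 2 = call, 3 = raise (assumed)
    num_better_hands: int      # how many two-card holdings would currently beat us
    best_obtainable_hand: int  # coarse rank of the best hand we can still make (assumed encoding)

# The example state from the slide: [4] [3] [10] [3]
state = PokerState(round=4, opponent_last_bet=3, num_better_hands=10, best_obtainable_hand=3)
```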

To Calculate the # Better Hands • Evaluate the player's hole cards together with the community cards • Then evaluate every other possible two-card holding with the same community cards • Count the number of better hands
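A minimal sketch of that counting procedure, assuming a hand-strength function `evaluate()` (higher is better) that the slides do not name:

```python
from itertools import combinations

def count_better_hands(hole_cards, community, deck, evaluate):
    """Count how many unseen two-card holdings would beat our current hand."""
    our_strength = evaluate(list(hole_cards) + list(community))
    unseen = [c for c in deck if c not in hole_cards and c not in community]

    better = 0
    for opp_holding in combinations(unseen, 2):   # every possible opponent hand
        if evaluate(list(opp_holding) + list(community)) > our_strength:
            better += 1
    return better
```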

Q-lambda Implementation (I) • The current state of the game is stored in a variable • Each time the community cards are updated, or the opponent places a bet, we update our current state. • For all states, the Q-values of each betting action are stored in an array. Example: Some State → Fold = -0.9954, Check = 2.014, Call = 1.745, Raise = -3.457
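One plausible way to hold that per-state array of Q-values is a dictionary keyed by the state tuple. This is only a sketch; the original storage layout is not specified beyond "an array per state", and the zero-initialisation is assumed.

```python
from collections import defaultdict

ACTIONS = ["fold", "check", "call", "raise"]

# One row of Q-values per state, one entry per betting action.
q_table = defaultdict(lambda: [0.0] * len(ACTIONS))

def greedy_action(state):
    """Pick the betting action with the highest Q-value for this state."""
    values = q_table[state]
    return ACTIONS[values.index(max(values))]
```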

Q-lambda Implementation (II) • Eligibility Trace: we keep a vector of the state-actions which are responsible for us being in the current state: (in state s1, did action a1) → (in state s2, did action a2) → now we are in the current state. • Each time we make a betting decision, we add the current state and the action we chose to the eligibility trace.
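In code, the trace described above can simply be a list of (state, action) pairs that grows with every betting decision (a sketch):

```python
# The eligibility trace: every (state, action) pair chosen during the current game.
eligibility_trace = []

def record_decision(state, action):
    """Called each time we make a betting decision."""
    eligibility_trace.append((state, action))
```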

Q-lambda Implementation (III) • At the end of each game, we use the money won/lost to reward/punish the state-actions in the eligibility trace: (in state s1, did action a1) → (in state s2, did action a2) → (in state s3, did action a3) → got reward R, so update Q[sn, an] for every pair in the trace.
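The slides give the idea but not the exact update rule, so the following is only a sketch of one consistent interpretation: the money won or lost is the reward R, the most recent decision gets full credit, and earlier decisions in the trace get credit decayed by LAMBDA per step (ACTIONS and q_table as in the earlier sketches).

```python
ALPHA = 0.1   # learning rate; the slides experiment with different values
LAMBDA = 0.9  # trace-decay factor; likewise tuned experimentally

def update_from_reward(q_table, eligibility_trace, reward):
    """Apply the end-of-game reward R to every (state, action) in the trace."""
    credit = 1.0
    for state, action in reversed(eligibility_trace):
        a = ACTIONS.index(action)
        # Move Q[s, a] toward the reward, weighted by the decayed credit.
        q_table[state][a] += ALPHA * credit * (reward - q_table[state][a])
        credit *= LAMBDA
    eligibility_trace.clear()  # start a fresh trace for the next game
```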

Testing Our Q-lambda Player ✓ Play Against the Random Player ✓ Play Against the Bluffer ✓ Play Against Itself

Play Against the Random Player • Q-lambda learns very fast how to beat the random player. Why does it learn so fast?

Play Against the Random Player (II) Same graph, with up to 9000 games

Play Against the Bluffer • The bluffer always raises, unless raising is not possible, in which case it calls. • It is not trivial to defeat the bluffer, because you need to fold on weaker hands and keep raising on better hands • Our Q-lambda player does very poorly against the bluffer!
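The bluffer's policy is simple enough to write down in one function. This is a sketch; the `can_raise` flag (e.g. a raise cap being hit) is our assumption about how "raising is not possible" would be detected.

```python
def bluffer_action(can_raise: bool) -> str:
    """The bluffer opponent: always raise, and call only when raising is not allowed."""
    return "raise" if can_raise else "call"
```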

Play Against the Bluffer (II) In our many trials with different alpha and lambda values, the Q-lambda player always lost, with its losses growing at a linear rate

Play Against the Bluffer (III) • Why is Q-lambda losing to the bluffer? • To answer this, we looked at the Q-value tables • With good hands, Q-lambda has learned to Raise or Call (Q-values shown from Round = 3, Opponent's Bet = Raise)

Play Against the Bluffer (IV) • The problem is that even with a very poor hand in the second round, it still does not learn to fold and continues to either raise, call, or check. • The same problem exists with poor hands in other rounds (Q-values shown from Round = 1, Opponent's Bet = Not_Yet, Best Hand Possible = 'ok')

Play Against Itself • We played the Q-lambda player against itself, hoping that it would eventually converge on some strategy.

Play Against Itself (II) • We also graphed the Q-values of a few particular states over time, to see if they converge to meaningful values. • The results were mixed: for some states the Q-values converge completely, while for others they remain almost random

Play Against Itself (III) • With a good hand in the last round, the Q-values have converged: calling, and after that raising, are good, while folding is very bad

Play Against Itself (IV) • With a medium hand in the last round, the Q-values do not clearly converge. Folding still looks very bad, but there is no preference between calling and raising.

Play Against Itself (V) • With a very bad set of hands in the last round, the Q-values do not converge at all. This is clearly wrong, since under an optimal policy folding would have a higher value.

Why Do the Q-values Not Converge? • Poker cannot be represented with our state representation (our states are too broad or are missing some critical aspects of the game) • The ALPHA and LAMBDA factors are incorrect • We have not run the game for a long enough time

Conclusion • Our state representation and Q-lambda implementation are able to learn some aspects of poker (for instance, the player learns to Raise or Call when it has a good hand in the last round) • However, in our tests it does not converge to an optimal policy. • More experimentation with the alpha and lambda parameters, and with the state representation, may result in better convergence.