The one most people are aware of is known as 'Frequentist Probability'. This views probability from the standpoint of a very large number of repeated experiments or trials, each producing its own independent result (an event). For example, dealing out an enormous number of Hold'em starting hands, frequentist probability tells us we will be dealt AA (on average) once every 221 hands.
The other most common framework is known as 'Bayesian Probability', named after Thomas Bayes, an 18th Century mathematician.
This takes a rather different approach, and it captures mathematically what poker players try to do intuitively: it starts from a 'prior probability' and adjusts it in light of subsequent observations. Players often do a rather bad job of this; a typical example is "Opponent raised, so he might have AK. The flop is A72 rainbow and he bet, so it's now more likely he has AK (or at least an Ace)?"
I will state the formula first and give a brief explanation, then show an example and confirm it with a separate method.
Note: P(X) means the Probability of X.
Bayes' Theorem:
Code: Select all
           P(E|H) x P(H)
P(H|E) =  ---------------
               P(E)
Where H is some hypothesis we have made (and wish to refine) and E is an observed Event which may affect H.
P(H) is the 'prior probability' of the hypothesis.
P(E) is the probability of the Event.
Now of course, P(A|B) needs explanation:
This is called a 'Conditional Probability', and P(A|B) reads as 'the probability of event A given that event B occurred'.
It is used for events that are dependent (as poker events are, unlike coin flips). Note that the conditioning runs 'right to left': the event to the right of the bar (B) is taken as given, and we ask how likely A is in that light; loosely, 'how strongly does observing B point to A?'.
See here for a more in-depth look at conditional probability.
So we can read Bayes' Theorem like this:
We formulate some hypothesis, H, and assign it some initial (prior) probability P(H).
We then observe some event E, which has probability P(E) of occurring.
What is the revised probability of H, call it P'(H), in light of E occurring?
This is the opposite direction of inference from the (naive) poker example above.
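Read this way, the update is a one-line computation. A minimal sketch in Python (the figures are placeholders, chosen to be internally consistent):

```python
def bayes_update(p_h, p_e, p_e_given_h):
    """Bayes' Theorem: return P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# A 50% prior, updated on an event that is more likely under the
# hypothesis (0.30) than overall (0.25), rises to 60%
print(round(bayes_update(0.5, 0.25, 0.3), 2))  # 0.6
```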
Something else essential to the process of Hand Reading is thinking in terms of combinations of cards, i.e. how many different AK hands are there? How many TT hands are there? And so on.
Here I introduce 2 simple formulae from Combinatorics, that make working with card combinations much easier:
1) Unpaired hands: Combinations = #Card1 x #Card2
Example: How many different combinations of KT are there? Well there are 4 of each card -> 4 x 4 = 16.
We can simply add these to include more hand types, for example KJ OR KT = 32 combinations
Say we want to know how many combinations of AT+ there are (that is, Ax where x is T or better, excluding AA).
There are 4 ranks that qualify for x (T, J, Q, K), and we know there are 16 combos of each,
so 16 x 4 = 64 combos of AT+.
For any unpaired hand, these 16 combinations consist of 12 offsuit combinations and 4 suited ones.
2) Paired Hands: Combinations = #Cards x (#Cards - 1) / 2
Example: How many different combinations of TT are there? 4 Tens gives 4 x 3 / 2 = 6
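These two counting rules are trivial to encode; a minimal sketch:

```python
def unpaired_combos(n1, n2):
    """Combos of an unpaired hand: one of n1 first-rank cards with one of n2 second-rank cards."""
    return n1 * n2

def paired_combos(n):
    """Combos of a pocket pair from n cards of the same rank: n choose 2."""
    return n * (n - 1) // 2

# With a full deck there are 4 cards of each rank
print(unpaired_combos(4, 4))  # KT: 16
print(paired_combos(4))       # TT: 6
```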
Of course, both of these formulae still hold when we account for card removal effects.
Example: Preflop we hold an Ace -> leaving 3 in the deck.
Now how many combinations of AK are left? -> 3 Aces x 4 Kings = 12 (9 offsuit and 3 suited)
Now how many combinations of AA are left? 3 x (3 - 1) / 2 = 3.
From this we can immediately see the impact of holding an Ace ourselves: only 3/4 of the AK (or AQ, etc.) combos remain, and only 1/2 of the AA combos.
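We can confirm these card-removal counts by brute-force enumeration; a sketch (the suit chosen for our Ace is an arbitrary assumption and does not affect the counts):

```python
from itertools import combinations

ranks = "23456789TJQKA"
suits = "cdhs"
deck = [r + s for r in ranks for s in suits]
deck.remove("As")  # we hold one Ace; its suit doesn't matter for these counts

# Opponent combos of AK and AA among the remaining cards
ak = sum(1 for a, b in combinations(deck, 2) if {a[0], b[0]} == {"A", "K"})
aa = sum(1 for a, b in combinations(deck, 2) if a[0] == b[0] == "A")
print(ak, aa)  # 12 3
```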
So how does Bayes' Theorem apply to Poker?
Consider this example:
I hypothesise that my opponent has an Ace (I do not hold one); let's call this H and assign it probability P(H).
Now we see the flop, which is A84. What is the (new) probability P'(H) in light of this?
I will work through a general version of this example to show that it works, then a more specific one to give some idea of how powerful this technique can be.
So preflop, given we don't hold an Ace, the (frequentist) probability that our opponent holds 1 Ace is 0.150 (15%)
We can use the combinatorics formula above to help calculate this:
There are 16 combinations of AK, likewise AQ, and so on; there are 12 non-Ace ranks to pair with the Ace, so the total number of hands containing exactly one Ace is 16 x 12 = 192.
Removing one known card from the deck removes 4 of these combinations (e.g. if we hold a 7, there are 4 fewer combinations of A7 our opponent can possibly have). We hold 2 cards, so we subtract 2 x 4 = 8 from 192, leaving 184 combinations of Ax.
There are a total of 52 x 51 / 2 = 1326 combinations of Hold'em starting hands; however, as we hold 2 cards ourselves, this leaves 50 x 49 / 2 = 1225 possible 2-card combinations for our opponent, so the probability our opponent holds an Ace is 184 / 1225 = 0.150.
To get the probability of one or more Aces, we simply add the 6 AA combos to the numerator: (184 + 6) / 1225 = 0.155.
So we have P(H) = P(Opp has A) = 0.150
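The whole preflop count can be verified by enumeration as well; a sketch assuming (arbitrarily) that we hold 7c 2d:

```python
from itertools import combinations

ranks = "23456789TJQKA"
suits = "cdhs"
deck = [r + s for r in ranks for s in suits]
for card in ("7c", "2d"):  # our hypothetical non-Ace holding
    deck.remove(card)

hands = list(combinations(deck, 2))
one_ace = sum(1 for a, b in hands if (a[0] == "A") != (b[0] == "A"))
print(len(hands), one_ace, round(one_ace / len(hands), 3))  # 1225 184 0.15
```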
P(1 Ace Flops) = 0.211 (I won't take the space to show the working for this or the next figure)
And the probability that exactly one Ace flops, given that our opponent holds one: P(1 A Flops | Opp has A) = 0.172
Now we have all the figures necessary to use Bayes' Theorem to 'update' the probability that our opponent holds an Ace, given that we see one Ace on the flop.
Code: Select all
                           P(1 A Flops | Opp has A) x P(Opp has A)
P(Opp has A | 1 A flops) = ---------------------------------------
                                        P(1 Ace Flops)

                             0.172 x 0.150
                           = -------------
                                 0.211

                           = 0.122
So we have 'refined' the probability of our opponent holding an Ace from 15% preflop down to about 12.2% after we see an Ace on the flop.
OK, so let's check this using combinations and card removal effects alone (the frequentist approach):
After the flop there are 47 unseen cards, 3 of them Aces, so the number of opponent combos containing exactly one Ace is 3 x 44 = 132, out of 47 x 46 / 2 = 1081 possible hands: 132 / 1081 = 0.122 (12.2%).
This agrees with the answer we got from Bayes' Theorem (any tiny discrepancy is just rounding of the intermediate figures), so the direct count confirms the Bayesian approach.
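The figures above can be computed exactly with binomial coefficients rather than rounded by hand; a minimal sketch, assuming we hold two non-Ace cards:

```python
from math import comb

p_h = 184 / 1225  # P(opponent holds exactly one Ace), from the count above

# P(exactly one Ace on the flop): 50 unseen cards, 4 of them Aces
p_e = comb(4, 1) * comb(46, 2) / comb(50, 3)

# Same, but given the opponent holds one Ace: 48 unseen cards, 3 Aces
p_e_h = comb(3, 1) * comb(45, 2) / comb(48, 3)

posterior = p_e_h * p_h / p_e
print(round(p_e, 3), round(p_e_h, 3), round(posterior, 3))  # 0.211 0.172 0.122
```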
Now, let's apply this to a more common poker scenario, one where we can also factor in additional information.
This time, our opponent open-raises from UTG. Let's say we know (from our HUD) that he does this 10% of the time, so we assign him a top-10% range. (Somewhat naive, but often accurate, especially at lower stakes where 'balancing' is either unknown or rarely practiced.)
One common way of building this, taking the top 17 of the 169 starting-hand types, gives the range {88+, ATs+, KTs+, QJs, AQo+}, consisting of 98 hand combinations.
Now consider how many of these hands contain an Ace: the ATs+ and AQo+ portions of the range.
We have the full 16 combos each of AQ and AK = 32,
and 4 combos each of AJs and ATs = 8; hence 40 combinations contain an Ace, so 40 / 98 = 0.408 (40.8%).
So here P(Opp has A) = 0.408
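The range arithmetic is easy to mistype, so here is the same count spelled out; a sketch of the combination arithmetic for this (assumed) top-10% range:

```python
# Combination counts for the range {88+, ATs+, KTs+, QJs, AQo+}
pairs = 7 * 6             # 88, 99, TT, JJ, QQ, KK, AA: 6 combos each = 42
suited = (4 + 3 + 1) * 4  # ATs+ (4 types), KTs+ (3), QJs (1): 4 combos each = 32
offsuit = 2 * 12          # AQo, AKo: 12 combos each = 24
total = pairs + suited + offsuit

one_ace = 4 * 4 + 2 * 12  # ATs, AJs, AQs, AKs (16) plus AQo, AKo (24)
print(total, one_ace, round(one_ace / total, 3))  # 98 40 0.408
```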
So now after seeing the same flop as above, what is the probability our opponent holds an Ace or P'(H)?
The only value that has changed is P(Opp has A), which is now 0.408 instead of 0.150. Applying Bayes' Theorem we get:
Code: Select all
0.172 x 0.408
-------------
    0.211

= 0.33
So now we have refined the probability of our hypothesis from 40.8% preflop to about 33% on the flop, a reduction of almost 8 percentage points!
Looking back at our preflop thoughts, hands containing an Ace made up 40 of the 98 combos; they now make up about 33% of 98, which is roughly 32 combos, so the flop has effectively eliminated around 8 Ace hands from our opponent's range!
8 hands may not sound like a lot, but it is. And consider that we applied only ONE piece of information here (an Ace flopping); we can often apply further observations to refine this further...
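A quick sketch of this second update using unrounded figures (binomial-coefficient ratios, assuming as before that we hold two non-Ace cards):

```python
from math import comb

p_h = 40 / 98  # P(opp has an Ace), from the top-10% range count

p_e = comb(4, 1) * comb(46, 2) / comb(50, 3)    # P(exactly one Ace flops)
p_e_h = comb(3, 1) * comb(45, 2) / comb(48, 3)  # same, given opp holds one Ace

posterior = p_e_h * p_h / p_e
remaining = int(posterior * 98)  # Ace combos effectively remaining (truncated)
print(round(posterior, 2), remaining)  # 0.33 32
```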
Now compare this to the first 'reasoning' example above, where an Ace flopping (and a bet from our opponent) led to the idea that our opponent is now MORE likely to have an Ace. The Bayesian approach tells us exactly the opposite (and note we have not even included the bet as information in this analysis; say we know our opponent c-bets 50% of the time he doesn't hit). You might object that the frequentist approach tells us the same thing. Yes, BUT only if we set the calculation up correctly, and the problem is that the correct setup is not intuitive: we do not normally consider that events later in time can affect the probability of an earlier event.
Bayes' Theorem tells us they do.
This gives you a taste of how Bayesian Inference (using Bayes' Theorem to make inferences) works, but really the power lies in the implications of Bayes' Theorem itself (after all, we could have worked all this out using standard frequentist methods, if we do so correctly, however our intuition is often wrong in this regard).
This is not an easy thing to explain (or understand), so I will redirect you to an excellent resource for it: "An Intuitive Explanation of Bayes' Theorem", also billed as "an excruciatingly gentle introduction". Also see here, which describes conditional probabilities rather well, and visually.
I guarantee you will be amazed at how counter-intuitive some problems really are (and poker problems often fall into this category). If you aren't, then you are a rare person indeed!
But just quickly, looking at Bayes' Theorem itself, we can see a few implications of immediate benefit.
1) P'(H) is INVERSELY PROPORTIONAL to P(E).
- This means the difference between P'(H) and P(H) will be smaller the larger P(E) is, and vice versa.
- Likely observations (events) result in only small changes from P to P'; unlikely observations in large changes.
- To see this, imagine the flop above comes AAx instead (much less likely), and we can see intuitively that P'(H) will be even lower than the single-Ace result.
2) P'(H) is DIRECTLY PROPORTIONAL to P(H).
- This means the difference will be larger the larger P(H) is, and vice versa.
- Likely hypotheses result in larger changes from P to P' after the observed event.
- To see this, note that P'(H) = P(H) x [P(E|H) / P(E)]: the same multiplicative factor produces a larger absolute shift when the prior P(H) is larger. (The extreme case P(H) = 1 is not a fair test, since a certain hypothesis forces P(E) = P(E|H) and the posterior stays at 1.)
3) Also notice that 'The probability of our hypothesis, given our observation' is also directly proportional to 'The probability of our observation, given our hypothesis'.
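These proportionality claims are easy to sanity-check numerically; a small sketch with illustrative figures:

```python
def bayes_update(p_h, p_e, p_e_given_h):
    # P'(H) = P(E|H) * P(H) / P(E)
    return p_e_given_h * p_h / p_e

# 2) Direct proportionality: doubling the prior doubles the posterior
small_prior = bayes_update(0.15, 0.21, 0.17)
large_prior = bayes_update(0.30, 0.21, 0.17)
print(round(large_prior / small_prior, 6))  # 2.0

# 1) Inverse proportionality: with P(E|H) held fixed, a rarer event
# (smaller P(E)) leaves the posterior further from the 0.15 prior
likely_event = bayes_update(0.15, 0.21, 0.17)  # small shift from 0.15
rare_event = bayes_update(0.15, 0.10, 0.17)    # much larger shift
print(abs(rare_event - 0.15) > abs(likely_event - 0.15))  # True
```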
This has been a basic introduction to Bayes' Theorem, and a small taste of applying it to Poker.
I hope to post some more applications of it to poker later; in the meantime, think it over and discuss what's been presented so far (or ask for clarification on any points).
Addendum: Some links of possible interest:
http://en.wikipedia.org/wiki/Probabilit ... pretations
http://en.wikipedia.org/wiki/Bayes'_theorem
http://en.wikipedia.org/wiki/Bayesian_inference
http://en.wikipedia.org/wiki/Discrete_p ... stribution
http://www.bluefirepoker.com/thread.aspx?thrid=2599
http://www.cardplayer.com/cardplayer-ma ... rt-players
http://archives1.twoplustwo.com/showfla ... 37&fpart=1
http://www.ruffpoker.com/blog/poker-mat ... s-theorem/
http://www.google.com.au/webhp?sourceid ... 270ec8e787

