(Part I is here.)

There are further significant performance gains to be had by exploiting thoughtful choices for the suit values.

We assign the integer values 0, 1, 8 and 57 for spade, heart, diamond and club respectively.

Any sum of exactly seven values taken from {0, 1, 8, 57} is unique among all such sums. We add up the suits of a 7-card hand to produce a "flush check" key and use this to find a precalculated flush suit value (in the case we're looking at a flush) or otherwise a defined non-flush constant.
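This uniqueness claim is easy to verify by brute force; a minimal Python sketch (an illustration, not the released source):

```python
from itertools import combinations_with_replacement

# Suit values from the scheme: spade, heart, diamond, club.
SUIT_KEYS = [0, 1, 8, 57]

# Every multiset of 7 suit values must give a distinct sum, so the
# sum alone identifies the hand's suit distribution.
sums = {}
for hand in combinations_with_replacement(SUIT_KEYS, 7):
    s = sum(hand)
    assert sums.setdefault(s, hand) == hand, "collision"

print(len(sums), max(sums))  # 120 distinct sums; the largest is 399
```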

The extraordinarily lucky aspect of this is that the maximum non-flush key we have, 7825759, is a 23-bit integer (note 2^23 = 8388608) and the largest suit key we find, 57*7 = 399, is a 9-bit integer (note 2^9 = 512). If we bit-shift a card's non-flush face value and add to this its flush check to make a card key in advance, when we aggregate the resulting card keys over a given 7-card hand we generate a 23+9 = 32-bit integer key for the whole hand. This integer key can only just be accommodated on a 32-bit system and yet still carries enough information to decide if we're looking at a flush and, if not, to then look up the rank of the hand.
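As a sketch of how the per-card keys combine (the FACE_KEYS values below are placeholders for illustration, not the real published table):

```python
# Illustrative sketch of the combined card key. FACE_KEYS here is a
# placeholder; the real non-flush face keys come from the published
# tables. SUIT_KEYS are the values given above.
FACE_KEYS = list(range(13))        # placeholder non-flush face keys
SUIT_KEYS = [0, 1, 8, 57]          # spade, heart, diamond, club

# One key per card: face key shifted into the top bits, 9-bit suit
# ("flush check") key in the low bits. Suit sums never exceed
# 7 * 57 = 399 < 512, so the two fields never interfere.
CARD_KEYS = [(face << 9) + suit for face in FACE_KEYS for suit in SUIT_KEYS]

def hand_key(cards):
    """One summation over 7 card keys yields both partial keys."""
    total = sum(cards)
    flush_check = total & 0x1FF    # low 9 bits: suit sum
    rank_key = total >> 9          # remaining bits: face-key sum
    return flush_check, rank_key
```

With the real face keys the rank_key field needs 23 bits, so the whole hand key just fits into 32.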

We can therefore typically (about 97% of the time for a random hand) decide the rank of a 7-card hand with no more than six additions, as opposed to twelve (six for the flush check and another six for the rank key), extracting a performance gain in excess of 30%.

This optimisation features in Poker Ace 1.1 and later versions. It has also been implemented in the Objective-C, Java, C++ and Python versions of the open source code.

For what it's worth, with this optimization and some inlining I've seen the C++ version clock over 250M evaluations per second on my 2010 MacBook Pro.

This is some impressive work. Does your code handle 6-card hands as well?

Not yet, but you can use the existing key scheme with a new rank table. I didn't have an application for this. You can build one on the existing work.

Ok, so the key scheme you use for 7-card hands produces unique keys for 5- and 6-card hands as well?

Yes, exactly. It's overkill; you could compute a smaller scheme for 6 cards. Of course, if you start asking the 7-card scheme for uniqueness over 8 or more cards, it will fail.

Congratulations on the ideas and the work!

I'm totally new to poker, but I have read a bit over the last few weeks, and I find your approach original, efficient and general.

In particular the search for unique 'add' keys. But I am puzzled by what sort of brute force exactly you used to find the keys. At first sight, even if you know the highest key is ~1.5e6, that is still Binomial[1.5e6,13] ~ 3e70 candidate key sets, which is impossible to explore exhaustively. Could you please write more about that, and maybe post the corresponding code (or the most relevant chunks, in any language)?

Also, re the suit codes, you did not exactly choose the smallest sequence {0,1,29,37}. Not that it would allow you to get down to 8 bits for the suit hand key, because 7*37 = 259 > 256 = 2^8.

But as you optimized the face keys so hard, I am curious why you did not do the same for the suit keys?

Thx

Hello Oscar6Echo,

Many thanks for your interest and post. You make a good point: There is more than one sense of minimality. The keys I chose for the non-flush face values are "lexicographically" minimal and were computed inductively. That doesn't imply they are minimal in that they produce the least seven-card hand non-flush key among all schemes, but they're good enough for my needs.

I believe I included the code I used to generate the non-flush keys with the source code I released. Have a look at "GeneralUnits.cpp" in "Scripts". Having found the first n keys, it successively tries larger integers for the (n+1)th and rejects each if uniqueness fails. In this sense it's brute force.

You might be able to generate a better set of keys by working down from ~1.5e6 instead of working up from 0; it's well worth a try.
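In outline, the inductive search looks something like this (a simplified Python illustration, not the released GeneralUnits.cpp code; it grows only the first six keys to keep the runtime short):

```python
from itertools import combinations_with_replacement

def next_key(keys, hand_size=7, max_count=4):
    """Smallest integer that can follow `keys` while keeping every sum
    of `hand_size` face values (at most `max_count` of each, as in a
    real deck) unique. Greedy, hence 'lexicographically' minimal."""
    candidate = keys[-1] + 1 if keys else 0
    while True:
        trial = keys + [candidate]
        seen, ok = set(), True
        for combo in combinations_with_replacement(range(len(trial)), hand_size):
            if any(combo.count(i) > max_count for i in set(combo)):
                continue  # impossible hand: more than 4 cards of one face
            s = sum(trial[i] for i in combo)
            if s in seen:
                ok = False
                break
            seen.add(s)
        if ok:
            return candidate
        candidate += 1

keys = []
for _ in range(6):   # the full 13 keys take much longer this way
    keys.append(next_key(keys))
print(keys)          # the sequence begins 0, 1, 5, ...
```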

Cheers,

SpecialK

Hi Oscar6Echo....

If we allow collisions only between keys that produce the same output (flush suit), then the smallest sequence is {a=0, b=1, c=25, d=32}.

Values:

Flush A: 0 1 2 25 26 32 33 50 57 64

Flush B: 5 6 7 30 31 37 38 55 62 69

Flush C: 125 126 127 150 151 157 158 175 182 189

Flush D: 160 161 162 185 186 192 193 210 217 224

Collision found at key 100:

A=0 B=4 C=0 D=3 -> KEY=100

A=3 B=0 C=4 D=0 -> KEY=100

7*32 = 224. This fits in 8 bits.
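Abraham's claim, that every collision under {0, 1, 25, 32} joins hands with the same output, can be checked exhaustively; a small Python sketch, assuming a flush means five or more cards of one suit:

```python
from itertools import combinations_with_replacement

SUIT_KEYS = {'A': 0, 'B': 1, 'C': 25, 'D': 32}

def flush_suit(hand):
    """The suit holding 5+ of the 7 cards, or None if no flush."""
    for suit in 'ABCD':
        if hand.count(suit) >= 5:
            return suit
    return None

outputs_by_key = {}
for hand in combinations_with_replacement('ABCD', 7):
    key = sum(SUIT_KEYS[s] for s in hand)
    outputs_by_key.setdefault(key, set()).add(flush_suit(hand))

# Collisions exist (e.g. at key 100), but no key ever mixes outputs.
assert all(len(v) == 1 for v in outputs_by_key.values())
assert outputs_by_key[100] == {None}   # both colliding hands are non-flush
```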

--------

Regards

Abraham

Hi SpecialK,

Thx for the explanation.

Please find below some results. No breakthrough, but I feel I must contribute a bit too.

I will try some backward search, but I am not sure where to start from: much less than 1.5e6, if there is any significant progress to be made in terms of number of bits, I guess...

The other thing: I tried to fold the list of all sums generated by the first (lexicographically minimal) faceKeySeven a second time, in order to reduce the memory footprint further, and I found there was no solution (except useless solutions near the end). I was surprised, as the 'density' of this list is only ~0.5%. It must be some kind of probability paradox that I am still trying to clarify.

Out of curiosity, how did you come across this (simple and elegant, but unheard of to me before reading your post) idea of an additive key scheme?

Cheers

-------------------------------------------------------------

best (optimized)

suitKey = {0, 1, 29, 37};

2nd best (heuristic)

faceKeySeven = {0, 1, 6, 26, 111, 491, 2144, 8756, 29201, 92494, 253672, 642893, 1586207};

2nd best (heuristic)

faceKeyFive = {0, 1, 6, 26, 94, 335, 1069, 2576, 5961, 10552, 24667, 45198, 83623};

1st best (heuristic)

flushKeySeven = {1, 2, 4, 8, 16, 32, 64, 128, 240, 464, 896, 1728, 3328};

1st best (heuristic)

flushKeyFive = {0, 1, 2, 4, 8, 16, 32, 56, 104, 192, 352, 672, 1288};

Hello Oscar6Echo,

Many thanks for sharing your work here.

The paradox you refer to reminds me of the Birthday Problem: in a class of 50 people, what is the probability that two share a common birthday? (Answer: it's near certain.) It makes finding a workable key scheme at all a minor miracle.

In regard to your last question, I'd wanted to take the log of any multiplicative scheme for 7-card hands (otherwise the keys become huge). It's fairly clear that some sensible additive scheme will exist - you can assign powers of 10 to the non-flush face values and that will more than suffice. It's a short step from there to figure out how much uniqueness you actually need and to write a script that does the job.
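The powers-of-10 observation is quick to confirm; a throwaway Python check (not part of the released code):

```python
from itertools import combinations_with_replacement

# Powers of 10 as face keys: each decimal digit of a hand's sum is
# the count of one face. Counts never exceed 4 (< 10), so digits
# never carry, and every legal 7-card face multiset sums uniquely.
FACE_KEYS = [10 ** i for i in range(13)]

seen = {}
for hand in combinations_with_replacement(range(13), 7):
    if any(hand.count(f) > 4 for f in set(hand)):
        continue  # impossible: more than four cards of one face
    s = sum(FACE_KEYS[f] for f in hand)
    assert seen.setdefault(s, hand) == hand, "collision"
```

Of course the largest such key has 13 digits, which is exactly why the brute-forced keys topping out near 1.5e6 are so much more attractive.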

Actually, what surprised me most is the key scheme I outlined in Part II and the fact that 32 bits are just enough.

Cheers,

SpecialK

Hello SpecialK,

A remark I wanted to make some time ago, but I got carried away on other stuff...

Regarding the memory footprint of the vector of all non-flush ranks indexed by the sum of the keys (the faceKeySeven list, as I call it above) over each card in a 7-card hand: without considering the folding point, it is indeed relatively large (31.3MB), but it can be stored (sort of zipped) in much less space (an 852KB .csv file) as a sparse array, which it is (see below).

Naturally, the sparse array must be expanded in RAM for maximum access speed to the ranks. I also think that if such a sparse array is put in a MySQL database and the 'sum of keys' COLUMN is indexed, the hash mechanism will provide decently fast access to the ranks. It is probably a speed vs. memory trade-off to be considered.

But I would think that 30MB (~15MB with the folding point) of RAM is not much of a constraint nowadays, even on mobiles. Sure, moving around large files, especially wirelessly (I mean cellular networks much more than wifi), is still a bit difficult, but the sparse array storage structure seems a convenient and very simple solution.
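For what it's worth, the expand-to-RAM step is only a few lines; a hypothetical Python sketch (the CSV layout and the MAX_KEY value are assumptions based on the listing in this comment):

```python
import csv, io

MAX_KEY = 7825760  # largest key in the listing below (assumed)

def expand_rank_table(csv_text):
    """Expand sparse 'key,rank' lines into the flat array used for
    constant-time lookups; absent keys rank as 0 (no hand)."""
    ranks = [0] * (MAX_KEY + 1)
    for key, rank in csv.reader(io.StringIO(csv_text)):
        ranks[int(key)] = int(rank)
    return ranks

# In the real case csv_text would be read from the ~852KB file;
# here a two-row stand-in taken from the listing below:
table = expand_rank_table("4,7297\n5,7309\n")
```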

What do you think ?

Is there more to the memory footprint issue than I can see?

------------------------------

Beginning and end of faceKeySeven data structure stored:

{4} -> 7297

{5} -> 7309

{8} -> 7298

{9} -> 7153

{10} -> 7310

(...)

{7189868} -> 7452

{7191446} -> 7452

{7198113} -> 7452

{7212269} -> 7452

{7273076} -> 7452

{7451764} -> 7452

{7825760} -> 7452

{anything else between 1 and 7825760 included} -> 0

Hello,

Is there a way to use your code in my iOS app? The GPL won't work, because it is impossible to share source code through iTunes. I'm not trying to make any money out of this, and this is a school project of mine.

attem

Hello Attem,

I'm sorry to disappoint you, but it seems you've answered your own question.

SpecialK

Hi,

How does the card evaluator work in Part II? In Part I, we had

0 = Ace of Spades

1 = Ace of Hearts

2 = Ace of Diamonds

3 = Ace of Clubs

4 = King of Spades

5 = King of Hearts

6 = King of Diamonds

7 = King of Clubs

and so forth. But in Part II I guess it is different, due to the 0, 1, 8, 57 sequence.

Thanks

Thanks for your post and interest. You guess wrong - the part of the scheme that concerns you here is unchanged.

Thanks for your quick answer, amazing job!
