Superhuman AI for Multiplayer Poker

== Presented by ==

Hansa Halim, Sanjana Rajendra Naik, Samka Marfua, Shawrupa Proshasty

== Introduction ==

For many years, superhuman AI systems could beat human players only in two-player zero-sum games, such as checkers, chess, Go, two-player limit poker, and two-player no-limit poker. The most common approach in these games is to approximate a Nash equilibrium: a profile of strategies, one per player, in which no player can improve their expected payoff by unilaterally changing their own strategy. A Nash equilibrium has been proven to exist in every finite game; the challenge is to compute it. In a two-player zero-sum game, a Nash equilibrium strategy is unbeatable in the sense that it guarantees the player will not lose in expectation, regardless of what the opponent does.
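In symbols, a strategy profile <math>(\sigma_1^*, \sigma_2^*)</math> is a Nash equilibrium if
<math>u_1(\sigma_1^*, \sigma_2^*) \ge u_1(\sigma_1, \sigma_2^*)</math> and <math>u_2(\sigma_1^*, \sigma_2^*) \ge u_2(\sigma_1^*, \sigma_2)</math> for all alternative strategies <math>\sigma_1, \sigma_2</math>, where <math>u_2 = -u_1</math> in the zero-sum case.

To make the equilibrium-finding idea concrete, the short sketch below (an illustrative example only, not the algorithm used in the paper; the payoff matrix, function names, and iteration count are assumptions made for this illustration) runs regret matching in self-play on rock-paper-scissors, a toy two-player zero-sum game. The time-averaged strategies of both players converge to the game's unique Nash equilibrium, the uniform mix (1/3, 1/3, 1/3).

<syntaxhighlight lang="python">
import numpy as np

# Rock-paper-scissors payoffs for the row player (a zero-sum game):
# rows / columns = (rock, paper, scissors).
PAYOFF = np.array([[ 0., -1.,  1.],
                   [ 1.,  0., -1.],
                   [-1.,  1.,  0.]])

def strategy_from_regrets(regrets):
    """Mix actions in proportion to positive regret (uniform if none is positive)."""
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(regrets.size, 1.0 / regrets.size)

def regret_matching_selfplay(payoff, iterations=20000, seed=0):
    """Both players run regret matching against each other; in a two-player
    zero-sum game the *average* strategies converge to a Nash equilibrium."""
    n = payoff.shape[0]
    rng = np.random.default_rng(seed)
    regrets = rng.random((2, n))        # arbitrary start, away from equilibrium
    strategy_sums = np.zeros((2, n))
    for _ in range(iterations):
        s_row = strategy_from_regrets(regrets[0])
        s_col = strategy_from_regrets(regrets[1])
        strategy_sums[0] += s_row
        strategy_sums[1] += s_col
        # Regret of each pure action = its expected payoff against the
        # opponent's current mix, minus the payoff of the current mix itself.
        row_values = payoff @ s_col            # row player's action values
        col_values = -(s_row @ payoff)         # column player's action values
        regrets[0] += row_values - s_row @ row_values
        regrets[1] += col_values - s_col @ col_values
    return strategy_sums / strategy_sums.sum(axis=1, keepdims=True)

print(regret_matching_selfplay(PAYOFF))   # both rows approach (1/3, 1/3, 1/3)
</syntaxhighlight>

Because each player's average regret vanishes over time, the averaged strategies approach an equilibrium; this convergence of average strategies under no-regret self-play is the basic property that many large-scale equilibrium-finding algorithms for poker build on.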

== Related Work ==

Lorem Ipsum Bla bla bla

== Layer for Processing Missing Data ==

Lorem Ipsum Bla bla bla

== Theoretical Analysis ==

Lorem Ipsum Bla bla bla

== Experimental Results ==

Lorem Ipsum Bla bla bla

== Discussion ==

Lorem Ipsum Bla bla bla

== Conclusion ==

Lorem Ipsum Bla bla bla

== Critiques ==

Lorem Ipsum Bla bla bla

== References ==

[1] Lorem Ipsum Bla bla bla

[2] Lorem Ipsum Bla bla bla