Robots like Pluribus are now beating people at poker

Can a robot win a game of poker like this?

Facebook plays its part in creating an AI bot capable of beating millionaire poker players at their own game.

I’ve played poker once, amongst people who weren’t much better than me. I prefer card games like solitaire and switch. But for professional card players, games like poker are their lifeblood. And now a new threat to their livelihood is at hand.

Oliver Roeder wrote an article on robots and their poker-playing abilities for FiveThirtyEight. His story started in the Mojave Desert with hundreds of strangers. They were all there for around 14 hours, and one by one, people left until only one remained who’d walk away a millionaire. But what were they doing?

We were playing poker. And unbeknownst to me at the time, a pair of Intel processors on the other side of the country had recently undergone a similar ordeal. At the crescendo of the World Series of Poker in Las Vegas, a pair of computer scientists have announced that they’ve created an artificial intelligence poker player that is stronger than a full table of top human professionals at the most popular form of the game — no-limit Texas Hold ’em.

Oliver Roeder

Robots have a long history in games like poker.

  • Rain Man 2.0, named after the Tom Cruise/Dustin Hoffman movie, counts cards using a webcam, “real-time object detection”, and a whole lot of code.
  • Swiss firm ABB built a robot capable of playing chess as well as making sushi and pouring beer.
  • Renowned research unit DeepMind has built computers to beat humans at chess, shogi, Go, and StarCraft II.

In the case of the fallen poker players in the Mojave Desert, a team involving AI researchers at Carnegie Mellon and Facebook created an AI bot capable of beating some of the top poker players in the world. It’s called Pluribus, and it learnt everything through machine learning, as a robot does. Then it tested its mettle (ahem) against five humans in a game of no-limit Texas Hold ’em.

According to Singularity Hub:

One of the most significant breakthroughs is the computational efficiency of the new approach. Learning the blueprint took 8 days on a 64-core server, which works out to 12,400 CPU core hours. In contrast, their previous Libratus system took 15 million core hours to train.
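Those figures are easy to sanity-check. A quick back-of-the-envelope sketch (the 8-day, 64-core, and 15-million-core-hour numbers come from the quote above; everything else is simple arithmetic):

```python
# Sanity-check the quoted training-cost figures.
DAYS = 8
CORES = 64
pluribus_core_hours = DAYS * 24 * CORES   # 8 days on a 64-core server
libratus_core_hours = 15_000_000          # reported for the earlier Libratus system

print(pluribus_core_hours)                         # → 12288, close to the ~12,400 quoted
print(libratus_core_hours / pluribus_core_hours)   # → roughly 1220
```

So by this rough count, Libratus needed over a thousand times more compute than Pluribus did.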

That’s a significant improvement in efficiency and a huge reduction in resources. A paper by Noam Brown and Tuomas Sandholm details the bot’s exploits. The final sentence is impressive and potentially worrying.

Pluribus’s success shows that despite the lack of known strong theoretical guarantees on performance in multiplayer games, there are large-scale, complex multiplayer imperfect-information settings in which a carefully constructed self-play-with-search algorithm can produce superhuman strategies.

Robots and superhuman strategies don’t mix if you watch enough movies (Terminator springs to mind). But there are plenty of positive opportunities in robotics and AI. Let’s hope Pluribus’s prowess helps promote them.
