Machine, Platform, Crowd: Harnessing Our Digital Future

Author: Erik Brynjolfsson and Andrew McAfee

Publisher: W. W. Norton & Company


Publish Date: June 27, 2017

ISBN-10: 0393254291

Pages: 416

Language: English

Book Preface

Computers on the Go (Board)

Learning to play Go well has always been difficult for humans, but programming computers to play it well has seemed nearly impossible.

Go is a pure strategy game—no luck involved*—developed at least 2,500 years ago in China. One player uses white stones; the other, black. They take turns placing stones on the intersections of a 19×19 grid. If a stone or group of stones has all of its liberties removed—if it’s completely surrounded by opposing stones, essentially—it’s “captured” and taken off the board. At the end of the game,† the player with more captured territory wins.

People who love strategy love Go. Confucius advised that “gentlemen should not waste their time on trivial games—they should study Go.” In many quarters, it’s held in higher regard even than chess, another difficult two-person, luck-free strategy game. As the chess master Edward Lasker said, “While the Baroque rules of chess could only have been created by humans, the rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go.”

The game’s apparent simplicity belies a complexity that’s difficult to even conceptualize. Because of the large board and the great freedom that players have in placing their stones, it is estimated that there are about 2 × 10^170 (that is, 2 followed by 170 zeros) possible positions on a standard Go board. How big is this number? It’s larger than the number of atoms in the observable universe. In fact, that’s a completely inadequate benchmark. The observable universe contains about 10^82 atoms. So, if every atom in the universe were itself an entire universe full of atoms, there would still be more possible Go positions than atoms.
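
That comparison is easy to verify with exact integer arithmetic, using the 2 × 10^82 atom count and 2 × 10^170 position estimate quoted above (a quick sanity check, not part of the original text):

```python
# Compare legal Go positions with atoms in the observable universe.
positions = 2 * 10**170      # estimated legal positions on a 19x19 board
atoms = 10**82               # rough atom count of the observable universe

# Replace every atom with an entire universe full of atoms:
atoms_squared = atoms ** 2   # 10**164

print(positions > atoms_squared)    # True
print(positions // atoms_squared)   # 2000000: still about 2 million times more
```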

The Game Nobody Can Explain

How do the top human Go players navigate this absurd complexity and make smart moves? Nobody knows—not even the players themselves.
Go players learn a group of heuristics and tend to follow them.‡ Beyond these rules of thumb, however, top players are often at a loss to explain their own strategies. As Michael Redmond, one of the few Westerners to reach the game’s highest rank, explains, “I’ll see a move and be sure it’s the right one, but won’t be able to tell you exactly how I know. I just see it.”

It’s not that Go players are an unusually tongue-tied lot. It turns out the rest of us can’t access all of our own knowledge either. When we recognize a face or ride a bike, on reflection we can’t fully explain how or why we’re doing what we’re doing. It is hard to make such tacit knowledge explicit—a state of affairs beautifully summarized by the twentieth-century Hungarian-British polymath Michael Polanyi’s observation “We know more than we can tell.”

“Polanyi’s Paradox,” as it came to be called, presented serious obstacles to anyone attempting to build a Go-playing computer. How do you write a program that includes the best strategies for playing the game when no human can articulate these strategies? It’s possible to program at least some of the heuristics, but doing so won’t lead to a victory over good players, who are able to go beyond rules of thumb in a way that even they can’t explain.

Programmers often rely on simulations to help navigate complex environments like all the possible universes of Go games. They write programs that make a move that looks good, then explore all the opponent’s plausible responses to that move, all the plausible responses to each response, and so on. The move that’s eventually chosen is essentially the one that has the most good futures ahead of it, and the fewest bad ones. But because there are so many potential Go games—so many universes full of them—it’s not possible to simulate more than an unhelpfully tiny fraction of them, even with a hangar full of supercomputers.
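
The look-ahead procedure described here is, in essence, game-tree search (minimax). A minimal, runnable sketch on a toy take-away game (not Go, and not any production engine) shows the core idea of trying every move, then every reply:

```python
# Minimax look-ahead on a toy game: players alternately take 1-3 stones
# from a pile; whoever takes the last stone wins. The same "try every
# move, then every reply" expansion is what blows up on Go's 19x19 board.

def best_outcome(stones, maximizing=True):
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone, so the mover has lost.
        return -1 if maximizing else 1
    moves = range(1, min(3, stones) + 1)
    outcomes = [best_outcome(stones - m, not maximizing) for m in moves]
    # The maximizing player picks the best future; the opponent, the worst.
    return max(outcomes) if maximizing else min(outcomes)

print(best_outcome(4))   # -1: any move leaves the opponent a winning reply
print(best_outcome(5))   # +1: take one stone, leaving the losing pile of 4
```

With roughly 250 legal moves per Go position instead of 3, this exhaustive expansion reaches about 250^6 (roughly 2 × 10^14) positions after only six plies, which is why unguided simulation stalls.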

With critical knowledge unavailable and simulation ineffective, Go programmers made slow progress. Surveying the current state and likely trajectory of computer Go in a May 2014 article in Wired magazine, philosophy professor Alan Levinovitz concluded that “another ten years until a computer Go champion may prove too optimistic.” A December 2015 Wall Street Journal article by Chris Chabris, a professor of psychology and the newspaper’s game columnist, was titled “Why Go Still Foils the Computers.”

Past Polanyi’s Paradox

A scientific paper published the very next month—January 2016—unveiled a Go-playing computer that wasn’t being foiled anymore. A team at Google DeepMind, a London-based company specializing in machine learning (a branch of artificial intelligence we’ll discuss more in Chapter 3), published “Mastering the Game of Go with Deep Neural Networks and Tree Search,” and the prestigious journal Nature made it the cover story. The article described AlphaGo, a Go-playing application that had found a way around Polanyi’s Paradox.

The humans who built AlphaGo didn’t try to program it with superior Go strategies and heuristics. Instead, they created a system that could learn them on its own. It did this by studying lots of board positions in lots of games. AlphaGo was built to discern the subtle patterns present in large amounts of data, and to link actions (like playing a stone in a particular spot on the board) to outcomes (like winning a game of Go).§

The software was given access to 30 million board positions from an online repository of games and essentially told, “Use these to figure out how to win.” AlphaGo also played many games against itself, generating another 30 million positions, which it then analyzed. The system did conduct simulations during games, but only highly focused ones; it used the learning accumulated from studying millions of positions to simulate only those moves it thought most likely to lead to victory.
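
The payoff of focusing the simulations can be sketched with a back-of-the-envelope calculation. The branching factors below are illustrative assumptions, not AlphaGo’s actual numbers (the real system combined learned policy and value networks with Monte Carlo tree search):

```python
# Exhaustive vs. policy-focused look-ahead. If a learned model keeps only
# the k most promising moves at each position instead of all b legal ones,
# the number of simulated positions drops from b**depth to k**depth.

def positions_examined(branching_factor, depth):
    return branching_factor ** depth

full = positions_examined(250, 6)   # ~all legal Go moves, six plies deep
focused = positions_examined(5, 6)  # a policy keeps five candidates per step

print(full)             # 244140625000000
print(focused)          # 15625
print(full // focused)  # 15625000000: a more than ten-billion-fold reduction
```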

Work on AlphaGo began in 2014. By October of 2015, it was ready for a test. In secret, AlphaGo played a five-game match against Fan Hui, who was then the European Go champion. The machine won 5–0.

A computer Go victory at this level of competition was completely unanticipated and shook the artificial intelligence community. Virtually all analysts and commentators called AlphaGo’s achievement a breakthrough.

Debates did spring up, however, about its magnitude. As the neuroscientist Gary Marcus pointed out, “Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had ‘mastered’ the game.”

The DeepMind team evidently thought this was a fair point, because they challenged Lee Sedol to a five-game match to be played in Seoul, South Korea, in March of 2016. Lee was regarded by many as the best human Go player on the planet,¶ and one of the best in living memory. His style was described as “intuitive, unpredictable, creative, intensive, wild, complicated, deep, quick, chaotic”—characteristics that he felt would give him a definitive advantage over any computer. As he put it, “There is a beauty to the game of Go and I don’t think machines understand that beauty. . . . I believe human intuition is too advanced for AI to have caught up yet.” He predicted he would win at least four games out of five, saying, “Looking at the match in October, I think (AlphaGo’s) level doesn’t match mine.”

The games between Lee and AlphaGo attracted intense interest throughout Korea and other East Asian countries. AlphaGo won the first three games, clinching victory in the best-of-five match. Lee came back to win the fourth game. His victory gave some observers hope that human cleverness had discerned flaws in a digital opponent, ones that Lee could continue to exploit. If so, they were not big enough to make a difference in the next game. AlphaGo won again, completing a convincing 4–1 victory in the match.

Lee found the competition grueling, and after his defeat he said, “I kind of felt powerless. . . . I do have extensive experience in terms of playing the game of Go, but there was never a case as this as such that I felt this amount of pressure.”

Something new had passed Go.
