One Giant Step for a Chess-Playing Machine


When AlphaZero was first unveiled, some observers complained that Stockfish had been lobotomized by not giving it access to its book of memorized openings. This time around, even with its book, it got crushed again. And when AlphaZero handicapped itself by giving Stockfish ten times more time to think, it still destroyed the brute.

Tellingly, AlphaZero won by thinking smarter, not faster; it examined only 60,000 positions a second, compared with 60 million for Stockfish. It was wiser, knowing what to think about and what to ignore. By discovering the principles of chess on its own, AlphaZero developed a style of play that “reflects the truth” about the game rather than “the priorities and prejudices of programmers,” Mr. Kasparov wrote in a commentary accompanying the Science article.

The question now is whether machine learning can help humans discover similar truths about the things we really care about: the great unsolved problems of science and medicine, such as cancer and consciousness; the riddles of the immune system, the mysteries of the genome.

The early signs are encouraging. Last August, two articles in Nature Medicine explored how machine learning could be applied to medical diagnosis. In one, researchers at DeepMind teamed up with clinicians at Moorfields Eye Hospital in London to develop a deep-learning algorithm that could classify a wide range of retinal pathologies as accurately as human experts can. (Ophthalmology suffers from a severe shortage of experts who can interpret the millions of diagnostic eye scans performed each year; artificially intelligent assistants could help enormously.)

The other article concerned a machine-learning algorithm that decides whether a CT scan of an emergency-room patient shows signs of a stroke, an intracranial hemorrhage or another critical neurological event. For stroke victims, every minute matters; the longer treatment is delayed, the worse the outcome tends to be. (Neurologists have a grim saying: “Time is brain.”) The new algorithm flagged these and other critical events with an accuracy comparable to that of human experts — but it did so 150 times faster. A faster diagnostician could allow the most urgent cases to be triaged sooner, with review by a human radiologist.

What is frustrating about machine learning, however, is that the algorithms can’t articulate what they’re thinking. We don’t know why they work, so we don’t know if they can be trusted. AlphaZero gives every appearance of having discovered some important principles about chess, but it can’t share that understanding with us. Not yet, at least. As human beings, we want more than answers. We want insight. This is going to be a source of tension in our interactions with computers from now on.

In fact, in mathematics, it’s been happening for years already. Consider the longstanding math problem called the four-color map theorem. It states that, under certain reasonable constraints, any map of contiguous countries can always be colored with just four colors so that no two neighboring countries are the same color.
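To make the claim concrete, here is a small illustrative sketch (not from the article): a map’s neighboring countries can be modeled as a graph, and a four-coloring found by simple backtracking search. The country names and adjacency below are hypothetical examples.

```python
def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    """Backtracking search for a coloring in which no two
    neighboring countries share a color."""
    countries = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(countries):
            return True  # every country has been colored
        country = countries[i]
        for color in colors:
            # A color is allowed if no already-colored neighbor uses it.
            if all(assignment.get(n) != color for n in adjacency[country]):
                assignment[country] = color
                if backtrack(i + 1):
                    return True
                del assignment[country]  # undo and try the next color
        return False  # no color works; force backtracking

    return assignment if backtrack(0) else None


# A tiny hypothetical map: each country lists its neighbors.
adjacency = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}
coloring = four_color(adjacency)
```

The theorem guarantees that for any such map (a planar graph), this search will always succeed with four colors; the sketch merely finds a coloring for one small example, it does not prove the guarantee.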
