Geoffrey Hinton first became interested in AI as a high schooler in Britain in 1966, though, at the time, he didn’t know where it would lead him. He just knew he was fascinated with how the brain works.
The inciting incident came one day when a friend of his, whom Hinton describes as a “brilliant mathematician” who was “always better than me at everything,” came into school jazzed up about something he had recently taught himself.
“Did you know the brain uses holograms?” Hinton’s friend asked him.
Hinton’s response: “What’s a hologram?”
Hinton’s friend explained that a 3-D holographic image is created by recording the pattern of light beams bouncing off a given object, with that information spread across the entire recording medium rather than stored point by point—every fragment of the record holds information about the whole image.
He went on to tell Hinton about the work of the late Harvard psychologist Karl Lashley, who in the 1940s ran experiments searching for evidence of the engram—the group of neurons thought to store a memory in the brain. Lashley used a soldering iron to burn lesions into the cerebral cortices of rats, attempting to erase the engram. He then ran the rats through a maze they had already mastered, expecting that with their engrams destroyed they would no longer remember the route. But the rats remembered anyway.
Lashley theorized that if part of one area of the brain involved in memory is damaged, another part of the same area can take over that memory function. Or, as Hinton’s classmate put it, memories are distributed all over the brain—not just in one place. Each memory is actually located in lots of different brain cells.
This idea gripped Hinton. How does the brain store memories? he wondered. “I just kept thinking about how the brain might work from then on,” he says.
The Pursuit of a Passion
Geoffrey Hinton chewed over the idea for the rest of high school, and when it came time to apply to university he set out to answer his question by studying physiology and physics. At Cambridge he was the only undergraduate studying both fields. He didn’t find answers about how the brain works in either physiology or physics classes, though, so he changed his major to philosophy. He thought that field would give him more insight, but he quickly realized he was unlikely to find any answers there. Philosophy, he discovered, was “lacking in ways of distinguishing when they said something false.”
He switched his major again, this time to psychology. Despite graduating in 1970 with a Bachelor’s in experimental psychology, his studies in that field didn’t satisfy his curiosity either. “In psychology they had very simple theories, and it seemed to me hopelessly inadequate for explaining what the brain was doing,” Hinton says. After graduation, he threw his hands up, took some time off, and became a carpenter.
Then he decided he would try AI.
Hinton attended graduate school at the University of Edinburgh, one of the first universities in the world to work on AI and its applications. He was drawn there to study under Christopher Longuet-Higgins, who had co-founded the Department of Machine Intelligence at Edinburgh after a late-career turn toward the emerging field of AI. Longuet-Higgins had been working on neural networks. By the time Hinton arrived, however, he had just given up the cause.
The field of AI is split into two camps: Machine Learning and Symbolic AI. Symbolists believe that AI should be rule-based—that, in order for it to work, humans have to program it, they have to show it everything. The logic of Symbolic AI is described well in the initial proposal for a conference held at Dartmouth College in the summer of 1956, which is generally considered to be the birth of AI as a research discipline. The proposal describes “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Hinton saw a problem with that logic. Human brains weren’t programmed, he reasoned; we had to learn. Training machines to learn the way that humans do had to be the right way to go. And it had to be possible.
The problem was, hardly anybody in the field believed that. For his part, Longuet-Higgins had already researched neural nets and concluded that the approach wasn’t viable—that going down that road was a waste of time. He strongly encouraged Hinton to shift the direction of his graduate research.
Hinton met with Longuet-Higgins every week. They argued about the neural net approach constantly, their meetings sometimes ending in shouting matches. “Give me another six months and I’ll prove to you that it works,” Hinton would tell his advisor. Every six months, he would say that again.
Longuet-Higgins tolerated Hinton’s deep belief in neural networks long enough for Hinton to complete his PhD in Artificial Intelligence in 1978. When he graduated, however, he couldn’t find a job in Britain. Instead, he came across an ad for Sloan Fellowships in California, which provide research funding for promising early-career scientists. He applied, and scored one, which brought him to the University of California, San Diego.
Silly is Good
“I went to California, and everything was different there,” Hinton says. “In Britain, neural nets was regarded as kind of silly, and in California, Don Norman and David Rumelhart were very open to ideas about neural nets,” Hinton says of the scholars he worked with at UC San Diego during this time. “It was the first time I'd been somewhere where thinking about how the brain works and thinking about how that might relate to psychology was seen as a very positive thing. And it was a lot of fun.”
While at UC San Diego, Hinton and his collaborators produced a paper on backpropagation, the now well-known algorithm for training neural networks: the network performs a task repeatedly, and after each attempt the connection weights are adjusted to decrease the error. In 1986, they managed to get the paper accepted into the scholarly journal Nature. From there, the tide started to turn toward the neural net approach to AI.
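The core loop described above—run the network, measure the error, nudge the weights to shrink it—can be sketched in a few dozen lines. The toy example below is my own illustration, not the 1986 formulation: a tiny 2–2–1 network of sigmoid units learning XOR, the classic task that is impossible without a hidden layer and that backpropagation makes trainable.

```python
import math
import random

random.seed(0)  # deterministic run

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: output 1 exactly when the two inputs differ
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# weights: 2 inputs -> 2 hidden units -> 1 output (last entry of each row is a bias)
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # error signal at the output, scaled by the sigmoid's slope
        d_o = (o - t) * o * (1 - o)
        # propagate the error backward through the output weights
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
        # gradient-descent updates: output layer, then hidden layer
        for i in range(2):
            w_o[i] -= lr * d_o * h[i]
        w_o[2] -= lr * d_o
        for i in range(2):
            for j in range(2):
                w_h[i][j] -= lr * d_h[i] * x[j]
            w_h[i][2] -= lr * d_h[i]

err_after = total_error()
print(err_before, err_after)  # the error should shrink after training
```

The “backward” in backpropagation is the `d_h` line: the output unit’s error signal is sent back through the output weights to tell each hidden unit how much it contributed to the mistake—exactly the credit-assignment step that rule-based, hand-programmed systems never needed.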
In a 2019 interview with The Telegraph, Hinton was asked whether he had a message for young researchers. His response? “Don’t be put off if everyone tells you what you are doing is silly.” These days, Hinton mentors graduate students of his own, at the University of Toronto. He has a habit of shouting now, too, though the outbursts are of a different quality than they were during his student days. About once a week Hinton can be counted on to suddenly shout in a demonstration of flagrant optimism, “I understand how the brain works now!”