Introducing Yann LeCun

On his resume, AI scientist Yann LeCun lists the distinction "ACM Turing Award Laureate." Indeed, winning the Turing Award, often referred to as "the Nobel Prize of Tech," is definitely resume material. On LeCun’s resume, though, there’s a parenthetical caveat next to this achievement: "(Sounds like I’m bragging, but a condition of accepting the award is to write this next to your name.)"

Perhaps LeCun got so used to going unrecognized in his career that listing the Turing Award on his resume feels surreal. Or maybe he is just not the bragging type. One thing is for sure—if he were in it for fame and recognition, LeCun would have gotten out of the game a long time ago.

First inspired to study machine intelligence after seeing the movie 2001: A Space Odyssey as a boy, LeCun spent much of his childhood in the Paris suburbs tinkering with various electronic and mechanical projects. As a university undergraduate, LeCun took on independent research in machine learning, which he eventually carried through to his Ph.D. work.

Throughout graduate school, LeCun steadily kept at his work, which focused on Convolutional Networks (ConvNets), a type of neural network modeled on the visual cortex of the human brain and geared toward recognizing visual patterns in pixels. LeCun pushed forward with his ideas in the face of massive skepticism from the larger AI community, where the neural net approach to AI had fallen out of favor. At that time, the major academic journals refused to publish anything related to neural nets. “There was a dark period between the mid-90s and early-to-mid-2000s when it was impossible to publish research on neural nets, because the community had lost interest in it,” says LeCun. “In fact, it had a bad rep. It was a bit taboo.”

He had to take advantage of opportunities where he could find them, even if they were small or seemed inconsequential. One of those opportunities was at an obscure workshop in France in 1985 where LeCun, then still a graduate student, presented a paper he wrote describing a form of backpropagation. The paper “was in French and basically wasn’t read by many people—but at least by one important person,” LeCun recalls.

That person turned out to be Geoffrey Hinton, a cognitive psychologist and computer science professor at the University of Toronto who also believed in the neural net approach to AI. After LeCun’s presentation, Hinton offered LeCun a postdoctoral research associate position in his lab. From 1987 to 1988, the two worked side by side on their shared dream of Deep Learning—creating AI machines that could teach themselves, while the larger AI community ignored their work.

Don't Go It Alone

This time period in which LeCun and Hinton came together would later come to be known as the AI Winter. In AI: The Tumultuous History of the Search for Artificial Intelligence, author and AI researcher Daniel Crevier describes an AI Winter as a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. By the time the AI Winter of the ‘90s hit, Geoffrey Hinton already had years of experience fighting against the current. He had been working on Deep Learning since his days as a graduate student in the early ‘70s, when he often got into shouting matches with his advisor, who believed he was wasting his time.

While LeCun was a postdoc in Hinton’s lab, he gave a talk at McGill University in Montreal, where he was impressed by some really smart questions asked by a master’s student in the audience. LeCun made a mental note to keep an eye on this student, who seemed poised to do some really interesting things. This student’s name was Yoshua Bengio.

After a year of postdoctoral research with Hinton in Canada, LeCun took a job with AT&T’s Bell Labs in New Jersey, where he set out to design a neural network that could read handwritten letters and numbers. When Yoshua Bengio finished his Ph.D. at McGill, LeCun hired him at Bell Labs. For many years after that, Hinton, LeCun, and Bengio stayed in contact as collaborators and friends. Together, they helped one another maintain their shared belief that there was a way to teach machines to learn like humans do, with a computing system called a neural network. For that belief, they were mocked by their peers in the larger AI research community.

“People thought what the three of us were doing was nonsense,” Hinton later told the media, when he was being interviewed about the Turing Award—which the three men shared in a joint win in 2019. “They thought we were very misguided and what we were doing was a very surprising thing for apparently intelligent people to waste their time on."

How Humble Leadership Really Works

The future is coming on fast. Even faster now, because of COVID-19. Changes that we all thought would take at least 10 or 15 years are now happening overnight, such as huge movie theater chains and rental car agencies going bankrupt. For a future-thinking CEO, the most important quality I can recommend is humility.

Being humble is necessary in order to keep up with the rapid changes that are coming. Yes, you might be good at your craft at this point. You might even consider yourself an expert. But the truth is nobody stays an expert, because as soon as they become one, guess what? They become obsolete, because things change again. You can become an expert on iOS 7, but before you know it, it's iOS 12.

A CEO must be okay with looking stupid trying new things in order to successfully tackle all the unprecedented changes brought about by AI—just as LeCun, Hinton, and Bengio looked stupid working on "nonsense" in the eyes of the larger AI community while exploring the uncharted terrain of neural networks. For LeCun, it took years before there was enough data and enough computing power available for him to prove his hunches were correct. In the '90s, this just wasn't possible. Similarly, as a CEO, there are times when you will need to work and move forward with incomplete data. You have to be willing to try something, even if it doesn't work. And when it doesn't work, being humble means admitting fuck-ups when they happen, and not needing to maintain a facade of always knowing everything.

I had one CEO client I will always remember as a role model because at one point in our work together she was struggling to understand an aspect of the data audit we were doing on her company. She turned to me and said, "I'm not the smartest, but I'm willing to learn and I'm a fast learner." I was like, wow. A CEO just admitted that she is "not the smartest." That's a very fresh approach. A very humble approach.

Humble Plus Data

Being humble in business involves valuing the customer in a different way than has previously been the norm. It’s actually a monumental shift. Succeeding in the AI Age means becoming a customer-centric company, one that focuses on what the customer wants and how they are using your product or service, rather than starting from a product-development point of view, where the company comes up with the road map and the customer follows it. Going forward, it needs to be the other way around.

This means becoming data-driven, rather than operating in the old-school way of treating the CEO as a guru who calls all the shots. In the future, like it or not, it's the data that calls the shots.

For example, before data and AI, it might go like this: Merchandise, once manufactured, is put into a warehouse. The company does an annual audit. In that audit, maybe they'll find out that a particular new jacket didn't sell. So, they'll do a clearance sale on the jacket during the winter holidays. The whole process takes over a year.

However, with AI tools feeding on lots of clean data, an algorithm will detect much earlier than was possible before that the jacket isn't selling. What used to take the company a year to figure out—that “Hey, this jacket is never going to sell”—will now be discovered by AI tools within three months. So the manager on the floor, armed with that data, can make the call to put the jacket on sale and clear it out in real time.
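To make that concrete, here is a minimal sketch of the kind of check such a tool might run. Everything in it is a hypothetical assumption for illustration: the item names, the weekly sales figures, and the threshold. It is not a description of any particular vendor's product.

```python
# A minimal sketch (hypothetical data and thresholds) of flagging a
# slow-moving item from early sales data instead of waiting for an
# annual audit.
from statistics import mean

# Hypothetical weekly unit sales for two items over their first 12 weeks.
weekly_sales = {
    "JACKET-001": [3, 2, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0],
    "SCARF-042":  [40, 35, 42, 38, 41, 39, 44, 40, 37, 43, 45, 41],
}

def flag_slow_sellers(sales_by_item, min_avg_units_per_week=5.0):
    """Return items whose average weekly sales fall below the chosen threshold."""
    return [
        item for item, weekly in sales_by_item.items()
        if mean(weekly) < min_avg_units_per_week
    ]

# After roughly three months of data, the jacket is already flagged
# for a possible markdown, long before any year-end audit.
print(flag_slow_sellers(weekly_sales))  # ['JACKET-001']
```

The real systems involved are far more sophisticated, but the underlying idea is the same: the data surfaces the slow seller early, and a person decides what to do about it.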

What’s more, a unified digital database will allow the company’s departments and regional offices to become unsiloed and stay in good communication with each other. So, maybe the jacket was selling in the company’s Palm Springs, California, store, but in the Palm Beach, Florida, store it was doing nothing but collecting dust on the rack. In that case, the regional manager of the Palm Beach store, using the readily available data and with the help of AI tools, could make the decision to put the jacket on clearance for that particular store only.
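Here is an equally simplified sketch of that per-store view, again with invented store records and an invented threshold, just to show the shape of the decision once the data sits in one shared place:

```python
# A simplified, hypothetical sketch of evaluating the same item store by
# store once the sales data lives in one shared, unsiloed database.
from collections import defaultdict
from statistics import mean

# Invented weekly sales records: (store, item, units sold that week).
records = [
    ("Palm Springs", "JACKET-001", 12),
    ("Palm Springs", "JACKET-001", 15),
    ("Palm Springs", "JACKET-001", 11),
    ("Palm Beach",   "JACKET-001", 1),
    ("Palm Beach",   "JACKET-001", 0),
    ("Palm Beach",   "JACKET-001", 0),
]

def slow_sellers_by_store(rows, min_avg_units_per_week=5.0):
    """Group sales by (store, item) and flag pairs selling below the threshold."""
    grouped = defaultdict(list)
    for store, item, units in rows:
        grouped[(store, item)].append(units)
    return [
        (store, item) for (store, item), weekly in grouped.items()
        if mean(weekly) < min_avg_units_per_week
    ]

# Only the Palm Beach store gets flagged; Palm Springs keeps selling at full price.
print(slow_sellers_by_store(records))  # [('Palm Beach', 'JACKET-001')]
```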

This is another big shift in doing business that comes from embracing data and AI: each regional office can now make its own decisions based on data—the CEO doesn’t have to make all of these calls. Depending on the CEO, this development can be either a huge relief or one of the more terrifying prospects they have ever faced.

It can be very humbling to surrender your identity as the guru to the data, but it is absolutely necessary.