Google AI Giant Leaves Company, Fearing a Coming Dystopia

Geoffrey Hinton’s stance changed after the advent of generative AI. His fellow Turing Award winners have a different view

Deborah Yao, Editor, AI Business

May 3, 2023

5 Min Read

AI pioneer Geoffrey Hinton, when asked how he could work on a technology that was potentially dangerous, would respond by quoting Robert Oppenheimer, who led the U.S. initiative to build the atom bomb.

“When you see something that is technically sweet, you go ahead and do it.”

Hinton does not say that anymore. Instead, the 75-year-old ‘Godfather of Deep Learning’ quit Google this week, after more than a decade at the company, so that he can speak freely about the dangers of AI, according to an interview with The New York Times.

He said a part of him now regrets his life’s work on neural networks, which earned him a share of the 2018 Turing Award, considered the Nobel Prize of computing. Those he mentored include fellow Turing Award winner Yann LeCun, who did postdoctoral research in his lab, and OpenAI co-founder and chief scientist Ilya Sutskever, a former graduate student.

What changed?

Hinton cited the new generation of large language models, especially OpenAI’s recently released GPT-4, as what made him realize how smart machines can get. “Look at how it was five years ago and how it is now. Take the difference and propagate forwards. That’s scary.”

“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton told the Times. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Hinton had thought Google was a “proper steward” of AI, careful about any releases that might cause harm, until Microsoft began targeting its core search business by incorporating GPT-4 into Bing. It goaded Google into deploying AI faster in a contest that might be “impossible to stop,” he said.

Deepfakes to Flood the Internet?

Notably, Hinton did not sign the open letter from the Future of Life Institute, which called for a six-month pause on developing AI more powerful than GPT-4 until more guardrails are in place. He said he did not want to publicly criticize Google or other companies until he had resigned.

The letter has attracted over 27,500 signatures to date, including from Elon Musk, Apple co-founder Steve Wozniak and Yoshua Bengio, who shared the Turing Award with Hinton and LeCun. (LeCun did not sign the letter, tweeting that he disagreed with the premise.)

Hinton’s immediate worry is that the internet will be so flooded with deepfakes that the average person will “not be able to know what is true anymore.”

Further out, he sees a more fundamental risk in AI systems that generate and run their own code, acting autonomously in ways that could be detrimental to society.

As machines train themselves, they could exhibit unexpected and even harmful behavior. One fear, for example, is that a machine trained to maximize rewards could resist being switched off if it realizes it can collect more rewards by staying on, even if doing so harms humans.

While other AI experts consider this existential threat hypothetical, Hinton thinks the global AI race among Microsoft, Google and others will escalate unimpeded without regulation. And regulating AI is tricky: unlike nuclear weapons, AI development leaves no physical trace, so companies and nations could pursue it in secret.

That is why the best way forward is for the world’s best scientists to come up with ways to control AI, Hinton said.

Not Science Fiction

"These things are totally different from us," Hinton said in a separate interview with MIT Technology Review. "Sometimes I think it's as if aliens had landed and people haven't realized because they speak very good English."

Hinton said large language models (LLMs) are massive neural networks with vast numbers of connections, yet they are still tiny compared to the human brain, which has some 100 trillion connections; today’s largest LLMs have roughly a trillion. “Yet GPT-4 knows hundreds of times more than any one person does. So maybe it's actually got a much better learning algorithm than us,” he said.

For a long time, neural networks were thought to be slow learners compared to the human brain, which can pick up new ideas and skills quickly. That changed with large language models: through ‘few-shot learning,’ a pretrained LLM can pick up a new task from just a few examples, often supplied directly in the prompt rather than through retraining.

Compare an LLM’s learning speed on a new task with a human’s, and the human advantage disappears, Hinton said.
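To make the idea concrete, here is a minimal sketch of few-shot prompting in Python. The llm_complete() helper is a hypothetical stand-in for whatever model API one might use; the point is that the task examples live in the prompt, not in the model’s weights.

```python
# Minimal sketch of few-shot (in-context) prompting.
# `llm_complete` is a hypothetical stand-in for any chat-style model API.

def llm_complete(prompt: str) -> str:
    """Placeholder: send `prompt` to a model and return its text reply."""
    raise NotImplementedError("wire this to the model API of your choice")

# A handful of labeled examples sits directly in the prompt; no weights
# are updated -- the model infers the task from the examples alone.
FEW_SHOT_PROMPT = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "It broke after a week." -> Negative
Review: "Setup took five minutes and it just works." -> Positive

Review: "{review}" ->"""

def classify(review: str) -> str:
    return llm_complete(FEW_SHOT_PROMPT.format(review=review)).strip()

# Example usage once llm_complete is wired to a real model:
#   classify("The screen scratches far too easily.")  # expected: Negative
```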

What about LLMs’ tendency to hallucinate, generating fictions or errors in their answers? Is that not a weakness? Hinton said this confabulation is a feature, not a bug. “Confabulation is a signature of human memory. These models are doing something just like people.”

Hinton thinks the next step for these intelligent machines is the ability to create their own subgoals, the interim steps needed to carry out a larger task. Experimental projects such as AutoGPT and BabyAGI already link chatbots with other programs to string together simple tasks, and the capability could advance from there. In the wrong hands, it could be deadly.
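The basic loop those projects popularized can be sketched in a few lines. The code below is an illustrative outline only, not their actual implementation, and it reuses the hypothetical llm_complete() helper from the earlier sketch.

```python
# Illustrative outline of the subgoal-chaining pattern popularized by
# AutoGPT and BabyAGI -- not their actual code.

def llm_complete(prompt: str) -> str:
    """Placeholder: send `prompt` to a model and return its text reply."""
    raise NotImplementedError("wire this to the model API of your choice")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Ask the model to break a goal into subgoals, then work through them."""
    plan = llm_complete(f"List up to {max_steps} short subgoals, one per line, for: {goal}")
    # Naive parse: treat each non-empty line of the reply as one subgoal.
    subgoals = [line.strip("- ").strip() for line in plan.splitlines() if line.strip()]
    results = []
    for subgoal in subgoals[:max_steps]:
        # Each subgoal goes back to the model (or out to another program or tool);
        # its result could feed the next step, which is where autonomy creeps in.
        results.append(llm_complete(f"Carry out this step and report the result: {subgoal}"))
    return results
```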

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future,” Hinton said. “How do we survive that?”

LeCun does not share the same pessimistic view. Rather, the chief AI scientist at Meta told MIT Technology Review that intelligent machines will usher in “a new renaissance for humanity, a new era of enlightenment. I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

Even among humans, the smartest are not the ones most dominant, LeCun pointed out. "And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business."

Bengio, a computer science professor at Université de Montréal in Canada, has a more neutral view.

“I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff (Hinton) thinks about,” he told the magazine. But being overly concerned does not do much good. “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”

This article first appeared in IoT World Today's sister publication AI Business.

About the Author

Deborah Yao

Editor, AI Business

Deborah Yao is an award-winning journalist who has worked at The Associated Press, Amazon and the Wharton School. A graduate of Stanford University, she is a business and tech news veteran with particular expertise in finance. She loves writing stories at the intersection of AI and business.


