
AI pioneer Geoffrey Hinton says the world is heeding warnings about the technology

Geoffrey Hinton, known as the "Godfather of AI," leaves the stage after speaking at the Collision conference in Toronto on Wednesday, June 28, 2023. THE CANADIAN PRESS/Chris Young

TORONTO — Months after artificial intelligence luminaries began ringing alarm bells about the technology’s risks, one of the field's pioneers says he feels like people are listening.

"I'm optimistic that people understood that there's this whole bunch of problems," Geoffrey Hinton said at a talk AI financier Radical Ventures hosted at the MaRS Discovery District in Toronto on Wednesday.

"I am quite optimistic that people are listening."

For the bulk of the year, the British-Canadian computer scientist, who shared the 2018 A.M. Turing Award, known as the Nobel Prize of computing, with Yoshua Bengio and Yann LeCun, has been on a crusade to make the public more aware of AI’s dangers.

The so-called godfather of AI recently left his job at search engine giant Google so he could more freely discuss AI’s dangers, which he has listed as bias and discrimination, joblessness, echo chambers, fake news and battle robots.

Though some, including fellow AI pioneer Yann LeCun, have downplayed his warnings about existential risk, Hinton has not backed down.

He said Wednesday that he's convinced of the existential risk because human-made technology that is smarter than us will create subgoals in pursuit of efficiency.

"There's a very obvious subgoal, which is if you want to get anything done, get more power," he said.

"If you get more control, it's going to be easier to do things."

That's where the problems can start.

"If things much more intelligent than us want to get control, they will.  We won't be able to stop them," Hinton said.

"So we have to figure out how we stop them ever wanting to get control."

Hinton’s remarks came the same day that Aidan Gomez, chief executive of Toronto-based AI darling Cohere, published a blog post saying, “spending our time and resources stoking existential fear of AI has served as a distraction.”

“To those in the industry who earnestly believe that doomsday scenarios are the most serious risks that we face with AI, I welcome the difference of opinion, even as I respectfully disagree," Gomez said.

Rather than focus on existential risk, he said the globe should be rallying around three priorities: protecting sensitive data, mitigating bias and misinformation, and knowing when to keep humans in the loop for oversight. 

“These three areas are perhaps less extraordinary than the notion of a technology-enabled terminator taking over the world,” he said. 

“However, they are the most likely and immediate threats to our collective well-being.”

Fei-Fei Li, co-director of Stanford University’s Human-Centered AI Institute, who appeared in conversation with Hinton on Wednesday, said she grew "personally anxious" about the technology around 2018.

Conversations about privacy and surveillance were becoming the norm after Cambridge Analytica paid a Facebook app developer for access to the personal information of about 87 million users. That data was used to target U.S. voters during the 2016 presidential election that ended with Donald Trump in power.

It made Li realize "we've got so many catastrophic risks and we need to get on this."

She agreed with Hinton that the world is starting to listen to their concerns.

Last week, Canada's Innovation Minister François-Philippe Champagne revealed a voluntary code of conduct for generative AI at a Montreal tech conference.

Adopters of the code -- Cohere, software company OpenText Corp. and cybersecurity firm BlackBerry Inc. among others -- agreed to a slew of promises including screening datasets for potential biases and assessing any AI they create for “potential adverse impacts.” 

But Tobi Lütke, founder and CEO of e-commerce goliath Shopify Inc., labelled the code "another case of EFRAID," a play on "electronic" and "afraid."

"I won’t support it. We don’t need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say 'come build here,'" Lütke posted on X, the social media platform formerly known as Twitter.

After hearing Lütke's remarks, Champagne stressed that the code is voluntary.

"If he thinks that to promote his interests he doesn't need to sign the code, that's a decision for him to take. I respect that," Champagne said.

"On the other hand, there's a number of voices out there that are calling for framework to be able to operate. It is in Canada's best interest, the best interest of companies, to be able to say that they will adhere to some basic principles on a voluntary basis that will allow for responsible innovation."

The federal government tabled a bill in June 2022 taking a general approach to AI regulation, but left many of the details for a later date. It is expected to be implemented no earlier than 2025.

This report by The Canadian Press was first published Oct. 4, 2023.

Tara Deschamps, The Canadian Press
