Don't get lost in the 'distant sci-fi'

As people have learned more about the technology, many have let their minds run wild with possibilities, including the thought of AI developing superhuman intelligence.
Nick Frosst, co-founder of Cohere, at the AI company's offices in Toronto on Nov. 27. THE CANADIAN PRESS/Chris Young

TORONTO — In a world where artificial intelligence has helped professional soccer teams develop game strategies, created a fake viral tune reminiscent of Drake and The Weeknd and played brewmaster, Nick Frosst loves the simpler uses of the technology. 

The Cohere co-founder's eyes light up as he sits in his AI company's Toronto meeting and games room on a dreary day in late November discussing how AI has made it possible to extract information from resumés, cutting out the tedium of filling out job application questionnaires.

"The mundane use cases are the ones that I'm like, 'Hell yeah,'" said Frosst.

"That's real value. We've solved the problem for somebody."

Cohere develops AI for enterprise use, meaning it helps businesses build powerful applications by using large language models (LLMs) — algorithms that use massive data sets to recognize, translate, predict or generate text and other content.
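To make the "mundane" use case Frosst describes concrete, here is a minimal, hypothetical sketch of how a business might prompt a text-generating model to pull application-form fields out of a resumé. The `generate` function below is a stand-in for whatever LLM endpoint a company has wired in; it is not Cohere's actual API, just a placeholder so the example runs end to end.

```python
import json

def build_extraction_prompt(resume_text: str) -> str:
    """Ask the model to pull application-form fields out of a resumé."""
    return (
        "Extract the candidate's name, email, most recent job title and "
        "years of experience from the resumé below. Reply with JSON only.\n\n"
        "Resumé:\n" + resume_text
    )

# Placeholder for whatever text-generation endpoint a business uses;
# not a specific vendor API, just a stub so the sketch is runnable.
def generate(prompt: str) -> str:
    return ('{"name": "A. Candidate", "email": "a@example.com", '
            '"title": "Data Analyst", "years_experience": 4}')

def extract_resume_fields(resume_text: str) -> dict:
    """Send the prompt to the model and parse its JSON reply into form fields."""
    return json.loads(generate(build_extraction_prompt(resume_text)))

if __name__ == "__main__":
    print(extract_resume_fields("Jane Doe, jane@example.com, Data Analyst, 4 years ..."))
```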

But these days, the buzz around AI has moved leaps and bounds ahead of the kinds of commonplace tasks Frosst is excited about.

There's general agreement the technology will disrupt most, if not all, sectors. Some even herald it as an eventual game-changer in the fight against some of humanity's biggest challenges, such as cancer and climate change.

Other observers are sci-fi-like prognosticators, worrying AI will be so powerful it will trigger the demise of humanity.

Though much of AI's future is uncertain, Cohere is likely to be one of the companies at the centre of it — and Frosst doesn't take that responsibility lightly.

"It means staying grounded in what the technology can do today and what we think it will do in the future, but not getting lost in the distant sci-fi," he said.

Now boasting a valuation that surpassed $2.1 billion earlier this year, Cohere began as a startup in 2019 under Aidan Gomez, a Google Brain researcher, and Ivan Zhang, who had dropped out of the University of Toronto to work at a genomics company but often interloped at Google. 

Frosst, also a co-founder and former Google staffer, joined in 2020, followed by a rush of funding from technology luminaries Jeff Dean, Fei-Fei Li, Pieter Abbeel and Raquel Urtasun.

Though cloud-computing business Oracle, software company Blue Dot and note-taking app Notion made use of its LLMs, there wasn't widespread excitement around AI until November 2022, when San Francisco-based OpenAI released a chatbot that could turn simple prompts into text within seconds.

"That was a cool moment because … we used to have a lot of conversations where we would talk to people and they would say what is an LLM?" Frosst recalled.

"We don't have those conversations anymore."

But as people have learned more about the technology, many have let their minds run wild with possibilities, including the thought of AI developing superhuman intelligence.

"I think the conversation has shifted sufficiently to be distracting and it's ungrounded in the reality of the technology," Frosst said.

"That means that other conversations, like how this is going to affect the labour market or what this means for the infrastructure of the web, are difficult to have and that saddens me sometimes."

Geoffrey Hinton, often called the godfather of AI and an early Cohere investor, has said he fears the technology could lead to bias and discrimination, joblessness, echo chambers, fake news, battle robots and existential risk.

Hinton left his job at Google, where Frosst became his protégé, earlier this year so he could more freely discuss AI's dangers.

Frosst said the duo remain "quite close" but have an ongoing, "friendly disagreement" over whether AI poses an existential threat. (Frosst doesn't think it does.)

"I love that he's thinking about this. He's one of the smartest people I know. It's great to have him out there putting his honest opinions forward," Frosst said.

"It's nice to have people in my life who disagree with me."

Hinton did not comment on his rapport with Frosst or his support for Cohere.

But just because Frosst disagrees with Hinton's take on existential threats doesn't mean he is naive about AI's risks.

Cohere was among a group of signatories, including BlackBerry Ltd., OpenText Corp. and Telus Corp., that agreed to a voluntary code of conduct the federal government released in September. As part of the code, the company has promised to assess and mitigate the risks of its AI-based systems, monitor them for incidents and act on issues that develop.

Cohere also has an external advisory council designed to ensure what the company builds is deployed carefully, and a safety team that constantly thinks about how to mitigate anything that could go wrong.

"We work on trying to make sure that this tech is good at the things we would be proud of and bad at the things we wouldn't be," Frosst said.

"We try to make it well-suited to good things and poorly suited for bad things."

The bad things form an expansive list in Cohere's terms of service, including the generation of political propaganda, cyberattacks designed to interfere with servers and networks, and activities that cause serious personal injury, severe environmental or property damage or even death.

"Defamatory, bullying, harassing," behaviour as well as activities that promote violence, racism and hatred are also banned. 

Cohere is constantly evaluating whether there are things worth adding to the list, but Frosst said there haven't been any major shifts in its approach over the past few years.

However, he's proud that even having a list may shape how people think about AI's ramifications. 

"In general, I think it's great to have people think about the consequences of technology," Frosst said.

"I think people should always, especially people building technology, be thinking about how it could go wrong, and that's something we think about here all the time." 

This report by The Canadian Press was first published Dec. 13, 2023.

Tara Deschamps, The Canadian Press
