Tech community weighing how to balance AI's risks and rewards at Elevate conference

A metal head made of gears symbolizes artificial intelligence, or AI, at the Essen Motor Show in Essen, Germany on Nov. 29, 2019. THE CANADIAN PRESS/AP/Martin Meissner

TORONTO — Members of Canada’s tech community are concerned about how the country will rein in the risks of artificial intelligence without stifling innovation.

As they gathered in Toronto for the annual Elevate tech conference, much of their chatter focused on the technology's great promise, but many said they also feared that over-regulating AI would put the nation behind counterparts hurtling toward adoption without guardrails.

“I'm a little bit afraid of just putting the brakes on because while we might want to put the brakes on, other places aren't putting the brakes on and I feel that that's going to create an adoption gap that we can't afford to lose,” said Joel Semeniuk, chief strategy officer of Waterloo, Ont., tech hub Communitech at a breakfast adjacent to the conference.

“I actually feel like we need to go all in but with all of the regulatory perceptions in place at the same time.”

Semeniuk’s remarks come as the world nears one year since the debut of ChatGPT, a generative AI chatbot capable of humanlike conversations and tasks that was developed by San Francisco-based OpenAI. A new iteration of the technology with voice and image capabilities was released this month.

ChatGPT’s advent kickstarted an AI race in which companies as big as Google and Microsoft revved up their AI efforts and began pouring billions of dollars into the sector, hoping to drive bigger advances in the technology’s capabilities and adoption.

But as the flurry around AI got underway, concerns loomed large. Many of tech’s biggest champions, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and so-called “AI godfather” Geoffrey Hinton, warned of the technology’s risks and proposed slowing the pace of its development.

On a visit to Toronto in the summer, Hinton said he was concerned the technology could entrench bias and discrimination and lead to joblessness, echo chambers, fake news, battle robots and existential risk.

Semeniuk sees both sides. The startups he works with view AI with “tremendous excitement and tremendous trepidation.”

Many are not averse to regulation, but the shape those policies should take is hard to decipher because the industry is evolving so quickly and much of its potential is still unrealized.

“So I do believe in regulation, but I also don't believe that it should be an all or nothing,” Semeniuk said.

“We have no idea what we're regulating and how to regulate it without understanding the use cases that come out of it.”

The federal government has long had its eye on introducing AI guardrails, but has not moved as rapidly as the technology.

Innovation Minister François-Philippe Champagne unveiled a voluntary code of conduct for generative AI only on Wednesday, at the All In tech conference in Montreal.

Adopters of the code agree to a slew of promises including screening datasets for potential biases and assessing any AI they create for “potential adverse impacts.” Toronto-based AI darling Cohere and Waterloo, Ont., software company OpenText have already agreed to the terms.

As for legislation, the federal government tabled a bill in June taking a general approach to AI regulation, but left many of the details for a later date. It is expected to be implemented no earlier than 2025.

In the absence of Canadian AI regulation, Muxin Ma said companies are letting their own sensibilities guide them.

For example, clients of her company, Pontosense, which pairs AI with a coin-sized sensor that monitors biometrics for use in the auto and health care sectors, are mindful of data protection, what information gets stored in the cloud and what systems their materials might help to train.

Meanwhile, Google has adopted its own principles, which vow the company won’t dabble in AI that can be used to inflict harm.

Others are counting on jurisdictions abroad to set the tone.

“Europe is the fastest for sure, but right now it's a little bit ambiguous,” Ma said.

The European Union is advancing toward a legal framework for AI that “proposes a clear, easy to understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk.”

Its legislation is expected to address subliminal, manipulative and deceptive AI techniques, the technology’s potential to exploit vulnerabilities, biometric systems, and the use of AI to infer emotions in law enforcement and workplace settings.

Carole Piovesan, a managing partner at INQ Law, worries about the pace of Canada’s approach to AI.

She recently spoke with a machine learning researcher who argued that regulation has to be realistic about the technology’s current state, because the futuristic world of AI that many foresee has not yet arrived.

Piovesan, however, countered that “we take forever to get there.”

“If we can’t start to forecast what the crystal ball looks like, then we’re never going to keep pace,” she said on the Elevate stage.

This report by The Canadian Press was first published Sept. 27, 2023.

Tara Deschamps, The Canadian Press
