Why forcing AI firms to report online threats may not be simple
A cybersecurity law expert says Canada could introduce laws requiring artificial intelligence companies to notify police of online threats, but the process would not be simple, since reporting every suspicion is “just not workable.”
Emily Laidlaw, a Canada Research Chair in cybersecurity law at the University of Calgary, said every AI company sets its own policy on when to inform police about online activity, and that Canada has considered such laws in the past but did not follow through.
The issue is under scrutiny again in the wake of the mass killings in Tumbler Ridge, B.C., by a shooter who was banned by OpenAI from its ChatGPT platform at least seven months ago.
But the firm did not inform police about Jesse Van Rooteslaar's problematic behaviour until after the Feb. 10 killings. OpenAI has been called to Ottawa to meet with federal Artificial Intelligence Minister Evan Solomon on Tuesday to explain its safety procedures and decisions.