OpenAI has revealed it reviewed whether to alert Canadian authorities about a user months before a deadly school attack.
The company had flagged the account of Jesse Van Rootselaar in June over possible “furtherance of violent activities”.
After an internal assessment, OpenAI decided the activity did not meet its threshold for referral to law enforcement, which requires evidence of an imminent and credible risk of serious harm.
The company said that, at the time, it saw no clear sign an immediate attack was being planned.
The 18-year-old later killed eight people in British Columbia before taking their own life.
The victims included a teaching assistant and several students.
The Royal Canadian Mounted Police said the suspect had prior mental-health-related contact with officers.
OpenAI said it contacted the RCMP after the shooting and shared relevant account information.
The company added that it would continue to assist the investigation.
The case has renewed debate about when technology firms should alert authorities.
It also highlights the difficulty of judging risk without clear evidence of an imminent threat.
