After reading that, you're probably wondering: these companies are tackling toxicity? Really?
Ubisoft and Riot have their own histories of alleged inappropriate behaviour within their company cultures.
The Zero Harm in Comms project began last July.

Jacquier and Kerr established two objectives for the research.
The first of these is to create a GDPR-compliant data-sharing framework that protects privacy and confidentiality.
The second is to use the data gathered to train cutting-edge algorithms to detect toxic content more reliably.

Riot feel that working with Ubisoft broadens what they can hope to achieve through the research, Kerr tells me.
I asked Jacquier and Kerr to define what they'd consider to be disruptive behaviour in chat.
Jacquier tells me that context is key.

Kerr points out that behaviours can vary across cultures, regions, languages, genres, and communities.
As stated, the project revolves around AI and around improving its ability to interpret human language.
"Traditional methods offer full precision but are not scalable," Jacquier tells me.

"AI is way more scalable, but at the expense of precision."
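To make the tradeoff Jacquier describes concrete, here's a minimal sketch of my own, not code from the project; the blocklist and the toy scoring function are hypothetical stand-ins. A rule-based filter only fires on exact matches, so it's precise but misses novel or obfuscated abuse, while a learned classifier scores any message at scale but can misjudge in either direction.

```python
# Illustrative sketch of the precision-vs-scale tradeoff; not Ubisoft/Riot code.
# BLOCKLIST and the toy scoring heuristic below are hypothetical stand-ins.

BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens for banned terms

def rule_based_flag(message: str) -> bool:
    """Precise but brittle: flags only exact blocklist hits,
    so obfuscations like 's1ur1' or novel insults slip through."""
    return any(token in BLOCKLIST for token in message.lower().split())

def model_toxicity_score(message: str) -> float:
    """Stand-in for a trained classifier: returns a probability-like
    score for any input, including text no hand-written rule anticipated."""
    # A real system would call a model here; this toy heuristic just
    # treats shouting (capitals and exclamation marks) as a proxy.
    shouting = sum(c.isupper() for c in message) / max(len(message), 1)
    return min(1.0, shouting + 0.2 * message.count("!"))

def ml_flag(message: str, threshold: float = 0.7) -> bool:
    """Scalable but fallible: covers every message, at the cost of
    false positives and negatives depending on the threshold."""
    return model_toxicity_score(message) >= threshold

if __name__ == "__main__":
    for msg in ["gg wp", "UNINSTALL NOW!!!", "you are a slur1"]:
        print(f"{msg!r}: rules={rule_based_flag(msg)}, ml={ml_flag(msg)}")
```

The rule-based path never misfires on clean chat but only catches what someone thought to list; the classifier path covers everything, which is exactly where the "expense of precision" shows up.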
Jacquier assures me that privacy is a core tenet of the research.
Of course, another elephant in the room here is the player.

I asked Jacquier and Kerr how they thought players would react to AI judging their in-game convos.
Jacquier acknowledged that it's just a first step to tackling toxic spaces in the industry.
Maybe players could just try being nice to each other, as former Overwatch director Jeff Kaplan once suggested?
"We'll ship it to players as soon as we can," Kerr tells me.
Will we have a successful framework to enable cross-industry data sharing?
Will we have a working prototype?
Regardless of how the project turns out, the companies say they'll be sharing their findings next year.