Unfortunately, there is nothing new about the fact that online abuse, often laced with racist and sexist elements, is far too common. Relatively recently, Microsoft launched new AI tools to help remedy this.
The idea is that these tools will be used for "detection and prevention of unwanted behaviour on the platform, and ultimately, ensure we continue to balance and meet the needs of our growing gaming community".
And apparently they are needed. Microsoft has now published (via Xbox Wire) its fifth Transparency Report, writing that in the first six months alone, "a total of 19M pieces of Xbox Community Standards-violating content were prevented from reaching players across text, image, and video" thanks to the AI tools.
The report also reveals other rather disturbing data showing how bad things are online in many areas, including threats, insults, scams and spam. Still, Microsoft stresses that AI does not solve everything and that "player reporting continues to be a critical component in our safety approach", so please continue to report when you see or hear violations from people who obviously lack human decency.