As we continue our mission at Xbox to bring the joy and community of gaming to even more people, we remain committed to protecting players from disruptive online behavior, creating experiences that are safer and more inclusive, and continuing to be transparent about our efforts to keep the Xbox community safe.
Our fifth Transparency Report highlights some of the ways we’re combining player-centric solutions with the responsible application of AI to amplify our human expertise in detecting and preventing unwanted behaviors on the platform, and ultimately, to balance and meet the needs of our growing gaming community.
From January 2024 to June 2024, we focused our efforts on blocking disruptive messaging content from non-friends and on detecting spam and advertising, launching two AI-enabled tools that reflect our multifaceted approach to protecting players.
Among the key takeaways from the report:
Balancing safety and authenticity in messaging: We introduced a new approach to detect and intercept harmful messages between non-friends, contributing to a significant rise in disruptive content prevented. From January to June, a total of 19M pieces of Xbox Community Standards-violating content were prevented from reaching players across text, image, and video. This new approach balances two goals: safeguarding players from harmful content sent by non-friends, while still preserving the authentic online gaming experiences our community enjoys. We encourage players to use the New Xbox Friends and Followers Experience, which gives players more control and flexibility when connecting with others.
Safety boosted by player reports: Player reporting continues to be a critical component of our safety approach. During this period, players helped us identify an uptick in spam and advertising on the platform. We are constantly evolving our strategy to prevent the creation of inauthentic accounts at the source, limiting their impact on both players and the moderation team. In April, we took action on a surge of inauthentic accounts (1.7M cases, up from 320k in January) that were affecting players in the form of spam and advertising. Players helped us identify this surge and its pattern by reporting Looking for Group (LFG) messages. Player reports doubled to 2M for LFG messages and were up 8% to 30M across content types compared to the last transparency report period.
Our dual AI approach: We released two new AI tools built to support our moderation teams. These innovations not only prevent the exposure of disruptive material to players but also allow our human moderators to prioritize their efforts on more complex and nuanced issues. The first of these new solutions is Xbox AutoMod, a system that launched in February and assists with the moderation of reported content. So far, it has handled 1.2M cases and enabled the team to remove content affecting players 88% faster. The second AI solution, launched in July, proactively works to prevent unwanted communications. We have directed these solutions to detect spam and advertising, and will expand them to prevent more harm types in the future.
Underpinning all these new advancements is a safety system that relies on both players and the expertise of human moderators to ensure the consistent and fair application of our Community Standards, while improving our overall approach through a continuous feedback loop.
At Microsoft Gaming, our efforts to drive innovation in safety and improve our players’ experience also extend beyond the Transparency Report:
Prioritizing Player Safety with Minecraft: Mojang Studios believes every player can play their part in keeping Minecraft a safe and welcoming place for everyone. To help with that, Mojang has released a new feature in Minecraft: Bedrock Edition that sends players reminders about the game’s Community Standards when potentially inappropriate or harmful behavior is detected in text chat. This feature is intended to remind players on servers of the expected conduct and create an opportunity for them to reflect and change how they communicate with others before an account suspension or ban is required. Elsewhere, since the Official Minecraft Server List launched a year ago, Mojang, in partnership with GamerSafer, has helped hundreds of server owners improve their community management and safety measures. This has helped players, parents, and trusted adults find the Minecraft servers committed to the safety and security practices they care about.
Upgrades to Call of Duty’s Anti-Toxicity Tools: Call of Duty is committed to fighting toxicity and unfair play. To curb disruptive behavior that violates the franchise’s Code of Conduct, the team deploys advanced tech, including AI, to empower moderation teams and combat toxic behavior. These tools are purpose-built to help foster a more inclusive community where players are treated with respect and are competing with integrity. Since November 2023, over 45 million text-based messages have been blocked across 20 languages, and exposure to voice toxicity has dropped by 43%. With the launch of Call of Duty: Black Ops 6, the team rolled out support for voice moderation in French and German, in addition to existing support for English, Spanish, and Portuguese. As part of this ongoing work, the team also conducts research on prosocial behavior in gaming.
As the industry evolves, we continue to build a gaming community of passionate, like-minded and thoughtful players who come to our platform to enjoy immersive experiences, have fun, and connect with others. We remain committed to platform safety and to creating responsible AI by design, guided by Microsoft’s Responsible AI Standard and through our collaboration and partnership with organizations like the Tech Coalition. Thank you, as always, for contributing to our vibrant community and for being present with us on our journey.
Some additional resources: