The new AI-powered tool, called Conflict Alerts, is meant to give group administrators more control over their communities. The AI will notify an admin when it spots “contentious or unhealthy conversations” so the admin can take needed action more quickly. For the average Facebook user, the new tool should mean less bickering and fewer arguments in the posts of Groups you follow, and less negativity overall.

Admins can use Conflict Alerts to set up alerts that trigger when users comment with specific words or phrases; machine learning spots these instances and notifies an admin when they occur. The tool also lets admins limit activity for specific people and posts. Facebook told The Verge that the machine learning would use “multiple signals such as reply time and comment volume to determine if engagement between users has or might lead to negative interactions.”

Facebook already uses AI tools to flag other types of content on its platform, including hate speech. According to its August 2020 Community Standards Enforcement Report, Facebook’s AI tool for hate speech was 95% accurate, up from 89% in the first quarter of 2020. Facebook said its actions against hate speech content rose from 9.6 million instances in the first quarter of 2020 to 22.5 million in the second quarter.

The social network is also working on AI technology that can “see” photos: the model trains itself on raw image data, independently and without labeled examples, improving as it views more images. The AI project, known as SEER, could pave the way for more versatile, accurate, and adaptable computer-vision models, while bringing better search and accessibility tools to social media users.