Facebook is launching new tools that let group admins monitor their communities, giving them a clearer view of the posts and members within them. An AI-powered feature will help admins identify “contentious or unhealthy conversations” happening in the comments section.
The feature that deals with conflicts in Facebook groups is called Conflict Alerts. It resembles the app’s existing Keyword Alerts feature, which lets admins create custom alerts whenever questionable or unhealthy comments appear. Conflict Alerts will use machine learning models to spot trouble and notify the admin. Once alerted, admins can act immediately by deleting comments, booting users from the group, limiting how often certain members can comment, or capping how many comments can be made on particular posts.
The tools will also help admins automate content moderation: banning certain links to cut down on unwanted promotional content, restricting recently joined users, and reducing spam.
Meanwhile, a report in The Verge noted that exactly how the feature works is still unclear. When reached for comment by the outlet, the US-based company did not explain how the feature will identify “contentious or unhealthy conversations.”
However, a company spokesperson said Facebook would use machine learning models that look at “multiple signals such as reply time and comment volume to determine if the engagement between users has or might lead to negative interactions.”
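To make those two named signals concrete, here is a minimal sketch of how a heuristic detector might combine reply time and comment volume into a single score. Everything below (the function, thresholds, and weighting) is invented for illustration; Facebook has not published how its actual model works.

```python
# Hypothetical conflict scorer based on the two signals Facebook named:
# reply time and comment volume. All thresholds are illustrative guesses.

def conflict_score(reply_gaps_sec, comments_per_min):
    """Return a 0-1 score; higher suggests a more heated thread.

    reply_gaps_sec: seconds between consecutive replies (shorter = hotter)
    comments_per_min: recent comment volume (higher = hotter)
    """
    if not reply_gaps_sec:
        return 0.0
    avg_gap = sum(reply_gaps_sec) / len(reply_gaps_sec)
    # Fast back-and-forth replies (average gap under ~60s) push the score up.
    speed = min(1.0, 60.0 / max(avg_gap, 1.0))
    # High volume (around 10+ comments per minute) also pushes it up.
    volume = min(1.0, comments_per_min / 10.0)
    return 0.5 * speed + 0.5 * volume

# A slow, low-volume thread scores low; a rapid-fire thread scores high.
calm = conflict_score([300, 420, 600], comments_per_min=0.5)
heated = conflict_score([5, 8, 3], comments_per_min=20)
```

In a real system a score above some threshold would trigger the alert to admins, who could then apply the moderation actions described above.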
One can assume that Conflict Alerts uses AI systems similar to those Facebook already employs to flag abusive speech on the site.
Facebook added that Conflict Alerts is still in testing. A release date for the feature has not yet been disclosed.