Facebook is doing its best to explain artificial intelligence through a new video series, but the social network wants to put AI to use for policing livestreams as well. The company’s director of applied machine learning Joaquin Candela told Reuters that Facebook is working on a tool that automatically flags video streams for nudity, violence and any other content that violates the site’s policies. Right now, the feature is being tested on Facebook Live.
Rather than relying on users to report offensive content that's then checked by Facebook employees, the system would detect any offensive material on its own. According to a Reuters report in June, the company is also working on a way to flag extremist content, like violent photos or videos, but adding livestreams to the mix would require a lot more effort.
A debate over the nature of the streaming tool was sparked by a Facebook Live broadcast that showed Antonio Perkins as he was shot and killed in Chicago. The company left the video up on its site with a graphic content warning attached to it because the footage served as an example of the real consequences of violence. Facebook said that it would continue to remove any clips that sensationalize violent acts.
As Candela explained, there are two major challenges to policing live video. “Your computer vision algorithm has to be fast, and I think we can push there,” Candela said. “The other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.”
Facebook is one of a number of companies dealing with the rise of fake news stories. Last month, CEO Mark Zuckerberg said that in addition to easier methods for users to report a hoax, the company is working on better detection of those links before they even make it into your News Feed.