Facebook wants to automatically flag offensive material in live video streams.
The company's director of applied machine learning said this builds on a growing effort to use artificial intelligence to monitor content.
The social media company has faced several content-moderation controversies this year, including removing an iconic Vietnam War photo over nudity and allowing fake news to spread on its site.
Facebook wants to employ artificial intelligence to find offensive material.
The AI uses an algorithm that detects nudity, violence, or anything else not in accordance with the company's policies.
The automated system is being tested on Facebook Live, though it remains in the research stage.
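Facebook has not published details of its system, but the general approach it describes can be sketched as a pipeline that samples frames from a live stream, scores each frame with a classifier for policy-violating categories (such as nudity or violence), and flags frames whose score crosses a threshold. The sketch below is purely illustrative: `score_frame` stands in for a trained vision model, and all names and thresholds are assumptions, not Facebook's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Flag:
    """A frame flagged for human review."""
    frame_index: int
    label: str
    score: float


def moderate_stream(
    frames: List[bytes],
    score_frame: Callable[[bytes], Dict[str, float]],
    threshold: float = 0.8,
    sample_every: int = 30,
) -> List[Flag]:
    """Sample every Nth frame and flag any policy-violation score
    at or above the threshold. `score_frame` maps a frame to
    per-category probabilities (hypothetical interface)."""
    flags: List[Flag] = []
    for i in range(0, len(frames), sample_every):
        for label, score in score_frame(frames[i]).items():
            if score >= threshold:
                flags.append(Flag(i, label, score))
    return flags


# Stub scorer for demonstration only; a real system would run a
# trained image classifier here.
def dummy_scorer(frame: bytes) -> Dict[str, float]:
    return {"violence": 0.9 if frame == b"bad" else 0.1}


flagged = moderate_stream([b"ok", b"bad", b"ok"], dummy_scorer, sample_every=1)
```

In practice the hard problems are elsewhere: doing this at the scale of millions of concurrent live streams, keeping false positives low, and routing borderline frames to human reviewers fast enough to act on a live broadcast.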