Published: Wed, May 16, 2018
Tech | By Amelia Peters

Facebook shut 583 million fake accounts in first 3 months of 2018

Facebook also prohibits hate speech and said it took action against 2.5 million pieces of such content in the first quarter, up 56 percent from the previous quarter.

The firm disabled about 583 million fake accounts, most of them within minutes of registration.

The report covers Facebook's enforcement efforts between October 2017 and March 2018, across six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

Facebook said Tuesday it took down 21 million "pieces of adult nudity and sexual activity" in the first quarter of 2018, and that 96 percent of that was discovered and flagged by the company's technology before it was reported.

Zuckerberg noted that there is still room for improvement with Facebook's AI tools, particularly in flagging hate-speech content. Hate speech is hard to flag using AI because it "often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards", according to the report.

For graphic violence, Facebook took down or applied warning labels to about 3.5 million pieces of violent content during the period - 86 percent of which was identified by Facebook technology before it was reported. While the company seems to be very proficient at removing nudity and terrorist propaganda, it's lagging behind when it comes to hate speech.

Guy Rosen, Facebook's vice-president of product management, said making the figures public would "push" the company to improve more quickly.

"We have a lot of work still to do to prevent abuse". In addition, Facebook stated that from the remaining accounts, a mere three to four percent were fake. However for hate speech, Rosen said, "our technology still doesn't work that well".

Improved artificial-intelligence technology helped it act on 3.4 million posts containing graphic violence, almost three times as many as in the last quarter of 2017.

Facebook said it released the data not to brag but so that users can judge its performance for themselves. More generally, the report noted, the technology needs large amounts of training data to recognize meaningful patterns of behavior, data the company often lacks in less widely used languages or for cases that are not often reported.

The numbers were disclosed in a report Tuesday that breaks down how much material Facebook removes for violating its terms of service.

It claimed to detect nearly 100 percent of spam and said it removed 837 million pieces of spam content over the same period. Zuckerberg added that the company would notify users if their data were compromised.
