

By Marcus Herring, The Pollack Group Intern

“Fake news” is a term that has spread like wildfire, partially stemming from Facebook’s role in the 2016 presidential election. The social media platform has been under scrutiny from both the government and its users for doing little to prevent the spread of misinformation and biased, illegitimate news. The issue gained global attention when Russian operatives interfered on the platform during that election, and since then, people have come to realize the danger of fake news and how virally it can spread.

As Facebook now reaches more than 2.2 billion active users, it should come as no surprise that this Fortune 500 company has had trouble combating unwanted content.

Facebook has been working on fixes for some time. It was the first social media platform to let users report posts that may contain false information. When a post is repeatedly flagged, it appears less often in news feeds and carries a warning that states: “Many people on Facebook have reported that this story contains false information.” A simplified version of that logic is sketched below.
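Facebook’s actual ranking system is proprietary, so the following is only a minimal, hypothetical sketch of flag-based demotion; the function name, threshold, and demotion factor are all invented for illustration.

```python
# Hypothetical sketch of flag-based demotion. Facebook's real ranking
# system is proprietary; the names and thresholds here are invented.

FLAG_THRESHOLD = 10       # user reports needed before a post is demoted
DEMOTION_FACTOR = 0.2     # multiplier applied to the post's feed score
WARNING_TEXT = ("Many people on Facebook have reported that this story "
                "contains false information.")

def rank_post(base_score, flag_count):
    """Return the adjusted feed score and an optional warning label."""
    if flag_count >= FLAG_THRESHOLD:
        return base_score * DEMOTION_FACTOR, WARNING_TEXT
    return base_score, None

score, warning = rank_post(base_score=0.87, flag_count=14)
print(score, warning)  # demoted score (0.174) plus the warning banner text
```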

Early this month, CEO Mark Zuckerberg announced that Facebook is buying British artificial intelligence firm Bloomsbury AI to help take on the fake news issue that has plagued the company.

The buyout of the London-based company, which specializes in natural language processing, is reported to cost approximately $30 million. Facebook’s goal with the acquisition is AI software that understands images, videos, and text well enough to effectively moderate its platforms, including Facebook and Instagram.

Building an automated fake news detector raises a number of challenges. AI has proven beneficial in industries such as manufacturing, but it becomes far less effective when asked to understand human communication the way humans do.

AI technology can perform basic analyses of a post, such as extracting simple facts or judging whether it is written in a positive or negative tone. However, some topics today confuse even humans, so how can AI software be taught to make a call that we ourselves can’t?
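To make the “basic analysis” claim concrete, here is a minimal lexicon-based tone check. Production systems use trained models rather than word lists; the vocabulary and scoring rule here are invented purely for the example.

```python
# A toy lexicon-based tone classifier, illustrating the kind of shallow
# analysis described above. The word lists and scoring are invented.

POSITIVE = {"good", "great", "win", "success", "true", "honest"}
NEGATIVE = {"bad", "terrible", "lose", "failure", "fake", "hoax"}

def tone(text):
    """Label text by counting positive versus negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(tone("This fake story is a terrible hoax"))  # -> "negative"
```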

An example would be the various political topics that lack a consensus opinion, such as global warming: one side believes in its merits, while the other dismisses its claims as “fake news.” How can you teach a machine to make that judgment for us? This is where things become much more complicated.
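One common workaround, sketched here hypothetically rather than as Facebook’s stated approach, is to let the model abstain: when its confidence falls below a threshold, the post is routed to a human reviewer instead of being auto-labeled. The threshold and labels below are invented for illustration.

```python
# Hypothetical abstention rule: auto-label only high-confidence calls,
# escalate contested cases to human reviewers. Values are invented.

CONFIDENCE_THRESHOLD = 0.9

def moderate(label, confidence):
    """Apply the model's label only when it is highly confident."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-label: {label}"
    return "escalate to human review"

print(moderate("false information", 0.97))  # confident -> auto-label
print(moderate("false information", 0.55))  # contested -> human review
```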

Facebook executives Rob Goldman and Alex Himel acknowledge that this isn’t a cure-all for preventing abuse in elections, but they are hopeful it is as good a place as any to start.

Zuckerberg also weighed in with a Facebook post: “These steps, by themselves, won’t stop people trying to game the system. But they will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads.”

For more agency insights, visit our WellRed archives.