Recently, there have been rising concerns that the public is being deceived by the unprecedented dissemination of fake news on the internet, and on social media in particular.
The term "fake news" is a fluid concept, but it is generally understood to refer to fictitious reports designed to deliberately deceive readers in order to maximise internet traffic and profit. The term gained particular traction during the 2016 US presidential election, following claims that such reports had harmed the democratic process.
Some are taking the fight against fake news into their own hands. The Fake News Challenge is a global grassroots effort of more than 100 volunteers and 71 teams from academia and industry exploring how artificial intelligence can be used to combat fake news.
The Fake News Crackdown
Bloomberg reported in early April that Germany was getting serious about cracking down on fake news, with legislation that threatens social networks such as Facebook with fines of up to 50 million euros if they fail to give users the option to complain about hate speech and fake news, or if they refuse to remove illegal content.
Concerned by the impact of fake news in the UK, the Government announced in January 2017 that it had asked a Commons Select Committee to launch an inquiry. In doing so, Damian Collins MP, Chair of the Committee, specifically called on tech companies to help address the issue.
Many tech companies are already looking to combat the dissemination of fake news on their platforms. Several, for instance, have started to introduce preventative measures such as content filtering and fact verification.
In October 2016, Google announced it was introducing Fact Check, an automated news verification feature to help readers determine whether a news story is accurate by the addition of a label to show that the content has been independently verified.
Facebook has also made a string of similar announcements. In January 2017, it announced the introduction of a fake news filtering service in Germany. Then, in March 2017, it was widely reported that Facebook had started to introduce fact verification features to certain users elsewhere. And in the first week of April 2017, Facebook released a number of fake news tools designed with the legal safe harbours in mind.
If these features prove successful, they could be rolled out globally. Other tech companies may also be inclined to follow suit by automatically blocking certain content from being uploaded, or even using artificial intelligence to identify and remove fake news.
However, there are potential legal implications for tech companies looking to introduce such measures. Currently, tech companies can rely on certain safe harbours to avoid financial liability for unlawful content (including fake news) uploaded to their platforms by users. Safe harbours shield tech companies from liability in relation to certain passive transmissions as well as caching and hosting activities.
To benefit from the safe harbours, a tech company must not be involved with the content transmitted or possess knowledge of the content's unlawfulness. In practice, this means that it must not select or modify content uploaded by its users, and must remove such content expeditiously upon becoming aware of unlawful activity.
There is uncertainty as to whether the introduction of features aimed at preventing the spread of fake news might inadvertently remove the benefit of the safe harbours. This is because features like content filtering or fact verification could prevent tech companies from meeting the necessary requirements.
A tech company that filters or blocks fake news could, for instance, be deemed to have selected the information uploaded onto its platform. Likewise, a company that verifies content may be deemed to have gained awareness of facts or circumstances from which unlawfulness is apparent.
If a tech company is unable to avail itself of the relevant safe harbour when pursued for damages, its liability is potentially unlimited. Given the volume of content uploaded by users to internet platforms each day, tech companies could face a surge in legal claims and substantial financial liability, whether because some unlawful content slips through the net or because a decision is taken not to remove it for fear that doing so might breach a user's right to freedom of expression.
Since the financial consequences of such claims could be severe, the resulting uncertainty might ultimately discourage tech companies from tackling the dissemination of fake news on their platforms.
The information in this blog post is provided for general informational purposes only, and may not reflect the current law in your jurisdiction. No information contained in this post should be construed as legal advice from JAG Shaw Baker or the individual author, nor is it intended to be a substitute for legal counsel on any subject matter.
The post was written by Ben Williams, Associate at JAG Shaw Baker.