The interim report published by the DCMS committee of the UK parliament as part of its inquiry into fake news has cast a light not only on the role of social media platforms in spreading disinformation, but more importantly on the willingness of policymakers to take the matter into their own hands. The report issues a set of recommendations that would place strong regulatory safeguards around platforms’ activity. Assembly’s Platforms and Big Tech Tracker shows that tech companies have already taken action to address some of these problems, and have made significant investments; however, if the UK government takes the committee’s recommendations on board, those initiatives will amount to “too little, too late”. The self-regulatory approach could be off the table, in the UK at least.
Report’s Conclusions
After numerous hearings involving, among others, representatives of social media companies, the inquiry into fake news carried out by the DCMS Committee has resulted in an interim report. The report highlights the significant responsibility of social media platforms in facilitating the dissemination of misinformation, and sets out several recommendations for government which, if taken on board, would lead to heavy-handed and pervasive regulation of these companies’ activity.
The report’s recommendations develop along five pillars, as follows:
Make tech companies responsible and liable. The committee argues that a new category of tech company should be defined, one that tightens tech companies' liabilities and is not necessarily either a 'platform' or a 'publisher'. This process should establish clear legal liability for tech companies to act against harmful and illegal content on their platforms. The committee is explicit in considering tech companies not to be passive platforms on which users input content; they reward what is most engaging, because engagement is part of their business model and their growth strategy, and they have profited greatly from this model.
Impose a levy on tech companies to fund education and the Information Commissioner’s Office (ICO). This proposal identifies social media companies as the cause of a problem, and as such aims to get them to bear the cost of the solution. The committee proposes, on the one hand, a comprehensive media educational framework (which these companies should fund); and on the other hand, it requires them to pay for the expanded work and remit of the ICO, much like the banking sector pays for the Financial Conduct Authority.
Change the rules on political campaigning. One of the key issues related to the fake news inquiry was the spread of misinformation through advertising campaigns, which could also exploit the current regulatory vacuum in political advertising online. Not only does the committee call for a public register for political advertising, but also for a ban on ‘micro-targeted political advertising to Facebook lookalike audiences’. A code for advertising through social media in election periods should also be introduced, and the government should investigate ways to enforce transparency requirements on tech companies, so that political advertising data are publicly accessible.
Audit fake accounts through the Competition and Markets Authority (CMA). The committee notes that, if tech companies fail to act against fake accounts, this could not only damage the user experience, but potentially defraud advertisers who could be buying target audiences. The CMA should therefore consider conducting audits of the advertising market on social media.
Establish a Digital Atlantic Charter. This should be an instrument of international cooperation, and should establish a formal basis of cooperation with the US. While the charter would be voluntary, it would rely on a framework clearly setting out the legal obligations in adhering countries. This should facilitate alignment, if not in law, at least in what users can expect in terms of liability and protection.
The above recommendations are the result of “disturbing evidence” found by the inquiry, of activities including hacking, disinformation, and voter suppression. Tech giants like Facebook are accused of having “made it easy for developers to scrape user data” and of using it without users’ knowledge or consent. The committee also notes that companies have not been collaborative or transparent during its scrutiny.
Companies have already started to act – but it has largely been reactive rather than proactive
Tech companies will not see the DCMS committee’s recommendations as coming out of the blue. Assembly’s Platforms and Big Tech Tracker shows that the DCMS committee’s inquiry has been part of an international trend to debate and explore regulatory options, and that companies have already started to take action to tackle these problems. However, most of these initiatives were taken between late 2017 and 2018, when these problems had already come under the regulatory spotlight. In other words, the approach has been largely reactive rather than proactive.
Nonetheless, it is undeniable that companies are now doing something about it. Facebook has committed to doubling its safety and security staff from 10,000 at the end of 2017 to 20,000 by the end of 2018; Google is spending hundreds of millions of dollars on initiatives to foster quality news and journalism. That is without counting the investment in technology to take down illegal content and remove fake accounts (all three of the main social media platforms are increasingly using technology for this purpose). Even Apple, which has come out relatively untouched by the scandals and is not perceived to bear any particular responsibility for the spread of misinformation, has recently announced a special news section, curated by human editors, aiming to present users with quality information and opinion.
Similarly, companies are taking action to improve transparency and accountability in political advertising, so that the identity and location of an advertiser can be established. Facebook has been the most active on this front during 2018. All these actions show that companies are aware of the problems, and are now listening to policymakers, at least to some extent. It might be too little, too late, though.
It is unclear whether the recommendations will turn into action, but policymakers have surely lost trust in tech companies
The DCMS committee’s report, and the consequent recommendations, are the most thorough and hands-on initiative taken by any legislator so far around the issue of fake news. This is not surprising, since the committee has engaged in the lengthiest and most detailed inquiry, going well beyond generic hearings such as those carried out by the US Congress and Senate, and by the EU Parliament, on similar topics. The long list of hearings in which the committee has obtained evidence from whistleblowers, quizzed company executives, and sought advice from regulators has given it an exhaustive view of all aspects of the issue. These go well beyond the simple publishing of ‘fake news’; they are strongly linked to the misuse of personal data and, ultimately, call into question important aspects of tech companies’ business models. On the one hand, these companies have accepted political advertising on their platforms without fully considering the consequences; on the other, they have long worked to increase user engagement, thereby encouraging addiction to apps and devices.
If the committee’s proposals are taken on board by the government, the UK will become a pioneer in regulating social media, as no other country or supranational institution has intervened in this way so far; this also leaves doubts as to whether such institutions will follow a similar approach any time soon. In the US, the cultural tendency toward deregulation, together with the willingness to protect tech companies’ ability to generate value, continues to be a powerful incentive to preserve the status quo (with the possible exception of the ‘Honest Ads’ bill currently under discussion, which aims to improve online political advertising); in the EU, the current initiative on fake news has resulted in proposals for a code of practice, and that proposal is unlikely to change dramatically unless the EC gathers evidence to back the need for urgent action.
However, it is unclear whether the DCMS committee’s recommendations will be turned into anything actionable in the coming months. What is certain is that companies have been too slow to act after these problems emerged. Policymakers now find them untrustworthy, and investors are starting to doubt their ability to find new revenue streams; while this is due to more than one factor, the increased risks and costs stemming from regulation are certainly among them, especially if legislators start using the disruption brought about by social media as an argument for heavier taxation.