Disinformation

AI, disinformation and elections in 2024

As more than 60 countries head to the polls, regulators and tech firms alike are concerned with the impact AI could have on the spread of disinformation. We consider the experience in elections so far and the policy response aimed at securing votes.

Telecoms and Big Tech under a Biden administration

The Biden administration is expected to bring significant change for telecoms and Big Tech. Overhauling broadband policy to foster competition and close the gap between urban and rural areas, restoring net neutrality rules, and continuing restrictions on Chinese equipment vendors all seem likely.

The Fake News inquiry is over, but there’s no legislation in sight

The UK Parliament's Digital, Culture, Media and Sport (DCMS) Committee has now completed its inquiry into fake news, which lasted throughout 2018. The inquiry began as an investigation into the spread of disinformation and its role in influencing elections, and soon turned to the link between tech companies' practices and the protection of citizens' personal data.

Summary of the global inquiries into the spread of misinformation (and data privacy)

This note will be updated as and when witnesses appear before the various committees addressing the topic of misinformation and the use of personal data.

Far-reaching regulation of social media in the UK draws closer

The interim report published by the DCMS Committee of the UK Parliament has cast light not only on the role of social media platforms in spreading disinformation, but more importantly on policymakers' willingness to take the matter into their own hands. The report sets out recommendations that would place strong regulatory safeguards around platforms' activity. If the UK government takes the committee's recommendations on board, the self-regulatory approach could be off the table, in the UK at least.

Tech companies should take down illegal content in one hour

The European Commission today issued a set of "operational measures" to tackle illegal content online, including terrorist content and hate speech. Tech companies are recommended to follow a "one-hour rule" for taking down terrorist content and to implement faster detection systems, including automated ones. Detection tools should also be shared with smaller companies. Businesses will have to report to the Commission on their compliance with the Recommendation within three months.

European Commission still unclear on how to tackle fake news

On 27 February 2018, the European Commission held its second multi-stakeholder meeting on the problem of fake news. The meeting is part of a series of events, and of a comprehensive initiative the Commission is taking to address the issue. During the event, it became clear that the Commission's position is still far from settled. Meanwhile, the advertising industry is advocating light-touch regulation and defending its ability to self-regulate.

Policymakers turn their attention to fake news, hate speech and addiction

Tomorrow, the European Commission will host a colloquium in Brussels on the issue of "fake news", bringing together experts and industry representatives. This is very likely another step toward legislative intervention, which could come in the form of a Commission Recommendation.

DCMS committee unhappy with social media’s approach to fake news

The hearing held by the UK's Digital, Culture, Media and Sport (DCMS) parliamentary committee on 8 February 2018 in Washington DC, with executives from three of the main global social media platforms (Google, Twitter, Facebook), showed how far apart tech companies and regulators and policymakers currently stand on tackling "fake news", hate speech, and other related illegal activity.