In March 2024, ReMeD published a report mapping EU-wide and national legislation on hate speech, artificial intelligence and disinformation. In this interview, the lead authors Anna Shavit and Kateřina Turková, researchers at Charles University (CZ), look back at the main findings and give an overview of the existing regulatory frameworks for social media platforms.

What are the main trends you have identified in the way social media are regulated in Europe?

EU Member States operate within the broad frameworks of the Digital Services Act (DSA) and the forthcoming AI Act, which differentiates their regulatory landscape from that of countries such as the UK and Norway. A key distinction lies in differing perceptions of freedom of expression, significantly influenced by historical and cultural contexts. The UK's approach in this field, for example, differs considerably from that of many EU countries. The Czech Republic, for instance, has stricter laws against promoting genocidal regimes, which are deeply embedded in its national legislation. Similarly, Norway prioritizes digital security. These cultural and historical influences have shaped each nation's legal frameworks, even where the rules extend beyond the digital sphere.

Can you elaborate on how hate speech is addressed in different countries?

Hate speech regulations also vary significantly from country to country, particularly in how they relate to national laws and the protection of freedom of speech. In many European countries, the focus is on what you cannot say, with regulations aimed at preventing harm. The UK, however, has a different tradition, with a broader spectrum distinguishing harmless from harmful speech. The situation is more complex in federal countries, such as Germany or Belgium, where different regions have their own laws and interpretations of what constitutes hate speech. It is a challenge they have to navigate carefully.

Are social media regulated in such a way as to meet the challenges posed by hate speech and disinformation in Europe? 

We understand that regulating social media is a challenging task. Although there are – and probably will continue to be – differences between Member States, we think that aiming for an overarching EU-wide framework is worth the effort. Some issues are still not fully addressed, but it is good to have at least a starting point.

Which countries have you identified as most advanced in regulating AI?

Our research did not specifically focus on the regulation of AI, as it had a wider scope. However, we found that there is no unified approach to regulating AI in the media in Europe; instead, different initiatives exist at national and supranational levels. An important player in this field is the UK, which aims to become a global AI leader. Some other countries under study are also quite active in this area: the German government, for example, published a national AI strategy in 2018 and an update in 2020, while Czech representatives in the EU were active in commenting on the AI Act proposal.

How do you think this report will contribute to the wider conversation on media regulation at EU level?

Firstly, it is crucial to acknowledge that our role is that of researchers, not regulators. Our main objective was to gather knowledge and identify examples of good practice across different countries. A significant advantage of this project lies in its comprehensive scope, encompassing data not only from EU Member States but also from the United Kingdom and Norway. We anticipate that this compilation of information on their respective regulations, laws, and standards may prove useful for future EU regulations or proposals.

Read the full report here