What the EU Digital Services Act (DSA) envisages and what Big Tech companies risk if they do not comply

by Vincenzo Tiani

As of Friday 25 August, the new rules on platform liability for online content, the Digital Services Act (DSA), begin to apply. This is a genuine revolution, since the previous directive, the e-Commerce Directive of 2000, was born of considerations made in the mid-1990s and had long since ceased to fit web 2.0.

To whom the Regulation applies

Unlike the previous directive, the new EU regulation applies uniformly across the EU, giving users the same rights everywhere and sparing companies from having to deal with 27 different national laws. It covers all online intermediaries, be they social networks, search engines, marketplaces, or hosting service providers. Unlike in the past, however, they will not all be treated equally: depending on their size, they will face increasingly stringent requirements.

The regulation entered into force on 16 November 2022, but it is from 25 August that the first effects will be seen, starting with the famous Big Tech. This time, 'Big Tech' is not a mere journalistic epithet but a specific list compiled by the European Commission, which has designated the VLOPs (very large online platforms) and VLOSEs (very large online search engines): those platforms and search engines exceeding 45 million monthly active users in Europe, roughly 10% of the Union's population, the quantitative criterion chosen to target the services with the greatest impact on European citizens. The list comprises the social networks Facebook, Instagram, Snapchat, TikTok, Twitter (now X), LinkedIn, Pinterest and YouTube; booking services such as Booking.com; marketplaces such as Amazon, Zalando, Google Shopping, Alibaba and AliExpress; the Apple App Store and Google Play app stores; Google Maps and Wikipedia; and finally the two most popular search engines, Google and Microsoft's Bing.
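As a rough sanity check on that threshold, here is a minimal sketch; the EU population figure is an assumption (about 448 million, the commonly cited 2022 estimate), while the 45 million threshold comes from the DSA itself:

```python
# Sanity check: 45 million monthly active users as a share of the
# EU population. The population figure is an assumed estimate.
EU_POPULATION = 448_000_000
VLOP_THRESHOLD = 45_000_000

print(f"{VLOP_THRESHOLD / EU_POPULATION:.1%}")  # -> 10.0%
```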

What’s new for Big Tech

Let's take a look at the main changes.

Content reporting

Under the previous rules, platforms were held liable for illegal content uploaded by users only if, once aware of it, they failed to remove it. That principle remains, but for Big Tech the bar is raised. First of all, they will have to set up a 'point of contact', i.e. a team dedicated to handling reports from authorities and users, backed by a simpler and more effective reporting system.

Platforms will still be able to suspend users, but only after warning them and clearly specifying why they risk suspension. It will no longer be enough to state generically that the terms and conditions have been violated: users will have to be told why a post was removed, or why its visibility or monetisation was limited.

The terms and conditions themselves will also have to be set out more clearly and simply.

Just as social platforms must police user content, marketplaces will have to check that no illegal goods are sold in their online shops.

Systemic risk

The real big news remains the analysis of systemic risk. The approach attempted by individual European states in 2019, sanctioning platforms for each illegal post not removed within a few hours, had the side effect of incentivising censorship. The DSA instead stipulates that each year the large platforms must draw up a report assessing the risks to fundamental rights, freedom of expression, public debate, and minors arising from abuse or illegitimate use of their services. Once these risks have been identified, the platforms must present measures to mitigate their impact, measures that may touch on post moderation, the algorithms that recommend some content over others, changes to terms and conditions, design changes, the advertising system, and so on. To verify that these companies have done everything possible, they may be subject to external audits, not only by the authorities but also by researchers.

In the event of imminent threats to people's health or safety in which these platforms may play a role, for example through massive disinformation campaigns, crisis protocols, i.e. emergency measures aimed at mitigating the harmful effects, will have to be activated in consultation with the Commission.

Algorithms, advertising and dark patterns

Platforms must explain which parameters their content recommendation algorithms work on: the famous 'why am I seeing this post' becomes the norm. Users will also be able to choose whether to see posts as the algorithm proposes them, i.e. in a personalised order, or chronologically, an option that leaves them less exposed to external influence, as sketched below.
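To make the idea concrete, here is a minimal sketch of such a feed toggle; the Post fields, the engagement_score signal and the build_feed function are illustrative assumptions, not any platform's actual code:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch only: the fields and the ranking rule are
# assumptions, not a real platform's recommender system.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    engagement_score: float  # stand-in for a personalised relevance signal

def build_feed(posts: list[Post], personalised: bool) -> list[Post]:
    if personalised:
        # Algorithmic order: ranked by an engagement/relevance signal.
        return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    # Chronological order: newest first, no profiling involved.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```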

To limit the influence of online advertising, companies will not be able to target ads using sensitive data such as religion, health, or sexual orientation. Likewise, minors' data may not be used to serve them personalised advertising.

Companies will also have to keep track of advertisers: for each advertisement, they must record who placed it and who paid for the sponsorship, how long it was shown, and to which group (by age, gender, interests, location) it was shown. A sketch of what such a record might contain follows.
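Here is a minimal sketch of one such record; all field names and values are illustrative assumptions, not a schema prescribed by the DSA:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of an ad-repository entry; the fields mirror the
# information the DSA asks platforms to retain, but the names are assumed.
@dataclass
class AdRecord:
    advertiser: str                    # who placed the advertisement
    payer: str                         # who paid for the sponsorship
    shown_from: date                   # first day the ad was displayed
    shown_until: date                  # last day the ad was displayed
    target_age_range: tuple[int, int]  # targeted age bracket
    target_genders: list[str]
    target_interests: list[str]
    target_locations: list[str]

record = AdRecord(
    advertiser="Example Shoes Ltd",    # fictional advertiser
    payer="Example Shoes Ltd",
    shown_from=date(2023, 9, 1),
    shown_until=date(2023, 9, 30),
    target_age_range=(18, 34),
    target_genders=["all"],
    target_interests=["running"],
    target_locations=["IT", "FR"],
)
```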

Finally, dark patterns, i.e. interface tricks that stealthily steer users towards specific choices, will be banned. Until recently they were the norm in cookie banners, where the 'accept' button was brightly coloured while the others were grey.

How companies are complying

TikTok played it safe: after a first round of changes introduced in July, it presented its final updates at the beginning of August, including easier reporting of illegal content, more information on how content is moderated and how the recommendation system works, and greater protection for minors.

A few days ago Meta announced similar changes, including the option to see Reels and Stories only from people one follows, in chronological order rather than as decided by the algorithm.

Google recalled that many of the DSA's requirements have been in place on its services for some time, and that it has improved its Transparency Centre, which collects the list of all the advertisers that show ads on Google's services.

Zalando, for its part, on 27 June formally challenged before the Court of Justice of the European Union the Commission's decision to designate it as a VLOP. The German company argues that it does not present a systemic risk and therefore should not be considered a VLOP. Amazon took similar action on the same grounds at the beginning of July.

It should be noted that, for now, the changes will in many cases affect only European users, although the hope is that these new protections and options will be extended globally. Meanwhile, the United States has for some years been discussing a regulatory update along these lines, given that its own reference law, Section 230 of the Communications Decency Act, dates back to 1996, when Google and social networks did not yet exist. In China, where with the exception of TikTok and Alibaba many of these companies do not operate, a regulation that is in some respects similar is already in force, aimed precisely at regulating platforms' use of algorithms to promote certain content.

Next steps

Meanwhile, many countries, including Italy, have yet to designate the national authority that will be responsible for monitoring and ensuring compliance with the DSA. In Italy the role could fall to AGCOM, whose experience enforcing the copyright directive makes it the authority most in line with the tasks envisaged by the DSA.

Governments cannot delay much longer: from February 2024 the DSA will also become binding on platforms with fewer than 45 million monthly users, and penalties can reach 6% of global annual turnover.
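For a sense of scale, a one-line calculation; the €100 billion turnover is a purely hypothetical figure:

```python
# Hypothetical example: maximum DSA fine for an assumed turnover.
annual_turnover_eur = 100_000_000_000  # assumed, not a real company's figure
max_fine_eur = 0.06 * annual_turnover_eur
print(f"{max_fine_eur:,.0f} EUR")  # -> 6,000,000,000 EUR
```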

Originally published in Italian on La Repubblica
