Why we should all be working on our DSA readiness
Don't think you need content moderation tech? Think again.
The week before Christmas was an interesting one in EU digital regulation, if only because of the made-for-clickbait announcement from the European Commission that three companies have been added to the list of designated Very Large Platforms (VLOPs): Xvideos, Pornhub and Stripchat.
That brings to 19 the number of companies that will face the most stringent requirements to police content under the far-reaching Digital Services Act (DSA). Not only will they have to report their user numbers, they will also have to assess the systemic risks they create (especially for children), allow access to independent auditors and external researchers, and offer at least one recommender system option that is not based on user profiling.1
The EU has been both commended and criticised for drawing a distinction between the Goliaths (most of which are US-based), and everyone else. This approach was a direct response to findings that GDPR had disproportionately impacted smaller companies and—in effect—made it harder for them to compete.
But with all the focus on VLOPs, we seem to have forgotten that most digital services of any size will face critical new obligations from 17 February this year. Especially around moderation of user-generated content (UGC).
(The following is most definitely not legal advice.)
Under the DSA’s rules, if there is any UGC on your platform—whether it’s comments or reviews, chat messages, content in files people exchange or post, live voice communications, or 3D creations—you have to implement means of detecting, flagging and removing illegal content.2 If you are a marketplace for products, you must have a process to identify and remove illegal goods, including counterfeits.
This could prove a real headache for many growth-stage and midmarket companies that have users or sell products in the EU. While a lot of the technical components for content moderation and user reporting workflows exist, they still need to be cobbled together in a way that covers all the DSA requirements. You’ll need at least these features (appropriately localised to the 27 EU member states…), roughly sketched in code after the list:
Moderation / filtering to detect illegal content (and a way to publish an annual transparency report on this process)
Mechanism for users to report illegal content
Ability to remove identified or reported content
Ability to provide a notice to users explaining why content has been removed, and to enable them to appeal your decision
A process in place to notify law enforcement if you become aware of a potential criminal offence or a threat to someone’s life or safety
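To make that concrete, here is a minimal sketch (TypeScript, with entirely hypothetical names) of the notice-and-action shape those features imply: a report comes in, a decision is taken, the affected user gets a statement of reasons and an appeal route, urgent threats are escalated, and everything feeds the transparency report. Treat it as an illustration of the workflow, not a compliance recipe.

```typescript
// Illustrative only: every name here is hypothetical, and the logic stands in
// for real review queues, localisation, audit logging and legal advice.

type ModerationAction = "remove" | "restrict" | "keep";

interface ContentReport {
  id: string;
  contentId: string;
  reporterId?: string;   // reports can come from any user or a trusted flagger
  explanation: string;   // why the reporter believes the content is illegal
  createdAt: Date;
}

interface ModerationDecision {
  reportId: string;
  action: ModerationAction;
  basis: string;         // legal/policy ground cited in the statement of reasons
  decidedAt: Date;
}

class NoticeAndActionWorkflow {
  private reports: ContentReport[] = [];
  private decisions: ModerationDecision[] = [];

  // 1. A user flags content they believe is illegal.
  submitReport(report: ContentReport): void {
    this.reports.push(report); // queued for human and/or automated review
  }

  // 2. A decision is taken; anything other than "keep" triggers a statement of
  //    reasons to the author, with an appeal route.
  applyDecision(decision: ModerationDecision, authorContact: string): void {
    this.decisions.push(decision);
    if (decision.action !== "keep") {
      console.log(`Content actioned (${decision.action}), basis: ${decision.basis}`);
      console.log(`Statement of reasons and appeal link sent to ${authorContact}`);
    }
  }

  // 3. Suspected threats to life or safety get escalated to the authorities.
  escalateIfUrgent(report: ContentReport, looksLikeThreat: boolean): void {
    if (looksLikeThreat) {
      console.log(`Escalating report ${report.id} to law enforcement`);
    }
  }

  // 4. Aggregate numbers feed the annual transparency report.
  transparencySummary(): Record<ModerationAction, number> {
    const summary: Record<ModerationAction, number> = { remove: 0, restrict: 0, keep: 0 };
    for (const d of this.decisions) {
      summary[d.action] += 1;
    }
    return summary;
  }
}
```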
That’s the technology & process bit. Of course you will first have to come up with a content policy that both satisfies the DSA definition3 and matches the context of your UGC.
And that’s not all. You will also have to demonstrate that your privacy and security mechanisms were designed specifically to protect minors (including not serving them profile-based ads4), and that your interfaces are not deceptive and do not use ‘dark patterns’.
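On the advertising point, the simplest reading of the rule is a blunt guard: if you are reasonably certain the user is a minor, skip profiling-based ads altogether and fall back to contextual ones. A toy sketch of that decision (hypothetical names, simplified age signal):

```typescript
// Hypothetical guard: when we are reasonably certain the user is a minor,
// never build the ad on personal-data profiling; use contextual ads instead.

interface AdRequest {
  userId: string;
  pageContext: string;   // e.g. the category or article being viewed
  likelyMinor: boolean;  // output of whatever age-assurance signal you rely on
}

type AdStrategy = "contextual_only" | "profiling_allowed";

function selectAdStrategy(req: AdRequest): AdStrategy {
  return req.likelyMinor ? "contextual_only" : "profiling_allowed";
}

console.log(selectAdStrategy({ userId: "u1", pageContext: "sports", likelyMinor: true }));
// -> "contextual_only"
```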
Today the market for tools and services that can help with content moderation is fragmented. There are plenty of vendors jumping on the double bandwagon of this new regulation and the technology-du-jour (AI). It can be hard to distinguish between those that provide managed services (based on humans and/or AI) and those that provide tools and components for you to build your own solution (which will likely also require some human moderators). In addition to technology, you’ll need to appoint someone who owns the policy and can evolve it, and to continuously manage a set of principles to help adjudicate disputes.
Finally, content moderation is uniquely complex in that it can be very specific to your service (eg, what is considered a threatening comment in a social community may not be a reportable offence in an adversarial video game chat). At the same time, every company needs to make use of generalised content moderation approaches (eg, how to identify and report on Child Sexual Abuse Material or CSAM). Getting the benefit of the best standards in the industry while optimising for your own service can be hard.
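One way to picture that layering, purely as an illustration: a generalised industry check sitting in front of rules tuned to your own context. (Real CSAM detection relies on perceptual-hash matching against industry-maintained databases, shared through organisations such as NCMEC or the IWF, rather than the naive exact-hash lookup sketched here.)

```typescript
import { createHash } from "crypto";

// Illustration only. The point is the layering: a generalised industry check
// plus rules tuned to your own service's context.

const knownIllegalHashes = new Set<string>([
  // hashes obtained through an industry programme would go here
]);

// Service-specific policy: the same phrase can be reportable in a review
// community and unremarkable in an adversarial game chat.
const contextualBlocklist: Record<"reviews" | "gameChat", RegExp[]> = {
  reviews: [/example-threat-pattern/i],
  gameChat: [],
};

function moderate(
  file: Buffer,
  text: string,
  context: "reviews" | "gameChat"
): "block" | "allow" {
  const digest = createHash("sha256").update(file).digest("hex");
  if (knownIllegalHashes.has(digest)) return "block";                           // generalised layer
  if (contextualBlocklist[context].some((rx) => rx.test(text))) return "block"; // service-specific layer
  return "allow";
}
```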
Look out for lots of innovation and company pivots among content moderation solution providers as they try to address the DSA compliance challenge for the midmarket.
In fact, recommender systems (ie, the personalisation algorithms that power your feed) are under attack from all sides in Europe, which will be interesting to watch given that they are by far the most effective driver of user growth (see TikTok’s astonishing rise).
The DSA does not create a new definition for what content is illegal – it simply points at existing EU and member state laws. Broadly, illegal content includes anything that incites terrorism, depicts the sexual exploitation of children, incites racism or xenophobia, infringes intellectual property rights or is considered disinformation. But there are also country-specific restrictions to be aware of, such as the prohibition on depicting Nazi symbols in Germany, or more stringent restrictions on racist content in France.
Note that “where a content is illegal only in a given Member State, as a general rule it should only be removed in the territory where it is illegal.” (Questions and Answers: Digital Services Act).
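In implementation terms, that means takedowns need to be scoped by territory rather than applied globally; a toy sketch of the idea (hypothetical data model):

```typescript
// Hypothetical model of a territory-scoped takedown: the content is hidden
// only in the member state(s) where it is illegal.

interface ContentItem {
  id: string;
  removedGlobally: boolean;
  removedInCountries: Set<string>; // ISO country codes, e.g. "DE", "FR"
}

function isVisible(item: ContentItem, viewerCountry: string): boolean {
  if (item.removedGlobally) return false;
  return !item.removedInCountries.has(viewerCountry);
}

const post: ContentItem = { id: "p1", removedGlobally: false, removedInCountries: new Set(["DE"]) };
console.log(isVisible(post, "DE")); // false: removed where it is illegal
console.log(isVisible(post, "FR")); // true:  still visible elsewhere
```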
The DSA draws a very hard line here, directly barring digital ads based on profiling using the personal data of users “when [operators] are aware with reasonable certainty that the recipient of the service is a minor” (Article 28). This puts into much clearer language what had been implied until now by GDPR’s Recital 71 restriction on automated decision-making via profiling. Note that while the targeted advertising ban applies to any company in scope, VLOPs and VLOSEs (Very Large Online Search Engines) face additional obligations to mitigate risks, including “targeted measures to protect the rights of the child, including age verification and parental control tools.”