What are the essential things investors need to know about the regulatory landscape?
With complex and overlapping regulation piling on, how can investors mitigate (or even exploit) the new rules?
It’s a strange, complicated time for investors in startups. Many have capital to deploy, but are waiting for market stability (and friendlier valuations) before committing it. Others are debating whether to pay up for high-priced AI startups, or fretting about how to raise the next fund given a dearth of near-term exits. Due diligence has shifted to unit economics and cash runway, to assess whether a company can withstand a prolonged period of higher interest rates and slower growth.
But a category of due diligence I’m still not hearing enough about is regulatory readiness. Startups that understand the regulatory landscape and design their products for where regulation is going can not only mitigate risk, but also create a moat against competitors. What is more, new European laws like the Digital Services Act (DSA), Digital Markets Act (DMA) and the EU AI Act are explicitly designed to impose disproportionate restrictions on larger companies.
This creates a potential opportunity for startups and growth companies to exploit the gap between their regulatory burden and that of the large platforms.1
When looking at new investment opportunities or portfolio support situations, ask yourself:
Can the team articulate how it has built on the rights-based principles from existing and new laws in designing its products?
Start with GDPR & similar privacy laws: what are they doing to ensure data minimisation, transparency, a valid legal basis2 for processing, and data security?
Do they know clearly who their audience is and – if that is not kids or teens – how they are minimising the likelihood that kids and teens will use the product? Do they need to implement user content scanning or age verification to comply with the UK’s Online Safety Act?3
How are they ensuring that consumer flows are optimised for efficiency whilst not falling afoul of the increasing regulatory focus on dark patterns and unfair trade practices?
How will their go-to-market strategy be impacted by the DMA? Gatekeeper platforms – social media, marketplaces, ecommerce – face new constraints, which will make them less effective for user acquisition, at least in the short term. It’s worth building a buffer for this into every CAC model (see the first sketch after this list).
If they do any business in the EU, what is their readiness for compliance with the DSA, enforceable as of yesterday (with TikTok already its first target)?
Do they know they have to demonstrate how they detect, report, and remove illegal content? That they have to give users reasons for removing content and an opportunity to appeal (see the ‘statement of reasons’ sketch after this list)? That they have to publish annual reports on their content moderation processes?
What does the opportunity for regulatory arbitrage look like in their space? How will the DSA, DMA, AI Act affect larger companies in a way that could be exploited by nimbler competitors?
Can they build in human-light AI-based content moderation tools (like Checkstep) from the start, while the large platforms have to contend with their enormous legacy moderation stacks?
Can they take advantage of more detailed profiling and user targeting (subject to data privacy rules of course) which are now restricted for larger players?
Can they use ‘easier’ legal bases (such as legitimate interest) for certain data processing practices? (not legal advice!)
If they are using AI or building AI into their products, do they have a set of principles guiding that use? What guardrails are in place to ensure employees comply? Do their policies cover:
Ensuring that any training data is usable, continuously available, and legally obtained? Are you comfortable that any copyrighted material used to train the model (whether internal or licensed) can be covered by fair use (US) or TDM (EU) exceptions? (Though this debate is still in flux.)
A vendor assessment framework for evaluating tools and AI models? Agreement on what good performance looks like, and a means of monitoring, testing and measuring the product’s effectiveness and safety? Can it operate cost-effectively at the level of reliability required by the end user?4
Is a human-in-the-loop required, available, properly trained and attentive?5
Is there a way of establishing whether there is bias in the system, whether it has a material impact, and what to do about it? (A minimal disparity check is sketched after this list.)
What is the degree of explainability of their system and is that sufficient for customers, or for potential regulatory enquiries?
Etc.6
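On the CAC point above: here is a minimal sketch of what stress-testing a CAC model for a DMA-driven drop in gatekeeper-channel efficiency might look like. The channel names, spend figures and the 25% efficiency hit are all illustrative assumptions, not data.

```python
# Illustrative sketch: stress-testing blended CAC for a DMA-driven
# drop in gatekeeper-channel efficiency. All figures are hypothetical.

def blended_cac(channels):
    """Blended CAC = total spend / total customers acquired."""
    total_spend = sum(spend for spend, customers in channels.values())
    total_customers = sum(customers for spend, customers in channels.values())
    return total_spend / total_customers

# (spend in EUR, customers acquired) per channel -- hypothetical numbers
channels = {
    "paid_social (gatekeeper)": (50_000, 2_500),
    "marketplace_ads (gatekeeper)": (30_000, 1_200),
    "content/SEO": (20_000, 800),
}

base = blended_cac(channels)

# Assume gatekeeper channels become, say, 25% less efficient post-DMA:
# same spend, fewer customers acquired.
DMA_EFFICIENCY_HIT = 0.25
stressed = {
    name: (spend,
           customers * (1 - DMA_EFFICIENCY_HIT) if "gatekeeper" in name else customers)
    for name, (spend, customers) in channels.items()
}

print(f"Base blended CAC:     {base:.2f}")      # 22.22
print(f"Stressed blended CAC: {blended_cac(stressed):.2f}")  # 27.97
```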
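On the DSA questions: the Act requires platforms to give affected users a ‘statement of reasons’ when content is removed or restricted. Here is a minimal sketch of what such a record might capture; the field names are my own illustration, not the official schema the Commission collects in its DSA Transparency Database.

```python
# Illustrative sketch of a DSA-style "statement of reasons" record.
# Field names are assumptions for illustration, not an official schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StatementOfReasons:
    content_id: str
    decision: str                 # e.g. "removal", "visibility_restriction"
    legal_or_tos_ground: str      # the law or terms-of-service clause relied on
    facts_and_circumstances: str  # why the content was judged to violate it
    automated_detection: bool     # was the content flagged by automated means?
    automated_decision: bool      # was the decision itself taken automatically?
    redress_options: list[str] = field(default_factory=lambda: [
        "internal complaint-handling",
        "out-of-court dispute settlement",
        "judicial redress",
    ])
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical example record
sor = StatementOfReasons(
    content_id="post-12345",
    decision="removal",
    legal_or_tos_ground="Terms of Service §4.2 (hate speech)",
    facts_and_circumstances="Flagged by classifier, confirmed by human reviewer.",
    automated_detection=True,
    automated_decision=False,
)
print(sor)
```

If a startup logs something like this from day one, the annual transparency report largely falls out of the data it already has – which is the point of designing for where regulation is going.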
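And on the bias question: one common starting point is a group-level disparity check (demographic parity difference) over model outputs. A minimal sketch, assuming a binary decision and a single protected attribute, with made-up data; a real bias audit needs far more than this.

```python
# Illustrative sketch: demographic parity difference for a binary decision.
# A large gap between groups' positive-outcome rates is a signal to
# investigate, not proof of unlawful bias. Data below is made up.

from collections import defaultdict

def positive_rates(records):
    """Rate of positive decisions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, decision in records:
        counts[group][0] += decision
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# (protected_attribute_value, model_decision) pairs -- hypothetical
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = positive_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
print(f"Demographic parity difference: {gap:.2f}")
# A company should define in advance what gap triggers review and what
# the remediation process is -- which is exactly what the "material
# impact and what to do about it" question is probing.
```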
It used to be that investors divided the world into ‘regulated’ industries (finance, healthcare, defence) and everything else. But in effect, all industries are regulated now, bringing into the mainstream both the burden of compliance and the opportunity to build a defensive moat – or even a USP – that takes advantage of regulators’ particular focus on large companies. Does your due diligence process take this into account?
At least until they join the VIP club of VLOPs/VLOSEs (very large online platforms and search engines, with >45m EU MAUs), in which case you’ll hopefully have achieved a unicorn exit…
Note that the pool of available legal bases is shrinking fast. Meta has cycled through three of them in the past year (contractual necessity, legitimate interest, and now consent – sort of). For practical purposes, for anything that resembles personalisation (including ads), the choice now is: consent, or don’t collect personal information.
If the audience is kids or teens, they will have to do more than just demonstrate an understanding of kids’ privacy laws (like COPPA), and design codes (like the Children’s Code). They will need to build specific safeguards based on those laws into their products and processes, and document their decisions.
This is perhaps the most important question investors should be asking of all AI companies. Almost by definition, the low-risk applications generative AI is good for today (see what is featured in the new GPT Store) won’t deliver enough value to pay for their cost of delivery (once VC subsidies run out); whereas the high-risk applications where the most value could be created won’t tolerate the kinds of errors endemic to the AI tools that are attracting much of the funding. Don’t get me wrong, there are really powerful, specialised applications for newfangled AI, but the scope of mid-term opportunities for ROI is IMHO a lot narrower than the markets seem to think.
Perhaps more important, does human-in-the-loop even work? Cory Doctorow breaks down the problem in his excellent essay Supervised AI isn't.
More on this in a future post outlining a framework for AI policy that smaller companies can use.