Is the EU AI Act Killing Startups? A Medical Device Perspective

Disagreement in public, threats in private

Threats of legal action? Best remedied by considered reflection. What to do with the initial emotions? Let the edges smooth; it’s just energy, and it can be channelled into something beneficial. I decided to use that energy to write up my view of the “evidence” presented during a LinkedIn disagreement.

The individual was asserting that the EU is making startups illegal and killing startups - claims that are clearly hyperbolic. They used an article (linked below) as evidence to back up their claims. When I challenged them, they threatened me with legal action in a direct message.

I believe the antidote to misrepresentation, and to an internet full of hyperbole and outright lies, is to take the time to present information calmly and factually. So here is my view of the article, presented as objectively as I can.

Why This Matters to Me

I work with several companies implementing AI in various guises (often not calling it AI!), so I’m interested in the details. My background includes managing security programs, and one of the first things documented in an Information Security Management System (ISMS) is the regulatory landscape. It can feel like a pointless exercise (“the laws are self-evident”), but it is helpful to have it done and to be able to refer back to it.
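
To make that concrete, here is a minimal sketch of what a regulatory-landscape register might look like if you expressed it in code rather than a document. It is purely illustrative: the fields and example entries are my own assumptions, not something taken from the article.

    # Illustrative only: a minimal regulatory-landscape register of the kind an
    # ISMS might keep. The fields and example entries are my own assumptions.
    from dataclasses import dataclass

    @dataclass
    class Regulation:
        name: str            # e.g. "EU AI Act"
        scope: str           # what the regulation covers
        applicability: str   # why it applies (or doesn't) in our context
        review_date: str     # when we next re-assess it

    REGULATORY_LANDSCAPE = [
        Regulation("EU AI Act (2024/1689)", "AI systems placed on the EU market",
                   "We develop a high-risk AI medical device", "2026-01"),
        Regulation("GDPR (2016/679)", "Personal data processing",
                   "Training data includes patient records", "2026-01"),
        Regulation("MDR (2017/745)", "Medical devices",
                   "Our software has an intended medical purpose", "2026-01"),
    ]

    for reg in REGULATORY_LANDSCAPE:
        print(f"{reg.name}: {reg.applicability} (review {reg.review_date})")

In practice this usually lives in a document or a GRC tool rather than code, but the structure is the same: what applies, why it applies, and when it was last reviewed.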

So an article like this piques my interest: Mapping the regulatory landscape for artificial intelligence in health within the European Union.

Scope of the article

While the article headline focuses on Health Care, I cannot say for sure that it fully encompasses that sector. From speaking to one of the authors and to another individual who is “in the know”, and from the article’s context, it leans towards covering Medical Devices and AI rather than the whole Health Care sector.

It must be noted that this isn’t Nature itself, but rather an ancillary journal. I say that because I made the mistake of thinking it was Nature, and I’m grateful someone pointed out that it isn’t. I do not know the quality of this ancillary journal; evaluating its credibility is something I plan to do over time.

Methodical Analysis

I’ve read the article several times now: twice diagonally (skimmed, but I love the term “reading diagonally”), once making extensive notes in the margin, and once more reviewing it alongside those notes. The article presents itself as:

In this article, we present a synthesis of 141 binding policies applicable to AI in healthcare and population health in the EU and 10 European countries.

Limitations in the Research

I’d like more information, as the article is purely a synthesis (I had to look that up; it means “to form a coherent body of information”) done in an accepted scientific fashion. Unfortunately, I can’t find the data collected. That would obviously be very interesting, as they identified 26,046 policy records and 757 academic records, then filtered that down to 141 records for qualitative synthesis.

The article includes the keywords and provides a view on the process they used to review the papers. It’s the first time I’ve looked at this sort of process, and it looks like one I’d happily adopt. My only slight critique is that the keywords do not include (Cyber) Security, Ethical, or Responsible.
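
To illustrate the kind of screening step involved, here is a rough sketch of keyword-based filtering, the sort of mechanism a synthesis like this uses to narrow thousands of records down to a shortlist. The keywords and record titles below are my own assumptions for illustration, not the article’s actual search strategy; the extra keywords reflect the critique above.

    # Illustrative only: keyword screening of candidate records.
    # Keywords and record titles are assumptions, not the article's search strategy.
    records = [
        "National strategy for artificial intelligence in health",
        "Guidance on cybersecurity of medical device software",
        "Framework for responsible and ethical AI in population health",
    ]

    core_keywords = {"artificial intelligence", "health", "medical device"}
    # The additions suggested in my critique above:
    extra_keywords = {"security", "cybersecurity", "ethical", "responsible"}

    def matches(text: str, keywords: set[str]) -> bool:
        text = text.lower()
        return any(kw in text for kw in keywords)

    shortlist = [r for r in records if matches(r, core_keywords | extra_keywords)]
    print(len(shortlist), "records kept out of", len(records))

A real review would of course combine this with deduplication and human screening against inclusion criteria, but it shows how the choice of keywords directly shapes what makes it into the final 141 records.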

What the Article Actually Says About the AI Act

The article is clear that further work is needed to understand the challenges presented with respect to AI (and other regulations):

Future work should explore specific regulatory challenges, especially with respect to AI medical devices, data protection, and data enablement.

The article has only a single paragraph that talks directly about AI regulation, but there are insightful comments that I think highlight the value of the regulation to society as a whole:

It aims to ensure the ethical development of AI in Europe and beyond its borders while protecting health, safety, and fundamental rights.

The design and development of high-risk AI systems should be in such a way that natural persons can oversee their functioning.

For the development of future AI systems, the EU AI Act provides the possibility to establish AI regulatory sandboxes at national level as well as the testing in real-world setting prior to market placement.

Innovation Support Across Europe

The article also highlights actions being taken around Europe to support innovation. These include Horizon Europe, a Technological Free Zone in Portugal, support from the German Health Insurance industry, and Malta’s Digital Innovation Authority.

It’s worth noting that technology policies are not a direct competence of the EU (as per Article 4 TFEU). However, any technology used in a product intended for consumers does get covered. I interpret this to mean that there will be a collection of directly and indirectly related regulations to consider.

While the article states:

In the EU AI Act, regulatory sandboxes are proposed that are intended to provide a controlled environment for the development, testing, and validation of innovative AI systems.

Only Portugal’s Technological Free Zone is mentioned, leading me to believe that no other zones exist yet. Given that the article was published before the announcement of the seven AI factories (plus other innovation hubs), this needs further investigation.

Interesting Perspectives on Intellectual Property and International Comparison

There are two areas where the article took an “interesting” turn. The first concerns Intellectual Property:

Intellectual property laws could hinder access to training data and AI technologies, posing challenges for collaborative innovation ecosystems in developing and speeding up the introduction of novel AI-enabled health technologies.

I find it strange to use the word “hinder.” Either you have the right to use the data or you don’t. The same is true under GDPR. Aside from some aspects of IP law that carry negative connotations, these are well-established laws, and no one has the right to use all data for training purposes. Maybe it was an oversight in the article’s editing process.

The second element is the inclusion of a comparison to America. There are many rights that Europeans have and Americans do not, and the American healthcare system ranks at the bottom among developed nations.

The article does offer some balancing arguments on the FDA approach. While it describes the American process as more streamlined (even more so now that there is no AI executive order!), it also highlights complexities:

While this framework is potentially easier to navigate due to its streamlined nature, it requires medical devices to be reauthorised after each update that changes the underlying performance of a device, its safety characteristics, or the intended population for whom the device is intended to be used.

The article notes this would be a blocker for:

AI systems capable of continuous learning

I had to refer to the referenced article (Regulating AI: Lessons from Medical Devices) for a definition of continuous learning:

AI/ML systems that are capable of continuous learning – that is, changing their output or performance based on new information that they encounter while in use – add a new dimension to this problem.

I think a better description is needed, though, as almost any probabilistic/stochastic system can change its output based on new information encountered while in use; what actually distinguishes continuous learning is whether the model’s parameters are updated by that new information after deployment.
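
The sketch below illustrates the distinction I mean. It is my own example, not taken from either article: both models see new inputs while “in use”, but only the second one changes its parameters as a result.

    # Illustrative only: my own sketch of "output varies" vs "model learns in use".
    import random

    class FrozenStochasticModel:
        """Parameters fixed at deployment; outputs still vary (sampling noise)."""
        def __init__(self, weight: float):
            self.weight = weight

        def predict(self, x: float) -> float:
            # Output varies with noise and with new inputs, but nothing is learned.
            return self.weight * x + random.gauss(0, 0.1)

    class ContinuouslyLearningModel:
        """Parameters keep updating from data encountered in use (online learning)."""
        def __init__(self, weight: float, lr: float = 0.01):
            self.weight = weight
            self.lr = lr

        def predict_and_update(self, x: float, y_true: float) -> float:
            y_pred = self.weight * x
            # The model's behaviour drifts over time as it sees new data.
            self.weight -= self.lr * (y_pred - y_true) * x
            return y_pred

It is the second case that matters for the regulatory point above: a device whose validated behaviour can drift after approval is exactly what triggers the reauthorisation concern the article describes.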

The Need for Further Research

I found that the article comes to an abrupt end with no formal conclusion. As such, I treat the last sentence from the abstract as the conclusion:

Future work should explore specific regulatory challenges.

Is the EU AI Act Killing Startups?

Clearly, the article does not show that the EU AI Act is “killing startups” or “making startups illegal.” Such claims are hyperbolic and unsupported by the research.

That said, the article does help identify legitimate challenges that startups may face:

  • A regulatory landscape that spans both EU-wide and national regulations
  • Balancing innovation with requirements for human oversight of high-risk AI systems
  • Pressure from American companies or investors who operate without equivalent regulation protecting health, safety, and fundamental rights

The article also highlights positive measures, such as regulatory sandboxes and innovation zones, that are specifically designed to help startups navigate these challenges while continuing to innovate. Since it was published, the EU has also announced AI factories that are ramping up quickly.

Conclusion

So the reality is definitely more nuanced than alarmist and hyperbolic LinkedIn posts suggest. Whilst I can’t say that the EU AI Act presents opportunities for startups, it certainly doesn’t block them. What’s needed isn’t hyperbole but thoughtful analysis and practical guidance to help companies navigate this landscape - a landscape that is going to evolve.

More information and research are needed. Thankfully, there are plenty of organisations doing just that. Given my wish for a strong AI economy in Europe, built on a well-founded base of Ethical, Responsible, Safe, and Secure AI solutions, I’m happy with the AI Act and look forward to the future in Europe!