What happens when you mix a highly talented bunch of people who work in interdisciplinary fields, all thinking about Health x AI x Policy?
Andrew from Form Ventures and I went on a mission to find out. We hosted 20 operators, investors, and policy-makers on Wednesday to see if the world would change. It’s only been two days since the breakfast, but I’d venture to say everyone left the room a little bit more informed, optimistic, and full of pastries.
These are our three takeaways from the event, but we’ve also included our long-form notes towards the bottom for the dedicated & curious.
Thoughts from the (wide) ecosystem 🗞️
1. Enabling breakthrough technologies in health
Breakthrough technologies require robust testing to ensure they are safe, efficacious, and commercial. But in today’s health system, pilots seem to be few and far between. Startups want to build, DHSC policymakers want to improve absorptive capacity, and investors want to participate where markets are viable… but it takes work to align these incentives.
One example: the last five years have put AI at the forefront but the NHS hasn’t set up the supply side properly to create centralized intelligence. Evidence collection and procurement is often siloed and needs to be consolidated into larger learning loops.
There are some positive cases – e.g. Stroke Delivery Networks, which are more centralized. These only came about from 2020 onwards, and we now have 100% coverage of AI technology in stroke care within four years, which is pretty incredible – but we need to build on this. *Note: there are of course lots of innovations beyond London across the UK. There are green shoots, for example if we get EMIS and other EHR systems right… to be continued!
2. Accelerate reimbursement & regulatory evolution to support new delivery models -- e.g. Flok's end-to-end, vertically-integrated model -- to reduce reliance on others needing to absorb/apply AI
We still don’t have strong reimbursement models to offset the upfront costs required to build fully-regulated software medical devices, particularly for smaller companies. So large companies retain a 1) cost and 2) distribution advantage in many health x AI domains.
It’s tricky to get distribution, which means you don’t get the utilization, which means the reimbursement & cash cycles don’t work. Put more bluntly, solutions require initial capital to be validated, but will often not receive initial capital before being validated. Taking this one step further, there is a need to rethink & reframe how we think about ‘ability to fail’ within the healthcare system while remaining true to the tenets of patient safety.
The trifecta of startups, investors, and policy is therefore really hard to get right as we balance traction, health efficacy, and regulation.
3. Work to do, to align startup, investor & public incentives (but green shoots are there!)
The leitmotif for our conversation revolved around how to enable progress in AI and health, and the alignment of startup, investor, policymaker and public interests in the process. Startups are looking for ways to innovate safely but also quickly, in order to create compounding advantages. Investors are looking for returns and (sometimes) strong social impact (see Eka’s Shared Value thesis). Structural barriers make it hard for policymakers to make necessary reforms, even when electoral and stakeholder incentives are aligned — particularly when not directly managing the NHS. Regulators have little incentive to take risk, and are motivated more by preventing harm (downside protection, rather than ‘safe’ upside maximization). But there are causes for optimism:
New service models, and compliance infrastructures, are emerging to enable deployment of AI at scale in the NHS
The health system needs change, whether that comes from VC-funded innovators or more mainstream innovations like FIT testing for GI cancer.
There are many talented people out there, but we need to better align some of these incentives in order to unlock change across operators, policymakers, and investors.
Et voilà!
We were impressed by the voices in the room, and learned a lot about the challenges & opportunities faced in shifting our health system. If you’re curious about another area of Impact x Policy, get in touch and let us know which event we should host next. We previously ran one on Energy x Policy which Form summarised last month.
Thanks for making it this far, you get some long-form notes as a thank you 🙏
Long-form notes 📝
Interaction between product and commercial strategy: Startups must decide between building broad versus building deep, and building regulated versus unregulated, which in turn shapes their commercial opportunity. These seem to be the two axes that innovators are building against, and not all of the combinations work in the NHS context. Building a fully regulated, end-to-end, vertically-integrated pathway like Flok’s for musculoskeletal is hard, but signals a path for how to scale AI in an NHS setting.
Product velocity as a function of regulatory flex and speed: In contrast to hardware, software requires continuous iteration and therefore continuous compliance. Historically, shipping products in ‘v1, v2, vN’ fashion hasn’t always been possible when each product upgrade needs to be certified, but there are ways to start thinking about this more systematically. The MHRA is working on ‘predetermined change control plans’ to enable this, though they’re not yet operational, and many Approved Bodies have huge backlogs. Credit also to Scarlet Comply, which is building a new model of Approved Body that enables continuous deployment and iteration.
Reimbursement & risk tolerance: We still don’t have strong reimbursement models to offset the upfront costs required to build fully-regulated software medical devices, particularly for smaller companies. So large companies retain an advantage in some domains. It’s tricky to get distribution, which means you don’t get the utilization, which means the reimbursement & cash cycles don’t work. Put more bluntly, solutions require initial capital to be validated, but will often not receive initial capital before being validated. Taking this one step further, there is a need to rethink & reframe how we think about ‘ability to fail’ within the healthcare system while remaining true to the tenets of patient safety.
Alignment of public, startup & investor interest requires a larger learning loop: Startups want to build, DHSC policymakers want to improve absorptive capacity, and investors want to participate where markets are viable, but it takes work to align these incentives. E.g. the last five years have put AI at the forefront, but the NHS hasn’t set up the supply side properly to create centralized intelligence. Evidence collection and procurement is often siloed and needs to be consolidated into larger learning loops. There are some positive cases – e.g. Stroke Delivery Networks, which are more centralized. These only came about from 2020 onwards, and we now have 100% coverage of AI technology in stroke care within four years, which is pretty incredible – but we need to build on this. *Note: there are lots of innovations beyond London across the UK. There are green shoots, for example if we get EMIS and other EHR systems right… to be continued!
Ring-fenced NHS budgets for innovation? A great suggestion: call on the NHS to create ‘open innovation tenders’ where founders & innovators could be brought in to solve the highest-acuity problems within hospitals and PCNs.
Death by pilot: Tying back to Risk Tolerance, pilots seem to be few and far between. There isn’t a natural network effect within the NHS that turns pilots into 1) more (paid) pilots, 2) high utilization, and 3) long-term contracts. This links back to Learning Loops: we’re not able to build coherent & consistent bodies of information over time that create compounding data advantages.
Breakthrough potential: We also discussed the possibility of breakthrough AI and whether regulatory and reimbursement structures inside the NHS limit future possibilities. It’s hard to acquire and train on enough quality data outside of clinical settings to push at the frontier, but also hard to deploy in the first place.
Health equity, social inclusion: There was a discussion around whether AI helps reduce or entrenches inequalities and health disparities. How are models trained, using what data, measuring which health outcomes? AI also has an outsized potential to reduce bias, provided it is built correctly with the necessary safeguards in place. These novel pathways can be more inclusive for certain groups (Flok’s example here: the most popular time for digital consultations is 8pm, rather than the traditional 9am-5pm slots).
Funding the policy makers: The MHRA and NICE have some incredibly talented regulators who are creating innovative health technology assessments and making good, consistent decisions. But they can struggle to 1) attract and 2) retain top talent, which may or may not be cyclical with the past two years of UK politics.
Who’s missing from the room? We need to regularly engage diverse stakeholders — operators, regulators, investors, and clinicians on the ground — in order to learn from each other and make sure we’re speaking the same language. Upskilling needed!
Portfolio News 🎉
Well done to Isla for reaching 1m submissions! The Health Innovation Network published more on this in their monthly newsletter for avid readers.
Getting in Touch 👋
If you’re looking for funding, you can get in touch here.
Don’t be shy, get in touch on LinkedIn or on our Website 🎉.
We are open to feedback: let us know what more you’d like to hear about 💪.