World models: impact edition
Issue 150 | Eka’s Weekly Roundup (16 Feb 2026)
Google DeepMind launched Project Genie to the public in late January, powered by its Genie 3 world model. It is the first system capable of generating interactive, navigable 3D environments in real time from a simple text prompt. Yann LeCun left Meta after 12 years to raise €500m for AMI Labs at a €3b valuation, with a bet that LLMs will never achieve general intelligence. Fei-Fei Li’s World Labs shipped its first commercial product, Marble, and is in talks to raise $500m at a $5b valuation. NVIDIA’s Cosmos platform - trained on 20 million hours of real-world data - has surpassed 2 million downloads.
For those of us working in impact (specifically Consumer Health & Sustainable Consumption), the interesting questions centre on domain applications across energy (e.g. geophysics), climate (e.g. weather), and health (e.g. trials).
Introducing world models 🗞️
At the most basic level, a world model is an AI system that builds an internal representation of a physical environment and can predict how that environment will evolve over time.
This is different from Large Language Models (LLMs), which predict the next word. World models predict the next **state of a physical system**. Take weather: an LLM can predict the word that follows ‘today it was sunny, tomorrow will be X’; a world model can actually simulate tomorrow’s weather.
The key approaches diverge sharply:
Google DeepMind’s Genie 3 takes an auto-regressive approach - generating one frame at a time, looking back at what was previously generated to decide what happens next. It’s learned intuitive physics by training on massive amounts of video data. The result is real-time interactive environments at 24fps that maintain visual consistency for several minutes.
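As a rough illustration of the auto-regressive idea (a toy Python sketch, nothing like Genie 3’s actual architecture), each new frame is predicted from a bounded window of previously generated frames plus the user’s action:

```python
from collections import deque

def predict_next_frame(history, action):
    # Stand-in for a learned model: here, just a trivial function of
    # the most recent frame and the user's action.
    last = history[-1]
    return [pixel + action for pixel in last]

def rollout(first_frame, actions, context_len=8):
    # Bounded look-back window: the model only "remembers" the last
    # context_len frames, which is why long-horizon consistency is hard.
    history = deque([first_frame], maxlen=context_len)
    frames = [first_frame]
    for action in actions:
        nxt = predict_next_frame(history, action)
        history.append(nxt)
        frames.append(nxt)
    return frames

frames = rollout([0, 0, 0], actions=[1, 1, -1])
# frames[-1] reflects the cumulative effect of the actions: [1, 1, 1]
```

The point of the sketch is the loop structure: the output of step t becomes the input to step t+1, so errors can compound, which is why multi-minute visual consistency is a notable result.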
Yann LeCun’s JEPA (Joint Embedding Predictive Architecture) takes a fundamentally different path. Rather than predicting every pixel of the future (which LeCun argues is doomed to fail because the world is inherently unpredictable at that level of detail), JEPA predicts abstract representations of future states.
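A minimal numerical sketch of the JEPA idea (illustrative only; the encoder and predictor here are random linear maps, not LeCun’s architecture): the loss is computed between predicted and actual *representations*, never between pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder mapping a 16-dim observation to a 4-dim latent.
# In a real JEPA this is a learned network; here it's a fixed random map.
W_enc = rng.normal(size=(4, 16))

def encode(obs):
    return W_enc @ obs

def jepa_loss(W_pred, obs_now, obs_future):
    # The loss lives in representation space: no pixel reconstruction,
    # so unpredictable low-level detail is simply abstracted away.
    z_now, z_future = encode(obs_now), encode(obs_future)
    z_hat = W_pred @ z_now
    return float(np.mean((z_hat - z_future) ** 2))

obs_now = rng.normal(size=16)
obs_future = obs_now + 0.1 * rng.normal(size=16)  # a slightly evolved state
loss = jepa_loss(np.eye(4), obs_now, obs_future)  # identity predictor baseline
```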
Fei-Fei Li’s World Labs is focused on spatial intelligence - the ability for AI to perceive, generate, reason about, and interact with 3D space. Marble generates persistent, downloadable 3D environments (not just on-the-fly frames), using Gaussian Splatting to represent 3D volume.
NVIDIA’s Cosmos is the infrastructure play - open world foundation models trained on 9,000 trillion tokens, purpose-built for robotics and autonomous vehicles, with three model families: Predict (future state simulation), Transfer (bridging simulation and reality), and Reason (physics-aware chain-of-thought reasoning).
Applications in Health & Climate
1. Weather forecasting (short & long-term)
This is the most mature application of world-model thinking in climate. DeepMind’s GraphCast and its successor GenCast have already revolutionised weather prediction. GenCast outperforms the world’s top operational forecasting system (ECMWF’s ENS) on 97% of evaluated targets and generates 15-day ensemble forecasts in under 8 minutes on a single TPU - compared to hours on a supercomputer with thousands of processors. NOAA has already deployed AI weather models operationally, fine-tuning GraphCast on its own data.
More accurate extreme weather prediction saves lives and enables better renewable energy forecasting. When GenCast can predict the probability distribution of hurricane landfalls days in advance, that’s directly actionable for disaster preparedness, insurance modelling, and energy grid management.
Researchers at the University of Washington recently demonstrated an AI model that can simulate 1,000 years of Earth’s current climate in just 12 hours on a single processor — compared to ~90 days on a state-of-the-art supercomputer.
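To see why ensemble forecasts are directly actionable, here is a toy calculation (all member values are invented; real GenCast ensembles have far more members and variables): the probability of an event is simply the fraction of ensemble members in which it occurs.

```python
# Hypothetical maximum-wind forecasts (knots) from 8 ensemble members
# for one location. Real operational ensembles run 50+ members.
members_max_wind = [62, 71, 58, 80, 75, 66, 90, 55]
threshold = 64  # hurricane-force wind

# Probability of hurricane-force winds = fraction of members exceeding
# the threshold. This is the number insurers and grid operators act on.
p_hurricane = sum(w >= threshold for w in members_max_wind) / len(members_max_wind)
# 5 of 8 members exceed the threshold -> p_hurricane == 0.625
```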
2. Energy systems optimisation
World models that understand physical dynamics could transform how we design, operate, and optimise energy systems. Imagine an AI that doesn’t just forecast tomorrow’s solar output, but simulates the entire grid — generation, storage, transmission, demand — as a dynamic physical system, testing thousands of scenarios in real-time.
LeCun himself highlighted this application in a recent MIT Technology Review interview: “Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory. There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave.”
The IEA estimates global electricity demand from data centres alone could exceed 945 TWh by 2030. Physics-aware simulation of energy systems could identify efficiency improvements that current approaches miss entirely.
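The scenario-testing idea can be sketched in a few lines (every capacity and demand figure here is invented): simulate a toy solar-plus-storage grid across many random demand days and estimate the probability of a shortfall.

```python
import random

random.seed(0)

def shortfall(solar_mw, storage_mwh, hours=24):
    # One simulated day: solar generates only in daylight hours,
    # storage absorbs the mismatch between generation and demand.
    stored = storage_mwh
    for h in range(hours):
        gen = solar_mw * random.uniform(0.0, 1.0) if 6 <= h < 18 else 0.0
        demand = random.uniform(60.0, 120.0)  # MWh drawn this hour
        stored += gen - demand
        stored = min(stored, storage_mwh)     # can't overfill the battery
        if stored < 0:
            return True                       # unserved demand this hour
    return False

# Monte Carlo over 1,000 scenario days: the kind of question a grid-scale
# world model could answer with far richer physics.
failures = sum(shortfall(solar_mw=250, storage_mwh=1000) for _ in range(1000))
risk = failures / 1000  # fraction of days with a shortfall
```

A real world model would replace the random draws with learned physical dynamics, but the decision-relevant output is the same shape: a probability over outcomes, not a single forecast.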
3. Materials science
Physics-aware world models can simulate complex physical interactions — fluid dynamics, material deformation, molecular behaviour — without costly physical experiments. Recent advances in operator learning are already accelerating CO₂ plume migration simulations by three to four orders of magnitude. For carbon capture, storage, and novel materials research, this could dramatically compress R&D timelines.
4. Advancing personalised medicine through twins
The concept of digital twins in healthcare is essentially a world model of human biology - a virtual representation of a patient that evolves in response to real-world data and can simulate treatment outcomes in silico.
This is moving from concept to clinical reality:
Researchers at the Weizmann Institute published work in Nature Medicine on AI-powered personalised digital twins that can predict disease risk and simulate which dietary changes or drugs would be most beneficial for individual patients - already being tested with pre-diabetic participants.
Aitia is using causal AI to build digital twins that reverse-engineer the hidden 95% of biological circuitry from patient data. Their Huntington’s disease programme - claiming the first truly hypothesis-free drug target discovery from human data - is headed to IND filing.
UCL researchers coined the term “Big AI” in a recent npj Digital Medicine paper, describing the integration of physics-based digital twins with data-driven AI. The argument: neither physics-based simulation nor data-driven ML alone is sufficient for personalised medicine. You need both.
5. Accelerating clinical trials
AI-generated digital twins of clinical trial participants can predict individual health trajectories, enabling smaller, faster, more efficient trials. The PROCOVA method (prognostic covariate adjustment using digital twin-derived scores) has been qualified by the European Medicines Agency and aligns with FDA guidance. For rare diseases and paediatric populations where enrolment is difficult, this could be genuinely transformative.
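A simulated toy example of the statistical idea behind prognostic covariate adjustment (an illustration only, not the qualified PROCOVA methodology or its software): adding each participant’s digital-twin prognostic score as a regression covariate shrinks the standard error of the treatment-effect estimate, which is what allows smaller trials at the same power.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial: the digital twin's predicted outcome (twin_score)
# explains much of the between-patient variation in the real outcome.
n = 400
twin_score = rng.normal(size=n)          # digital-twin prognostic score
treatment = rng.integers(0, 2, size=n)   # 1 = treated, 0 = control
outcome = 2.0 * treatment + 0.9 * twin_score + rng.normal(scale=0.5, size=n)

def ols_se(X, y):
    # Ordinary least squares with classical standard errors.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

ones = np.ones(n)
_, se_unadj = ols_se(np.column_stack([ones, treatment]), outcome)
_, se_adj = ols_se(np.column_stack([ones, treatment, twin_score]), outcome)
# se_adj[1] < se_unadj[1]: adjusting for the prognostic score gives a
# more precise treatment-effect estimate from the same participants.
```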
6. Surgical training and robotics
NVIDIA’s Cosmos is already being used by LEM Surgical to train the autonomous arms of its Dynamis surgical robot using synthetic data. As world models improve their fidelity to real-world physics, the ability to train surgical robots in simulation - safely, at scale, at low cost - becomes increasingly viable.
In the Eka portfolio, XRLabs is building surgical intelligence through robotics and AR/VR, and is working with NVIDIA, Olympus, and Medtronic.
👋 Getting in Touch
If you’re looking for funding, you can get in touch here.
Don’t be shy, get in touch on LinkedIn or on our Website 🎉.
We are open to feedback: let us know what more you’d like to hear about 💪.

