The Google I/O release made headlines across many of the tech newsletters. We won’t spend too much time summarising the release (though there’s a helpful 10m video here to catch you up), but instead want to focus on three applications that sit within our Consumer Impact thesis:
instant translation and the impact on language learning,
developments in Project Astra and the impact on personal assistants,
and making a step in solving the fashion fit problem.
There are many more pieces we haven’t covered (there were over 100 releases in total), so do let us know which ones you’re particularly excited about by connecting with us over email or on LinkedIn.
Speaking with someone in a different language? Managing your bills? What else could you do with Google I/O? 🗞️
You will likely have seen highlights from Google’s I/O event last week. There are some great summaries online already, like Ben Evans’s latest newsletter or MIT Technology Review’s piece aptly titled “By putting AI into everything, Google wants to make it invisible”. We won’t be providing an exhaustive summary, but wanted to pick out a few pieces that made us reflect on our world of consumer impact investing.
This year’s drop focussed on the application of Gemini, Google’s family of AI models, throughout their tools and services.
Speech translation: imagine not needing to learn a language to interact with the world
I’d encourage you to look at the Google Beam demo below. It’s made me want to reconsider my Duolingo usage (!), and it’s a clear example of how real-time translation could change the way we communicate with people globally. A lot of the press release focussed on how Google Beam is making video calls more 3D, but I think real-time translation is the real opportunity of this new interaction mode.
Personal assistance: enter Project Astra
Project Astra, Google's vision for a universal AI agent, further blurs the lines between the digital and physical worlds. They’ve been talking about Astra for quite a while now (we loved this interview with the DeepMind team, published in Dec ‘24).
This year’s I/O demo focussed on helping a Googler fix their bike.
This multimodal assistant, designed to understand and respond to a combination of sight, sound, voice, and text, reignites the debate around the optimal design for AI assistants: should they be horizontal (broadly capable) or vertical (specialized)? We’re building a thesis here as we speak.
Finally solving fashion’s fit problem?
Fashion retailers have long been trying to reduce returns through virtual try-on. Google announced that it is rolling out AI to help shoppers try on clothes - but more than this, it’s moving into the shopping advice layer and changing the way that search works.
New features include price tracking across websites for items in the right colour and size, better search through AI Mode shopping, and of course virtual try-on.
There are lots more announcements we didn’t cover here. For a 10-minute recap of the I/O releases, you should watch Sundar’s highlights here.
✍🏽 Week in Impact Articles
Monday: The Science Behind Why Massive Change Seems To Happen All At Once
Tuesday: We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
Wednesday: The hope and hype of fusion
Thursday: Weave - Streamlining Doctor-to-Patient Communication
Friday: Hinge Health: Reimagining possibilities and better outcomes with virtual care
📊 3 Key Charts
1. Is this healthy? Consumer trends around food health perception
2. How Weave became the all-in-one health communication platform
3. Hinge Health’s journey to IPO
🗣️ Review of the Week
👋 Getting in Touch
If you’re looking for funding, you can get in touch here.
Don’t be shy, get in touch on LinkedIn or on our Website 🎉.
We are open to feedback: let us know what more you’d like to hear about 💪.