Living on record: my first ten days with an AI wearable
Issue 107 | Eka’s Weekly Roundup (28 January 2025)
Ten days ago, I started wearing a new hardware device called Bee Computer. It looks similar to a Fitbit and is often mistaken for one. Unlike a Fitbit, though, these AI wearables can record every (consenting) conversation across your personal and professional life. Bee Computer is one of several US startups pioneering these wearables, alongside Omi, Friend.com, and Limitless.
It’s been very useful for summarising conversations, saving memories, and consolidating my to-do list across conversations. But, as with any new recording technology, there are some obvious concerns, including privacy, consent, and memory. My partner immediately banned me from wearing it at home when he found out about it, and many friends had similar reactions once I told them about Bee.
This week, we’re examining 1) what a recorded life could look like, 2) what applications it could unlock across health, wellness, and life more broadly, and 3) key questions across privacy, consent, and memory.
The examined recorded life 🗞️
My early experiences with Bee Computer
I’ve had a fun journey with Bee Computer over the last ten days. Distilling it into 10 points:
Unboxing was already quite the accomplishment, given most of these AI wearables aren’t live (yet) in the UK. Thank you to some US family who shipped it over, and to US/UK customs who charged $40 (!) for an $80 device… I’m guessing they weren’t sure what its value was!
The first few days felt like something out of Black Mirror (see The Entire History of You from Season 1). My partner had a pretty negative reaction, and I was immediately banned from wearing & recording it in the flat. Which, to be fair, I agree with!
I spent the weekend asking friends & family whether I could record our conversations. Out of 5 “interactions”, only 2 people gave consent to be recorded (special thanks to my climbing partner and another sports partner). When people didn’t want to be recorded, it sparked some quite awkward conversations about the future of AI in our personal lives (those who said no were 1) in a pub setting, 2) around a family lunch, and 3) at a friend’s dinner).
At work, I decided not to record internal meetings, but I asked for consent on external calls, mainly when Fathom or Otter had already joined the call.
Interestingly, during one conversation with a founder about the wearable, he mentioned that I was, in fact, already being recorded by Granola.
Many people I’ve since spoken to about Granola admit to using it without consent, but the majority emphasise that it’s for internal note-taking only. I’m not sure I totally buy this, or that, as they claim, they really get rid of the transcripts post-call. Part of me wonders if that is even enough, given they can write perfect notes - grey area…
I am increasingly convinced that, in 10 years, the majority of conversations will be recorded in some format. This will also include video - see Meta x Oakley or Meta x Ray-Ban, or even Waymo. Some of the AI x hardware startups are even talking about Neuralink-style devices to decode thoughts.
The value for a consumer comes when the majority of their life is recorded (maybe 90%+). At the moment, I can feel the gaps in my own recorded life, given the split between work & life transcripts and the fact that I need to run two to-do lists across Bee and non-Bee conversations. A bit of a hassle, but this should flip as adoption & social acceptance go up.
Even if you decide not to wear a recording device, someone else in your circle may decide to do so. We should start thinking through the consequences for recording & consent in this scenario (see below).
I am optimistic that, with the right guardrails in place, AI wearables have significant positive impact potential. See use cases across Alzheimer’s, mental health, relationships, mental load, or education, which have already been showcased in launch videos from Omi, Friend, Limitless, or Bee.
What it’s teaching me about the AI x wearables potential within our thesis areas
Consent & privacy are at the heart of this adoption. Many European players within AI x consumer treat privacy as core to their business (see Kin as a great example, where privacy is front and centre for their brand). SF (where most of these companies are based) is already used to higher levels of recording - see Waymo as one example.
It will still take a lot of time until this becomes socially acceptable. That’s ok - a lot of work needs to go into the ethics of recording.
There is a strong debate as to who should own the recording - Apple (Siri) and Amazon (Alexa) vs. the startups? Who do you trust more with your life’s data?
I worry about the impact on memory and how we will practise remembering tasks & to-do lists. I can see this already slipping after 10 days of wearing it.
I remain positive on the impactful use cases and believe that this ‘new’ data set of life conversations can be used as a force for good provided it is grounded in firm principles and consent.
This is an evolving thesis area which we are exploring in Q1.


✍🏽 Week in Impact Articles
Monday: Poised for take-off: Hyperscaling the United Kingdom’s climate tech
Tuesday: The secret sauce of Chinese social media apps
Wednesday: The hidden risks of overestimating AI's power needs
Thursday: Notes on DeepSeek: generative AI is all about the applications now
Friday: GE HealthCare and Nuffield sign $247m deal to bolster diagnostic services in UK
📊 3 Key Charts
1. Unpicking alternative protein sources: from whole-foods to biomass

2. Women-specific burdens seeing significant under-investment compared to disease burden

3. Finally… odds of DeepSeek being banned by April peaked at 50%?
🗣️ Review of the Week
👋 Getting in Touch
If you’re looking for funding, you can get in touch here.
Don’t be shy, get in touch on LinkedIn or on our Website 🎉.
We are open to feedback: let us know what more you’d like to hear about 💪.