The Always-On Question

Apple reportedly has three new devices in the works: AI glasses, an AirTag-sized pendant you clip to your shirt, and AirPods with cameras. According to Mark Gurman at Bloomberg, they’re designed to be the “eyes and ears” of Siri — always-on cameras and microphones, tethered to your iPhone. OpenAI has a screenless, contextually aware device coming from Jony Ive’s team, and Meta’s Ray-Bans already have 7 million wearers.

The question is not whether always-on AI input devices will exist. It’s what we do about the world they create.

Barry O’Reilly’s new book Artificial Organizations is a useful place to start. Barry wrote the book in just six weeks, mining eight years of records from his 1-1 coaching sessions. Incredibly, he has every conversation structured, searchable, and reusable. He says the book practically assembled itself from those eight years of accumulated insight, and that today’s AI tools easily accelerated the process. When I asked him on LinkedIn about the social contract behind that — specifically, what relentless capture does to trust in group settings — it was clear he’d thought about it deeply.

His first principle is that capture never comes before consent. In 1-1 coaching, transcription is explicit, agreed upfront, and part of the working relationship. The verbatim record can yield significant insight: hearing your own words accurately, with the emotion and intent intact, helps you look back on a conversation in a way that reconstructed notes never quite manage.

In larger group settings, though, he takes a different approach. Raw transcripts are not circulated; they’re used instead for thematic synthesis: what decisions were made, what risks surfaced, what tensions emerged. The output is structured insight, not a searchable archive of who said what. Participants know when recording is happening and can keep sensitive topics off the record.

Barry also made a point about trading desks at financial services firms, where everything is recorded by default. This isn’t surveillance; it’s a system of record. Finance is a sector that long ago normalised capture because the alternative (disputed recollections of high-stakes decisions or instructions) was worse. The environment is also culturally distinct: high compliance, clear rules and norms, low ambiguity about power and decision-making.

What strikes me about Barry’s framework is how deliberately it’s built around trust maintenance rather than data maximisation. If transcription reduces psychological safety, he says, it’s being used wrong. When intent is transparent and expectations are clear, it can actually increase safety because ambiguity disappears and decisions don’t get quietly rewritten from memory afterward.

That’s a sophisticated position. It’s also one that requires genuine organisational maturity to execute. Most companies aren’t there.

The bystander problem

The hardware is arriving faster than the norms. Apple glasses could be shipping in 2027, and facial recognition on Meta’s glasses may be live before that. The Google Glass era gave us the “Glasshole” moment — people having glasses smacked off their faces in bars — because the social contract was broken before anyone had agreed to it. Meta’s Ray-Bans succeeded where Glass failed partly because they look like ordinary sunglasses, and partly because we’d collectively moved on. But the underlying tensions didn’t resolve.

I expect Apple will handle this better. Their track record creates permission to enter the category with a different posture: on-device processing and data storage, transparent indicators, opt-in defaults. But even Apple can’t solve the bystander problem: the people who aren’t wearing the device, who never opted into anything, who are just sitting across the table.

An always-on pendant at a dinner table changes the dynamic for everyone sitting at it. A boardroom where one person is wearing AI glasses is a different kind of meeting. The social infrastructure we’ve built over centuries for managing what’s on and off the record — the sidebar, the hallway conversation, the “between us” — gets disrupted when the default shifts to capture.

In professional settings, I’d expect two things to happen simultaneously. Some organisations will attempt to ban the hardware, which will work about as well as banning AI in the first place. Others will treat ambient capture as competitive intelligence infrastructure and create a different set of problems entirely. But there are some fundamental questions we need to ask:

  • Is consent even structurally possible at scale?
  • Do we create physical “AI-free” zones?
  • Does visible hardware become a new signalling mechanism — like taking notes used to be?

Barry’s framework works because he owns his data, controls its use, and has built consent into the relationship from day one. That’s quite different from an employee in a meeting where someone’s pendant might be recording, and that recording might surface in ways nobody anticipated or agreed to.

What happens to exploratory thinking when people internalise that everything might be captured? A lot of real insight lives in the half-formed conversation, the idea you’re still working out, the risk you’re willing to name only because you trust the room. That space is worth protecting, even as we build systems that get better the more they know.

We figured out phone cameras eventually. Not cleanly, but enough. There’s reason to think we’ll navigate this too — but the velocity is much higher now, and the window for establishing norms before ubiquity is shorter than most people realise.

The goal isn’t to remember everything. It’s to remember what matters. Capture that strengthens trust can turn conversations into durable insight. Capture that weakens it turns conversations into performance. That distinction will shape how organisations think in an always-on world.