AI in later living: Designing the future before it designs us
- Emily Adlington

- Dec 16
- 4 min read

by Vicky Carne (assisted by ChatGPT & NotebookLM)
None of us grew up with AI, but we’re all going to grow older with it. And the question isn’t whether it’s coming for your job, your inbox, or your resident handbook – it’s what kind of colleague you want it to be.
Every few decades, a technology arrives that’s going to ‘change everything’. Predictions about how it will do that are rarely right. The desktop computer was supposed to make us paperless; instead, we got endless printer jams. The internet was built for scientists to share research and now mostly shares images of cats, conspiracies, and outrage before breakfast. Now it’s AI’s turn to promise transformation, though the jury’s out on whether that means evolution or extinction.
But in the meantime, for those in later living, the real question isn’t what AI will look like in ten years’ time, but whether we’re fluent enough to shape it before it shapes us. Because the moment to get comfortable with this technology isn’t in 2035, it’s now.
A Friendly Voice from the Future
After decades building tech companies, I now live in a later-living development in London. Here, the future arrived with a friendly voice. We call it ‘Peggy’ – a voice-enabled AI concierge my company developed.
Peggy doesn’t roam the internet or make small talk about the weather. It just knows the practical stuff: rotas, appliance manuals, event calendars, safety information – the day-to-day questions that otherwise bounce endlessly between reception and residents.
And here’s the key: people trust it. Not because it’s clever, but because it’s predictable. It doesn’t gossip, guess, or Google. It simply gives accurate information, conversationally and with common sense, in a calm tone, every time. That reliability builds confidence. And each question not asked of a staff member gives them time back to do what only humans can do. When an eighty-something says, ‘Thank you, Peggy’, what you’re really hearing is not awe at the tech, but delight that it worked exactly as expected. That’s the holy grail of innovation in later living: dependable technology that earns trust one small success at a time.
When Tools Start Talking to Each Other
Across the sector, AI is no longer an experiment; it’s an inevitability. Beyond the medical headlines, a quieter revolution is already underway in the non-clinical systems that shape daily life in later living, now coalescing into a three-layered framework:
Environmental awareness: Sensors on fridges and kettles learn daily rhythms and flag anomalies before they become emergencies. Smart lighting reduces night-time falls. These systems notice without intruding, creating environments that respond intuitively and enhance independence rather than highlight dependency.
Connectivity: Voice and chat technologies keep people connected, both with each other and with staff. They can nudge participation, relay messages, and help families stay in touch.
Operational harmony: Behind the scenes, AI analytics help teams spot patterns before they become problems – from maintenance issues to declining participation or dips in wellbeing. Once the data-gathering groundwork is done, AI can start adding real value: predicting when a lift might fail (if it ain’t broke, AI will tell you whether to fix it) or alerting staff when a usually sociable resident stops coming to lunch.
Together, these create anticipatory environments: homes that adapt gently to residents’ habits and needs. A light turns on before a stumble; a fridge left unopened triggers a lunchtime check-in. Dignity is built in from the start.
You’ve Got to Be in It to Win It
So where to start? AI confidence doesn’t come from expensive systems; it comes from experimenting safely with the tools that already exist. Staff fluency will matter: you don’t need to know how electricity is made to enjoy the benefits, but you do need a healthy respect for its inherent risks.
Encourage teams to try LLMs (ChatGPT and the like) for rewriting notices, summarising minutes or drafting induction packs. They excel at making the complex comprehensible.
None of this replaces professional judgement. Give clear safeguarding guidelines: no personal data, no medical advice, and human review always part of the process. Treat it like a bright intern who’s eager, occasionally wrong, and needs supervision.
Starting a shared prompt library, with short examples of the kinds of questions that get the best from AI – for instance, ‘Rewrite this notice about the lift inspection in plain, friendly English, keeping every date and time exactly as given’ – can speed up learning and help build confidence. Through safe, regular use, hype can turn into habit.
It’s also worth noting that concern about gender bias in these systems is valid. Large language models are trained on vast amounts of online content, plus whatever new material users generate. Since much of that content has historically been created and tested by men, bias can be baked into the patterns the models learn.
Stepping back from AI doesn’t solve the problem. Indeed, participation can be part of the fix: the more women write the prompts, correct errors and guide outputs, the more representative the system becomes. Over time, that changes how the model performs, especially when you can influence the specific AI tools you’ll actually be using. It’s one practical way to steer this technology in a more equitable direction.
An AI Future
Whatever the AI future brings, I suspect the best systems for retirement living communities will be invisible: an early warning before a crisis, a small nudge toward community, a few minutes of staff time returned to conversation rather than compliance. Ignore it, and AI will still arrive – but designed elsewhere, by people who don’t understand the nuance of ageing well. Preparing for that future can be as simple as picking one AI tool, setting clear boundaries, and sharing what you learn across your team.