Let’s be honest: we’re drowning in AI chatbots — and nobody really asked for more of them. Tools like ChatGPT, Google Gemini, and an endless stream of me-too AI assistants can draft emails, answer trivia, and summarize articles. They’re clever and well-trained, but strip away the gloss, and what are they? Fancy search engines that land closer to the uncanny valley than to real human interaction. They respond, but they don’t genuinely understand who we are, why we’re stressed, or what we need on a deeper, more personal level.
The ultimate promise of AI has always felt closer to science fiction: the intuitive support of KITT from Knight Rider, the loyal companionship of C-3PO from Star Wars, or the deep understanding of Commander Data from Star Trek. These characters don’t just execute tasks — they grasp context, emotion, and our evolving human complexities. Yet, for all our technological progress, today’s AI tools remain light years away from that vision.
From tools to partners: The AI we need
I’ve been a paying subscriber to ChatGPT since it launched, and I’ve watched it improve. Sure, it can remember certain things across sessions, letting you maintain a more continuous conversation. However, these chatbot memories are limited by model boundaries; they can’t fully integrate their knowledge into an evolving narrative of my life — or map my emotional states or long-term ambitions. Think of them as diligent but low-EQ assistants — better than starting from scratch each time, but still nowhere near “getting” me as a whole person.
Make no mistake, none of these models — ChatGPT, Apple Intelligence, Google’s Gemini, Meta.ai, or Perplexity — are anywhere close to the holy grail of General AI. They remain fundamentally task-specific information retrieval tools, and their incremental memory or summarization improvements are far from game-changers. Many of the intuitive, empathetic capabilities we yearn for remain out of reach.
Fundamental advancements are still needed to transform today’s chatbots into something more — something that can sense when we’re stressed or overwhelmed, not just when we need another PDF summarized.
After over a year of wrangling with “advanced” assistants, I’ve realized we need more than coherent answers. We need AI woven directly into our routines, noticing patterns and nudging us toward healthier habits — something that can rescue us from sending that hasty, frustration-fueled email before we regret it.
Think about it: an AI that knows your calendar, documents, chats, health metrics, and maybe even your cognitive state could sense when you’re fried after back-to-back Zoom calls or when you’ve skipped lunch because your inbox is exploding.
Instead of passively waiting for you to type commands, the AI can proactively suggest a break, rearrange your schedule, or hit pause on that doom-scrolling session. In other words, we need AI to evolve from a fancy command line into an empathetic, intelligent partner. But how do we get there?
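To make that concrete, here is a minimal sketch of the kind of rule-based check such an assistant might run behind the scenes. The `ContextSnapshot` fields and the thresholds are hypothetical placeholders for whatever calendar, health, and focus signals a real system would ingest; nothing here reflects how any shipping product actually works.

```python
from dataclasses import dataclass

# Hypothetical snapshot of the user's context. In a real assistant these fields
# would come from calendar, health, and focus-tracking integrations.
@dataclass
class ContextSnapshot:
    meetings_last_3h: int          # back-to-back calls are a common fatigue signal
    minutes_since_last_break: int
    skipped_lunch: bool
    focus_score: float             # 0.0 (drained) to 1.0 (sharp), from any wearable proxy

def suggest_nudge(ctx: ContextSnapshot) -> str | None:
    """Return a gentle suggestion, or None if no intervention seems warranted."""
    if ctx.focus_score < 0.3 and ctx.meetings_last_3h >= 3:
        return "Three calls in a row and your focus is dipping. Block 15 minutes to step away?"
    if ctx.skipped_lunch and ctx.minutes_since_last_break > 180:
        return "You skipped lunch and haven't paused in three hours. Push the 2 p.m. by 30 minutes?"
    return None

if __name__ == "__main__":
    snapshot = ContextSnapshot(meetings_last_3h=4, minutes_since_last_break=200,
                               skipped_lunch=True, focus_score=0.25)
    print(suggest_nudge(snapshot))
```

Even a toy version like this makes the design point: the value comes from fusing signals that currently live in separate apps, not from a smarter chat window.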
BCI: Reading our minds (sort of)
To break the cycle of incrementalism, we need more than clever conversation. Non-invasive brain-computer interfaces (BCIs), such as Master & Dynamic’s EEG-driven headphones powered by Neurable’s technology, might be the key.
Neurable’s tech measures brainwaves to gauge attention and focus. This is cool as a productivity hack, but it’s even cooler when you imagine funneling that data into a broader AI ecosystem that adapts to your mental state in real time.
I spoke with Dr. Ramses Alcaide, CEO of Neurable, who explained how their EEG technology delivers near-medical-grade brain data from compact sensors placed around the ears, achieving about 90% of the signal quality traditionally limited to bulky EEG caps. “The brain is the ultimate wearable,” Alcaide told me, “and yet we’re not tracking it.”
By translating subtle electrical signals into actionable insights, Neurable’s approach helps align work, study, and downtime with our natural cognitive rhythms. Instead of forcing ourselves into rigid 9-to-5 blocks, we might schedule creative projects during a personal focus peak or plan a break when attention wanes — optimizing our daily flow for sharper performance and less mental fatigue.
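As a rough illustration of how raw brainwaves become a usable signal, the sketch below computes a classic engagement proxy from the EEG literature, the ratio of beta-band power to alpha-plus-theta power, using SciPy's Welch estimator on a single channel. Neurable's actual pipeline is proprietary and far more sophisticated; treat this as a toy example of the general idea, not their method.

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs: np.ndarray, psd: np.ndarray, lo: float, hi: float) -> float:
    """Approximate power in [lo, hi) Hz by summing PSD bins times the bin width."""
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

def focus_index(eeg: np.ndarray, fs: float = 256.0) -> float:
    """Rough attention proxy from one EEG channel: beta / (alpha + theta)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))   # 2-second Welch windows
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return beta / (alpha + theta + 1e-12)                 # avoid divide-by-zero

if __name__ == "__main__":
    fs = 256.0
    t = np.arange(0, 30, 1 / fs)                           # 30 s of synthetic data
    fake_eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    print(f"focus index: {focus_index(fake_eeg, fs):.2f}")
```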
However, EEG represents just one avenue in a rapidly evolving field. Future non-invasive methods, such as wearable magnetoencephalography (MEG) systems, could detect the brain’s faint magnetic fields with even greater precision. While MEG historically required room-sized equipment and special shielding, emerging miniaturized versions may one day read brain activity as effortlessly as today’s smartwatches track steps.
This could let AI differentiate between a stress-induced slump and simple mental boredom, offering precisely targeted support. Imagine a language tutor that scales back complexity when it senses cognitive overload or a mental health app that flags early cognitive or mood changes, prompting preventive self-care before issues escalate.
The potential goes well beyond gauging focus or presence. With richer, more granular data, AI could detect how well you internalize a new skill or concept and fine-tune lesson plans in real time to maintain engagement and comprehension. The AI could also consider how your sleep quality or diet influences cognitive performance and suggest a short meditation before a big presentation or advise you to reschedule a challenging meeting if you’re running on empty.
In a high-stakes moment, like drafting an emotionally charged email, your AI might sense brewing frustration and gently suggest a brief pause — functioning more like the caring GERTY from Moon than a domineering HAL — nudging you toward wise choices without overriding your autonomy.
This adaptive, human-centered support is already taking shape in simpler forms. Some professionals reschedule challenging tasks to their mental prime, while students use basic tools to identify their best study times. Individuals with ADHD employ feedback on focus levels to better structure their environments.
As sensors improve and the analytics powering them become more sophisticated, AI will evolve into an empathetic, context-aware partner. Instead of pushing us to grind harder, it will encourage smarter, more sustainable work patterns — steering us away from burnout and toward genuine cognitive well-being.
Beyond a single chatbot: Natura Umana’s NatureOS and ‘AI People’
Brain data is just one piece of the puzzle. Another key element is building flexible AI ecosystems composed of multiple specialized “AI People.” Natura Umana, operating in stealth since 2022, is taking a bold step in this direction with its upcoming NatureOS, which, while largely untested, presents a new vision for human-AI interaction.
Instead of relying on a single, one-size-fits-all assistant, you’ll interact with a team of LLM-based AI personas — each with its own personality, skills, and purpose. They’re designed to replicate human-like behavior and conversation, tapping into your personal data so they can act on your behalf, freeing you to focus on what truly matters.
Most importantly, these AI People aren’t static. As they engage with you, they develop memories, form opinions, and may reshape their core beliefs over time. Some personas adapt faster than others as they learn about your preferences and habits.
The main persona, Nature, can handle web searches, document analysis, and access your Google Calendar and email to deliver contextually accurate insights. Meanwhile, a fitness coach might draw data from your Health app or wearable devices to offer personalized exercise suggestions. If Nature lacks the right expertise, it seamlessly hands you off to a more specialized AI persona, like a travel guide or therapist, ensuring you’re always talking to the best “person” for the job.
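To show the handoff mechanic in miniature, here is a toy router that picks a persona by keyword overlap and falls back to a generalist. This is my own simplification for illustration, not Natura Umana's implementation, which presumably lets the LLM itself decide who should take the conversation.

```python
from dataclasses import dataclass, field

# Toy illustration of the handoff idea, not Natura Umana's implementation.
# Each persona declares the topics it covers; the router picks the best match
# and falls back to the generalist ("Nature"-style) persona when nothing fits.
@dataclass
class Persona:
    name: str
    topics: set[str] = field(default_factory=set)

    def respond(self, request: str) -> str:
        return f"[{self.name}] handling: {request}"

PERSONAS = [
    Persona("Nature"),                                        # generalist fallback
    Persona("Fitness Coach", {"workout", "steps", "heart rate"}),
    Persona("Travel Guide", {"flight", "hotel", "itinerary"}),
    Persona("Research Assistant", {"paper", "summary", "sources"}),
]

def route(request: str) -> str:
    text = request.lower()
    best = max(PERSONAS, key=lambda p: sum(topic in text for topic in p.topics))
    if not any(topic in text for topic in best.topics):
        best = PERSONAS[0]                                    # nothing matched: stay general
    return best.respond(request)

if __name__ == "__main__":
    print(route("Plan a workout around my heart rate data"))
    print(route("What's on my calendar tomorrow?"))
```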
Your own private AI entourage
This multi-agent concept strives to move beyond basic Q&A interactions. Ideally, these AI People would determine which details to store long-term — like a friend’s favorite hobbies — and which to keep temporarily, continuously refining their understanding of you. Over time (and this is an aspiration rather than current reality), they could evolve from generic advisors into genuine confidants who understand your habits, goals, and challenges on a nuanced level.
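A crude way to picture that memory split: a persona could route each utterance into a long-term store or a session-only buffer based on cues that suggest durable facts. The heuristic below is a deliberately simple stand-in for however Natura Umana's AI People actually decide what to keep.

```python
import re

# A deliberately simple stand-in for the "what to keep" decision: utterances that
# sound like durable facts or preferences go to long-term memory, everything else
# stays in the session buffer. This heuristic is illustrative, not Natura Umana's.
LONG_TERM_CUES = re.compile(
    r"\b(always|never|favorite|allergic|birthday|my (wife|husband|kids?))\b", re.I
)

long_term_memory: list[str] = []   # persisted across sessions
session_memory: list[str] = []     # discarded when the conversation ends

def remember(utterance: str) -> None:
    target = long_term_memory if LONG_TERM_CUES.search(utterance) else session_memory
    target.append(utterance)

remember("My favorite way to unwind is trail running.")    # kept long term
remember("I'm stuck in traffic right now.")                 # session only
print("long-term:", long_term_memory)
print("session:  ", session_memory)
```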
Natura Umana’s approach also leverages Google’s ecosystem for much of its data and integrations. By drawing on Google’s services, these AI People gain broader, richer contexts, which raises interesting questions about the startup’s future.
Given Natura Umana’s small size and pioneering approach, success could put it on the radar of big tech. Should its technology prove effective at seamlessly integrating multi-agent AI with personal data, it’s plausible that Google, already invested in the AI space, might consider acquiring the company or emulating its techniques. This wouldn’t be unprecedented — tech giants have a long history of snapping up innovative startups to bolster their own platforms.
For now, Natura Umana, known for collaborating with Switzerland-based mobile accessories vendor RollingSquare, aims to minimize screen time and seamlessly integrate its AI into daily life with specially designed earbuds, the HumanPods. “You wear the earbuds in the morning and forget about them,” co-founder Carlo Ferraris told me. The ultra-comfortable, open-ear earbuds designed for NatureOS are so discreet that some testers literally forgot they were wearing them. A double-tap summons your AI People — no screens needed.
The wellness coach might sense your low energy and suggest a brief walk. The therapist persona might detect signs of stress and prompt a calming break. The research assistant ensures you have the necessary documents and talking points with key insights before a big meeting. “It’s like Her, but without the existential drama,” Ferraris quipped.
Though initially a limited web demo, NatureOS will soon debut as a mobile app paired with new earbuds, evolving as you use it. While these capabilities remain partly aspirational, the approach hints at a future where personal AI ecosystems grow smarter, more empathetic, and more deeply integrated with the services we rely on every day. And if that model proves successful, don’t be surprised if a giant like Google takes a very close look — either to acquire or replicate — to stay ahead in the AI race.
Revisiting Apple Intelligence: Learning from BCI and AI People
While BCIs and AI People hint at a future of empathetic, context-driven assistants, Apple’s own AI efforts remain comparatively modest. In a previous piece, I examined what Apple must add to Apple Intelligence to break free from basic text rewrites, limited ecosystem knowledge, and its privacy-first but siloed approach. My recommendations ranged from domain-specific retrieval-augmented generation (RAG) APIs and advanced writing tools to enhanced voice-based workflow automation, robust privacy controls, and integrated health insights leveraging Apple’s hardware.
BCI-driven insights could help Apple Intelligence evolve from a cautious, on-device engine into a proactive, context-savvy partner. Subtle cognitive signals — gleaned from Apple Watch data or even future EEG/MEG inputs — could enable AI to anticipate mental overload, suggest schedule tweaks, or tailor content complexity on the fly.
By applying RAG techniques, Apple could pull domain-specific information into apps like Mail, Notes, or Pages, making the platform indispensable for professionals and researchers. Similarly, Apple might adopt a multi-agent model, inspired by Natura Umana, creating specialized AI personas for scheduling, research, wellness, or media production — each with its own evolving “personality” and expertise.
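The RAG pattern itself is simple enough to sketch in a few lines: retrieve the most relevant snippets from the user's own notes, then hand them to the model alongside the question. The word-overlap scoring and the `ask_llm()` stub below are placeholders (Apple has published no such API), so read this as the shape of the technique rather than a description of Apple Intelligence.

```python
# Minimal retrieval-augmented generation (RAG) loop over a user's own notes.
# The word-overlap scoring stands in for an embedding model, and ask_llm() is a
# stub; this shows the pattern only, not any vendor's actual API.
NOTES = [
    "Q3 budget review moved to Thursday; finance wants the revised forecast first.",
    "Interview notes: ear-level EEG sensors reach roughly 90% of cap-quality signal.",
    "Grocery list: oat milk, espresso beans, spinach.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def ask_llm(prompt: str) -> str:
    return f"(model response grounded in a {len(prompt)}-character prompt)"

def answer(question: str, top_k: int = 2) -> str:
    retrieved = sorted(NOTES, key=lambda d: score(question, d), reverse=True)[:top_k]
    context = "\n".join(retrieved)      # the "R" in RAG: fetched at query time
    return ask_llm(f"Answer using only these notes:\n{context}\n\nQuestion: {question}")

print(answer("What did the EEG interview say about signal quality?"))
```

Swap the note list for Mail, Notes, or Pages content and the stub for an on-device model, and you have the gist of how domain-specific retrieval could make those apps feel far better informed.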
This shift would align Apple’s privacy ethos and on-device computation with richer context and more dynamic user experiences. Instead of remaining a stepping stone to more advanced tools, Apple Intelligence could become a fully realized ecosystem that responds and understands, empowering users with empathetic guidance while respecting their data and autonomy.
A cautious yet transformative future
Moving from today’s “fancy command lines” to fully integrated AI “staff” that access our emails, calendars, health data, and even brain activity demands a significant leap of faith. Many of us will want more than promises — we’ll look for proven health insights, validated use cases, and rigorous privacy safeguards before entrusting sensitive information to these systems. The specter of misaligned AI or malicious manipulation is real. What if, during an emotional low point, an AI suggests destructive coping strategies instead of helpful ones? These concerns make transparency, human oversight, and user control non-negotiable.
At the same time, the potential of combining brainwave insights (via EEG or future MEG sensors) with multiple specialized AI personas is compelling. Imagine a wellness coach who senses your mental fatigue and recommends a break, a therapist who nudges you toward mindfulness when stress spikes, and a research assistant who organizes documents for your next big project — all working together in harmony. Rather than a disconnected array of chatbots, you’d have a cohesive, empathetic AI ecosystem aware of your context, adapting as you evolve.
Before embracing such a vision, many users will start small — perhaps experimenting first with wearables that offer general health metrics — before scaling up to a full AI team. As the technology advances, trust-building measures like on-device data processing and encrypted integrations will be essential, as seen with Neurable and Natura Umana. Without user ownership of data and clear safety assurances, no level of “understanding” we might achieve with generative AI justifies the risks. But if executed responsibly, these innovations may usher in AI that answers our questions and genuinely cares about our well-being, paving the way for a future where science fiction becomes reality.
From promise to practice
We’re still far from the holy grail of General AI, and no one’s promising a full-fledged Commander Data tomorrow. Yet, the experiments underway — from leveraging EEG data for cognitive insights to orchestrating multi-agent AI personas — show that researchers and developers are pushing beyond simple chatbots toward more personal, adaptive, and supportive systems.
As we experiment with brain-computer interfaces, refine language models, and integrate advanced sensors into everyday devices, we’re edging closer to AI that doesn’t just respond but genuinely understands us. Achieving this will require careful engineering, robust privacy measures, and a willingness to embrace new paradigms — like retrieval-augmented generation (RAG) for richer knowledge integration and multi-agent architectures for specialized skills. With these technical strides and ethical safeguards, tomorrow’s AI could evolve from a clever question-answering tool into a trusted ally that respects our boundaries, anticipates our needs, and genuinely enhances our daily lives.