From Code to Connection: The Case for Humanising Enterprise AI 

In AI’s early days, enterprise leaders asked a straightforward question: “What can this automate?” The focus was on speed, scale, and efficiency – and AI delivered. But that question is evolving. Now, the more urgent ask is: “Can this AI understand people?”

This shift – from automation to emotional intelligence – isn’t just theoretical. It’s already transforming how organisations connect with customers, empower employees, and design digital experiences. We’re entering a phase of humanised AI – systems that don’t just respond accurately, but intuitively, with sensitivity to mood, tone, and need.

One of the most unexpected, and revealing, AI use cases is therapy. Millions now turn to AI chat tools to manage anxiety, process emotions, and share deeply personal thoughts. What started as fringe behaviour is fast becoming mainstream. This emotional turn isn’t a passing trend; it marks a fundamental shift in how people expect technology to relate to them. 

For enterprises, this raises a critical challenge: If customers are beginning to turn to AI for emotional support, what kind of relationship do they expect from it? And what does it take to meet that expectation – not just effectively, but responsibly, and at scale? 

The Rise of Chatbot Therapy 

Therapy was never meant to be one of AI’s first mass-market emotional use cases – and yet, here we are.

Apps like Wysa, Serena, and Youper have been quietly reshaping the digital mental health landscape for years, offering on-demand support through chatbots. Designed by clinicians, these tools draw on established methods like Cognitive Behavioural Therapy (CBT) and mindfulness to help users manage anxiety, depression, and stress. The conversations are friendly, structured, and often surprisingly helpful.

But something even more unexpected is happening: people are now using general-purpose AI tools like ChatGPT for therapeutic support, even though they weren’t designed for it. Users turn to ChatGPT to talk through emotions, navigate relationship issues, or manage daily stress; Reddit threads and social posts describe it being used as a therapist or sounding board. This isn’t Replika or Wysa – it’s a general AI assistant being shaped into a personal mental health tool purely through user behaviour.

This shift is driven by a few key factors. First, access. Traditional therapy is expensive, hard to schedule, and for many, emotionally intimidating. AI, on the other hand, is always available, listens without judgement, and never gets tired. 

Tone plays a big role too. Thanks to advances in reinforcement learning and tone conditioning, models like ChatGPT are trained to respond with calm, non-judgmental empathy. The result feels emotionally safe – a rare and valuable quality for those facing anxiety, isolation, or uncertainty. A recent PLOS study found that not only did participants struggle to tell human therapists apart from ChatGPT, they actually rated the AI responses as more validating and empathetic.

Finally – and perhaps surprisingly – there’s trust. Unlike wellness apps that push subscriptions or ads, AI chat feels personal and agenda-free. Users feel in control of the interaction – no small thing in a space as vulnerable as mental health.

None of this suggests AI should replace professional care. Risks like dependency, misinformation, or reinforcing harmful patterns are real. But it does send a powerful signal to enterprise leaders: people now expect digital systems to listen, care, and respond with emotional intelligence. 

That expectation is changing how organisations design experiences – from how a support bot speaks to customers, to how an internal wellness assistant checks in with employees during a tough week. Humanised AI is no longer a niche feature of digital companions. It’s becoming a UX standard – one that signals care, builds trust, and deepens relationships.

Digital Companionship as a Solution for Support 

Ten years ago, talking to your AI meant asking Siri to set a reminder. Today, it might mean sharing your feelings with a digital companion, seeking advice from a therapy chatbot, or even flirting with a virtual persona! This shift from functional assistant to emotional companion marks more than a technological leap. It reflects a deeper transformation in how people relate to machines. 

One of the earliest examples of this is Replika, launched in 2017, which lets users create personalised chatbot friends or romantic partners. As GenAI advanced, so did Replika’s capabilities: remembering past conversations, adapting tone, even exchanging voice messages. A Nature study found that 90% of Replika users reported high levels of loneliness compared to the general population, but nearly half said the app gave them a genuine sense of social support.

Replika isn’t alone. In China, Xiaoice (spun off from Microsoft in 2020) has hundreds of millions of users, many of whom chat with it daily for companionship. In elder care, ElliQ, a tabletop robot designed for seniors, has shown striking results: a report from New York State’s Office for the Aging cited a 95% drop in loneliness among participants.

Even more freeform platforms like Character.AI, where users converse with AI personas ranging from historical figures to fictional characters, are seeing explosive growth. People are spending hours in conversation – not to get things done, but to feel seen, inspired, or simply less alone. 

The Technical Leap: What Has Changed Since the LLM Explosion 

The use of LLMs for code editing and content creation is already mainstream in most enterprises, but use cases have expanded alongside the capabilities of new models. LLMs now have the capacity to act more human – to carry emotional tone, remember user preferences, and maintain conversational continuity.

Key advances include: 

  • Memory. Persistent context and long-term recall 
  • Reinforcement Learning from Human Feedback (RLHF). Empathy and safety by design 
  • Sentiment and Emotion Recognition. Reading mood from text, voice, and expression 
  • Role Prompting. Personas using brand-aligned tone and behaviour (see the sketch after this list)
  • Multimodal Interaction. Combining text, voice, image, gesture, and facial recognition 
  • Privacy-Sensitive Design. On-device inference, federated learning, and memory controls 
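
Two of these advances – role prompting and memory – are easy to see in practice. The sketch below uses the OpenAI Python SDK, but the pattern applies to any chat-style LLM API; the persona wording, company, and model name are illustrative assumptions rather than a reference implementation.

```python
# Minimal sketch of role prompting plus short-term conversational memory.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. The persona text, company, and
# model name below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()

# Role prompting: a system message pins the model to a brand-aligned persona.
PERSONA = (
    "You are Ava, a wellbeing assistant for Acme Corp employees. "
    "Be warm, concise, and non-judgemental. You are not a therapist: "
    "for anything clinical, gently point people towards professional help."
)

history: list[dict] = []  # grows turn by turn, giving the model continuity

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[{"role": "system", "content": PERSONA}, *history],
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I'm feeling a bit flat this week."))
```

The design point is that the persona lives in a system message rather than in fine-tuning, so brand tone can be updated as easily as website copy, while the accumulated history is what gives the exchange its sense of continuity.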

Enterprise Implications: Emotionally Intelligent AI in Action 

The examples shared might sound fringe or futuristic, but they reveal something real: people are now open to emotional interaction with AI. And that shift is creating ripple effects. If your customer service chatbot feels robotic, it pales in comparison to the AI friend someone chats with on their commute. If your HR wellness bot gives stock responses, it may fall flat next to the AI that helped a user through a panic attack the night before. 

The lesson for enterprises isn’t to mimic friendship or romance, but to recognise the rising bar for emotional resonance. People want to feel understood. Increasingly, they expect that even from machines. 

For enterprises, this opens new opportunities to tap into both emotional intelligence and public comfort with humanised AI. Emerging use cases include: 

  • Customer Experience. AI that senses tone, adapts responses, and knows when to escalate (see the sketch after this list)
  • Brand Voice. Consistent personality and tone embedded in AI interfaces 
  • Employee Wellness. Assistants that support mental health, coaching, and daily check-ins 
  • Healthcare & Elder Care. Companions offering emotional and physical support 
  • CRM & Strategic Communications. Emotion-aware tools that guide relationship building 
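
The first of these is the most straightforward to illustrate. Below is a deliberately simple sketch of tone-aware escalation, using NLTK’s off-the-shelf VADER sentiment analyser as a stand-in for a production emotion model; the -0.5 threshold and routing labels are illustrative assumptions.

```python
# Minimal sketch of "senses tone, knows when to escalate", using NLTK's
# VADER sentiment analyser (pip install nltk) as a stand-in for a
# production emotion model. Threshold and routing labels are illustrative.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon fetch
analyser = SentimentIntensityAnalyzer()

def route(message: str) -> str:
    # "compound" runs from -1 (very negative) to +1 (very positive)
    score = analyser.polarity_scores(message)["compound"]
    if score <= -0.5:
        return "escalate_to_human"  # strongly negative tone: hand off
    return "continue_with_bot"      # otherwise the bot keeps handling it

print(route("This is the third time my order has been lost. I'm furious."))
```

In production, the score would feed a richer policy – weighing conversation history, customer value, and topic sensitivity – but the principle is the same: read the tone first, then decide whether a human should take over.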

Ethical Design and Guardrails 

Emotional AI brings not just opportunity, but responsibility. As machines become more attuned to human feelings, ethical complexity grows. Enterprises must ensure transparency – users should always know they’re speaking to a machine. Emotional data must be handled with the same care as health data. Empathy should serve the user, not manipulate them. Healthy boundaries and human fallback must be built in, and organisations need to be ready for regulation, especially in sensitive sectors like healthcare, finance, and education. 

Emotional intelligence is no longer just a human skill; it’s becoming a core design principle, and soon, a baseline expectation. 

Those who build emotionally intelligent AI with integrity can earn trust, loyalty, and genuine connection at scale. But success won’t come from speed or memory alone – it will come from how the experience makes people feel. 


