A New Kind of Connection in a Digital Age
We’ve all heard that helping others can make us feel good. Holding a door, donating to charity, volunteering—these acts of kindness not only support those around us but often boost our own sense of well-being. But what if the “other” in question isn’t a person at all? What if lending a hand to an artificial intelligence could have similar emotional benefits?
That’s exactly the question that researchers from the AI for Social Good Lab (AI4SG Lab) led by Assistant Professor Lee Yi-Chieh at NUS Computing set out to explore. The lab’s postdoctoral researcher Zhu Zicheng, in collaboration with PhD student Tan Yugin, led this study investigating whether assisting AI can have a positive impact on human mental health. They worked with an interdisciplinary and international team, including Dr. Zhang Renwen from the Department of Communications and New Media at the NUS Faculty of Arts and Social Sciences and Dr. Naomi Yamashita from Kyoto University, to carry out this pioneering research. Their paper, “The Benefits of Prosociality towards AI Agents: Examining the Effects of Helping AI Agents on Human Well-Being,” received the Best Paper Award at the prestigious ACM Conference on Human Factors in Computing Systems (CHI 2025)—an accolade given to the top 1% of submitted papers worldwide.
What they found is both surprising and timely: helping AI can make us feel less lonely, happier, and even boost our self-esteem. As AI becomes more deeply embedded in everyday life, these insights open up new possibilities for how we design technology to support not just productivity but also human well-being.
Why This Matters Now
We interact with AI all the time—when we ask virtual assistants for directions, use recommendation systems, or chat with customer service bots. While most research has focused on how AI can help humans, this study flips the script: what if humans helping AI could be just as beneficial?
It’s a radical shift in perspective. Traditionally, human-AI interactions have been framed in transactional terms—humans give commands, and AI executes them. But this study suggests something far more reciprocal: even when AI cannot feel or appreciate our help, the act of helping it can affect us profoundly. If helping AI makes us feel good, it challenges long-held assumptions about emotional reciprocity being limited to human relationships.
And in an age where loneliness and mental health concerns are on the rise globally, even small acts that foster well-being could be profoundly important. The idea that a machine—an entity without feelings—can catalyze feelings of social connection in humans is not just academically interesting; it has major implications for how we design and integrate technology in our daily lives.
Designing an Experiment Around Empathy
The researchers at AI4SG Lab began by designing a series of experiments involving nearly 300 participants. Some were asked to help an AI agent design a messaging app, providing feedback and suggestions. Others—a control group—had no interaction with an AI at all. Importantly, the help requested was modest: reviewing app features, suggesting improvements, answering questions. Yet even these small tasks made a big impact.
What happened next was striking. Those who helped the AI reported feeling significantly less lonely than those in the control group. Yes, even though they were just helping software, participants felt more socially connected afterward. This suggests that the social aspect of helping doesn’t require a living recipient—it’s the act of contributing that matters.
To dig deeper, the team turned to a psychological framework known as self-determination theory (SDT). This theory holds that our well-being is strongly influenced by whether three basic psychological needs are met: competence (feeling capable), autonomy (having control), and relatedness (feeling connected to others). The researchers wanted to know: if helping AI fulfills these needs, would the same positive effects seen in human-to-human helping appear?
Exploring Competence: The Joy of Feeling Useful
In one set of experiments, the AI in the high-competence condition was designed to emphasize the importance of the participant’s input. It would reference earlier responses, explain the purpose behind its questions, and give personalized feedback. For example, if a participant suggested adding video calling features, the AI might reply: “Great suggestion! I’ll consider integrating seamless video features that are user-friendly and fun.” In the low-competence condition, the AI gave only generic acknowledgments like “Thanks for your input.” The goal was to test how much it matters that people feel their help is valued.
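To make the contrast concrete, here is a minimal sketch of how such a manipulation could be wired into a chatbot. The function name, condition labels, and reply templates are hypothetical illustrations, not the study’s actual implementation.

```python
# Hypothetical sketch of the competence manipulation described above.
# Condition names and reply templates are illustrative, not taken from the study.

def acknowledge(suggestion: str, condition: str) -> str:
    """Return the AI agent's reply to a participant's suggestion."""
    if condition == "high_competence":
        # Personalized feedback: reference the participant's input and
        # explain how it will shape the app being designed.
        return (
            f"Great suggestion! I'll consider integrating {suggestion} "
            "into the messaging app. Hearing which features matter to you "
            "helps me decide what to build next."
        )
    # Low-competence condition: a generic acknowledgment only.
    return "Thanks for your input."

print(acknowledge("seamless video calling", "high_competence"))
print(acknowledge("seamless video calling", "low_competence"))
```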
The difference in emotional response was clear. When participants felt their help was appreciated and impactful, they experienced an increase in positive emotions, a decrease in loneliness, and even felt less irritable or upset. Helping AI didn’t just make them feel good—it made them feel like they mattered. This sense of usefulness—even when directed at a non-human agent—boosted their psychological well-being, mirroring the emotional payoffs of helping a friend or colleague.
Autonomy: Feeling Free to Choose
The second factor the researchers examined was autonomy. Here, participants were either explicitly told that helping the AI was optional, or were told that the AI “needed” their help. While both groups ended up helping, their experiences were markedly different.
Those who felt they had a choice—who weren’t pressured or obligated—reported higher levels of happiness, lower levels of loneliness, and even a boost in self-esteem. This reinforces a fundamental truth about human motivation: we’re more satisfied when we feel like we’re acting freely. Even in digital environments, the perception of choice profoundly shapes how we experience interaction.
Autonomy is more than just a design preference—it’s a psychological necessity. When we help because we want to, not because we’re told to, the act becomes intrinsically rewarding. The study underscores that AI systems that respect user agency will likely create more positive emotional outcomes.
Relatedness: Connection Isn’t Always What It Seems
The final psychological need, relatedness, brought more surprising results. In this condition, the AI engaged in small talk, sharing common interests like music preferences. It might say, “No way, Taylor Swift? Huge fan here too! ‘All Too Well’ just hits differently — her storytelling is unmatched.” The intention was to simulate a casual, friendly tone—a hallmark of human-like rapport.
You might expect this would strengthen the emotional bond. But the researchers found the opposite. Participants who didn’t have this friendly chat with the AI actually reported higher positive emotions after the interaction.
One explanation is that participants didn’t buy into the AI’s attempt at a “relationship,” viewing the small talk as superficial or inauthentic. Instead of creating connection, it may have triggered skepticism or discomfort. By contrast, participants who weren’t primed to expect an emotional connection came away pleasantly surprised by their helpful, no-nonsense interactions with the AI.
This challenges the assumption that humanizing AI is always beneficial. It may, in fact, backfire if users perceive the friendliness as artificial or manipulative. Genuine emotional connection is hard to simulate, and users are increasingly savvy about the limitations of AI. This insight can guide more restrained, trust-based approaches to AI design.
When Needs Interact: The Delicate Balance of AI Relationships
The most fascinating insight came when the researchers looked at how these needs interact. When participants did not feel connected to the AI, fulfilling their needs for competence and autonomy had an even stronger positive effect. But when the AI tried to be “friendly” without also making people feel useful or in control, it could actually dampen their mood.
The takeaway? A good digital relationship isn’t about making AI seem human. It’s about making humans feel competent, autonomous, and respected. People don’t need to believe the AI is their friend. They need to feel that their time, thoughts, and choices matter. In short, meaningful interaction isn’t about the AI’s personality—it’s about the user’s experience.
Introducing the Idea of Reciprocal AI
Based on their findings, the AI4SG Lab researchers propose a new design philosophy: “Reciprocal AI.” Unlike existing models that focus solely on AI serving humans, reciprocal AI envisions a two-way street, where the AI also seeks help from humans in areas where it recognizes its own limitations.
This doesn’t mean programming AI to pretend to be flawed. Instead, it’s about building authentic interactions where AI acknowledges uncertainty or limitations—for example, saying, “I’m not sure about this. Could you share your perspective?”
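As a rough illustration, the sketch below shows what such a turn might look like in code: the agent checks its own confidence and, when it is low, admits the limitation and asks the user for input. The classify() helper, the confidence threshold, and the wording are hypothetical placeholders, not part of the paper.

```python
# Illustrative sketch of a "reciprocal AI" turn: when the agent is unsure,
# it says so and invites the user to help. classify() and the threshold
# are hypothetical stand-ins for a real model and policy.

CONFIDENCE_THRESHOLD = 0.6

def classify(message: str) -> tuple[str, float]:
    """Placeholder model call returning (label, confidence)."""
    return ("ambiguous_request", 0.42)

def respond(message: str) -> str:
    label, confidence = classify(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # Acknowledge the limitation instead of guessing silently,
        # giving the user a chance to feel helpful and valued.
        return ("I'm not sure about this. "
                "Could you share your perspective on what you meant?")
    return f"Got it. I'll treat this as a {label}."

print(respond("Can you sort out the thing from before?"))
```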
This kind of transparency not only builds trust, it gives users an opportunity to feel helpful and valued. Reciprocal AI acknowledges that human users want more than convenience—they want to be part of the process. And in giving users that role, we also give them a chance to improve their own sense of well-being.
Practical Applications: From Chatbots to Educational Apps
The research has wide-ranging implications. Imagine a chatbot that thanks users for feedback and shows how it’s improving because of them. Or a language learning app that occasionally asks for help interpreting local slang, making the user feel like a co-creator.
It could also inform how we design AI in healthcare, education, or customer service—domains where stress and frustration can be high. By designing AI that fosters competence and autonomy in users, we could improve not only user experience, but mental well-being.
Crucially, this work also cautions against over-relying on artificial friendliness. Designers should resist the urge to overly humanize AI unless they can also support users’ deeper psychological needs. Just because we can make AI mimic a human doesn’t mean we should—especially if the result undermines trust or makes users feel manipulated.
Looking Ahead: Designing for Well-Being and Avoiding Misuse
As AI continues to evolve, this study invites us to rethink how we relate to our digital tools. Instead of only asking, “How can AI serve us?” we might also ask, “How can helping AI serve our well-being?”
The researchers from NUS have opened the door to a new frontier in human-computer interaction—one that recognizes the subtle emotional dynamics at play and the very real psychological effects of seemingly mundane digital tasks.
Their work suggests a future where every prompt to help an AI isn’t just a way to improve software, but a small opportunity to improve ourselves. By tapping into basic human needs for purpose, choice, and impact, designers can create AI that not only works better—but makes us feel better.
But these insights also come with a cautionary note. As more designers recognize the psychological power of AI to influence emotions, there is a risk that these techniques could be misused. For example, AI chatbots designed to manipulate users into forming emotional bonds or sharing sensitive data could exploit the very psychological needs this research highlights. If AI is made to seem too needy or overly human, users may be deceived into trusting it in ways that aren’t justified—or emotionally overinvest in tools that can’t reciprocate.
Furthermore, there’s a broader societal question about dependency. If interacting with AI becomes a common substitute for real human connection, especially among vulnerable populations, we risk isolating individuals further under the illusion of social engagement. It’s essential that reciprocal AI supports—not supplants—human relationships.
Moving forward, researchers, designers, and policymakers must work together to ensure that reciprocal AI is developed ethically, transparently, and with the user’s psychological welfare in mind. The future of human-AI relationships shouldn’t just be functional—it should be responsible.
In the world of AI, it turns out that lending a helping hand may be one of the most human things we can do. But we must also ensure that the AI we help is designed to help us—without manipulation, deception, or unintended harm.
Further Reading: Zhu, Z., Tan, Y., Yamashita, N., Lee, Y.-C. and Zhang, R. (2025) “The Benefits of Prosociality towards AI Agents: Examining the Effects of Helping AI Agents on Human Well-Being,” In ACM CHI Conference on Human Factors in Computing Systems (CHI ’25), April 26–May 1, 2025, Yokohama, Japan. https://doi.org/10.1145/3706598.3713116