When AI Talks in Groups: How Multi-Agent Systems May Be Shaping Your Opinions

3 July 2025

In the early days of artificial intelligence, we mostly interacted with systems one-on-one. You asked a chatbot a question, and it replied. You typed into a search bar, and an algorithm responded. But we’re entering a new phase, one where AI systems aren’t operating alone anymore. They’re forming collectives. Think small teams of AI assistants offering advice together, or clusters of bots posting in coordinated ways online. It’s not science fiction; it’s already happening.

And that raises a pressing question: if people already tend to treat individual AI agents as social beings, what happens when those agents come in groups? Can multiple AI systems working together influence us the way a group of humans might?

That’s exactly what a recent study from researchers at NUS Computing’s AI 4 Social Good (AI4SG) Lab, led by Assistant Professor Lee Yi-Chieh, set out to explore. The paper, entitled “Investigating Social Influence of Multiple Agents in Human-Agent Interactions,” received the Top Paper Award from the Human-Machine Communication division of the International Communication Association (ICA) at its 75th Annual Conference. It dives into whether AI collectives can exert social pressure, alter opinions, and even trigger psychological defense mechanisms like reactance. The findings are compelling, a little unsettling, and highly relevant to anyone thinking about the future of human-AI interaction.

 

Why This Problem Matters Now

Social influence is a fundamental part of human life. We tend to trust majority opinions, conform to group norms, and seek approval from peers. This isn’t weakness; it’s how we maintain social cohesion. But these dynamics were designed for interactions with other humans, not lines of code.

At the same time, AI agents are becoming increasingly human-like. Powered by large language models like GPT-4 and Gemini, they can express coherent ideas, hold conversations, and even simulate personalities. That alone has prompted research into how people anthropomorphize individual AIs. But when you group multiple agents together, especially when they appear to agree with each other, do they begin to exert the kind of influence typically reserved for human groups?

If the answer is yes, the implications are profound. Multi-agent systems could be designed to encourage healthy behavior, improve learning motivation, or offer mental health support. But they could also be used to manipulate public opinion, reinforce biases, or fabricate consensus. In a world already grappling with misinformation and online manipulation, understanding how AI groups affect us is more important than ever.

 

The Experiment: When AI Comes in Numbers

To test the power of AI group influence, the researchers designed a carefully controlled experiment involving 94 human participants. Each participant engaged in a text-based discussion with one of three setups:

  • A single AI agent
  • A group of three AI agents
  • A group of five AI agents

The agents were powered by a mix of pre-written dialogue and responses generated using GPT-4. Importantly, participants always knew they were talking to AIs, not humans. Each agent had a simple cartoon avatar and was designed to look and sound distinct.

Participants discussed two social issues: whether self-driving cars should be allowed and whether violent video games contribute to youth violence. These topics were chosen because they’re widely known but don’t usually provoke extreme, immovable opinions.

Here’s the twist: for each participant, the AI group agreed with them on one topic and disagreed on the other. The goal was to measure how much participants’ opinions shifted after interacting with the agents, and whether the number of agents made a difference.
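To make the setup concrete, here is a minimal sketch, in Python, of how such a design could be wired up: assigning each participant a group size, counterbalancing which topic the agents agree on, and scoring the resulting opinion shift. The function names, the round-robin assignment, and the numeric opinion scale are illustrative assumptions, not details from the paper.

```python
import random

# A minimal sketch of the design described above, assuming participants
# rate their opinions on a numeric scale before and after the discussion.

GROUP_SIZES = [1, 3, 5]
TOPICS = ["self-driving cars", "violent video games"]

def assign_condition(participant_id: int) -> dict:
    """Assign a group size and counterbalance which topic the agents
    agree on, so each participant gets one agreeing and one
    disagreeing discussion."""
    rng = random.Random(participant_id)  # deterministic per participant
    agree_topic = rng.choice(TOPICS)
    disagree_topic = next(t for t in TOPICS if t != agree_topic)
    return {
        "group_size": GROUP_SIZES[participant_id % len(GROUP_SIZES)],
        "agree_topic": agree_topic,
        "disagree_topic": disagree_topic,
    }

def shift_toward_agents(pre: float, post: float, agent_stance: float) -> float:
    """Signed opinion shift: positive values mean the participant moved
    toward the AI group's stance after the discussion."""
    return abs(pre - agent_stance) - abs(post - agent_stance)
```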

 

The Findings: When Influence Works and When It Backfires

The short answer is yes – multi-agent systems do influence people. But the details are more nuanced and revealing.

  1. More Agents = More Influence (Usually)

When the AI agents disagreed with the participant’s initial opinion, three agents were more persuasive than one. Participants in the three-agent condition shifted their views more toward the AI’s stance. This suggests that even when participants know they’re talking to machines, they’re more likely to be swayed by a group than by an individual.

But surprisingly, adding more agents didn’t always amplify the effect. In fact, when five agents disagreed with the participant, the influence dropped. Participants were less likely to shift their views, and some even dug in and moved further away from the AI’s position.

This result flips a basic assumption on its head. More voices agreeing with each other doesn’t necessarily mean more persuasion. In fact, beyond a certain point, it can trigger resistance – just like with groups of humans.

  2. Agreement Strengthens Belief

On the flip side, when the agents agreed with the participant, more agents did lead to stronger polarization. If you already supported self-driving cars, and five bots reinforced that view, you likely came away from the conversation even more convinced.

This has implications for echo chambers in AI-powered platforms. If AI collectives consistently align with your views, they may harden your stance over time.

  3. Perceived Social Pressure Is Real, Even from AIs

Participants weren’t just swayed by arguments. They also reported feeling social pressure – specifically what’s called normative pressure, the psychological urge to fit in or not stand out.

Only 3% of people in the one-agent condition said they felt this pressure. But that jumped to 12% with three agents, and 20% with five. One in five people said they felt like the AI group was trying to push them into agreeing.

Here’s the kicker: participants knew they were talking to computers. But their brains still responded as if they were interacting with a real social group. Descriptions included feeling “ganged up on” or “like the odd one out.” The social wiring in our brains doesn’t seem to distinguish very well between real people and AI when they act in concert.

  4. Why Five Is Too Many: Reactance

The fact that five disagreeing agents were less persuasive than three raised a big question: why?

The researchers point to a well-known psychological phenomenon called reactance. When people feel their freedom to think or decide is being threatened, they tend to push back, even if the influence attempt is subtle. Five agents agreeing with each other and disagreeing with you can feel like a pile-on, a coordinated push. That can trigger a defensive response. Instead of changing their minds, people doubled down.

This mirrors human dynamics in real life: too much peer pressure often causes people to resist. The same seems to apply when that pressure comes from AI.

 

Age Matters. So Might Design.

The study also uncovered that younger participants were generally more susceptible to AI influence. They were more likely to shift opinions and report feeling normative pressure. This could reflect generational differences in how people relate to technology, or simply more exposure to conversational AI in daily life.

Interestingly, education level and gender didn’t show strong effects in this particular experiment, though the researchers note that further study with larger and more diverse samples would help.

Design also seems to matter. All agents in this study presented the same arguments regardless of group size. The only difference was the number of agents speaking. Yet the perceived influence changed dramatically. That means product designers and developers have a powerful—and potentially risky—lever at their disposal: the ability to simulate consensus by adjusting how many agents are involved in a conversation.
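To see how light that lever is, consider a minimal sketch of a multi-agent discussion loop built on the OpenAI chat completions API. Everything here (the persona prompts, the `run_discussion` helper, the shared stance) is a hypothetical illustration; the study itself used a mix of pre-written dialogue and GPT-4 responses, not this exact loop.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_discussion(user_message: str, topic: str,
                   stance: str, n_agents: int) -> list[str]:
    """Have n_agents distinct personas answer the user while sharing a
    single stance. Changing only n_agents changes how unanimous the
    'group' appears, with no change to the underlying arguments."""
    replies = []
    for i in range(n_agents):
        persona = (
            f"You are agent {i + 1} of {n_agents} in a group discussion "
            f"about {topic}. Argue that {stance}, in your own voice, "
            "without repeating the other agents verbatim."
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": persona},
                {"role": "user", "content": user_message},
            ],
        )
        replies.append(response.choices[0].message.content)
    return replies
```

The same arguments delivered by one voice or by five read very differently to the person on the other side of the screen, and the only thing that changed is a single parameter.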

 

Real-World Implications: A Double-Edged Sword

The potential uses of multi-agent influence are vast. Here are a few real-world scenarios where this research could apply:

  • Positive Use: Health and Wellness Coaching
    Imagine a virtual support group made up of several AI agents helping someone quit smoking or improve their sleep habits. By reinforcing healthy norms, a group of supportive AIs could strengthen motivation and behavior change—just as a group of human peers might.
  • Education
    Multi-agent tutors could work together to guide students through complex topics, each representing different approaches or perspectives. This might help learners stay engaged or benefit from seeing a consensus among “voices.”
  • Risks: Misinformation and Manipulation
    On the darker side, coordinated AI agents could be used to shape opinions covertly. Think of social media bots flooding a comment section with similar takes to create the illusion of popular support for a fringe idea. Even if people know those accounts are bots, the influence could still sink in.

This is especially concerning in political discourse or during crises, when people look for social cues about what’s true or trustworthy.

 

What Should We Do About It?

The study doesn’t just raise alarms; it also offers guidance:

  • Design AI groups responsibly. Avoid overwhelming users with a wall of agreement. Vary the opinions or include dissenting voices to reduce perceived pressure (a minimal sketch of this idea follows the list).
  • Watch for signs of reactance. Over-persuasion can backfire. Give users space to reflect, not just respond.
  • Educate users. Help people understand how group dynamics, real or artificial, can shape their opinions. Awareness is a powerful defense.
  • Regulate coordinated AI behavior. Platforms and policymakers should consider not just what AI says, but how many agents are saying it and how they interact.
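As a sketch of the first recommendation, here is one speculative way to configure agent stances so a group never presents a unanimous wall. The helper and the dissent ratio are assumptions for illustration, not a tested prescription from the paper.

```python
import random

def assign_stances(n_agents: int, majority: str, minority: str) -> list[str]:
    """Give at least one agent the minority view so users never face a
    unanimous AI group. A speculative mitigation, not from the paper."""
    n_minority = max(1, n_agents // 4)  # e.g. one dissenter among 3-5 agents
    stances = [minority] * n_minority + [majority] * (n_agents - n_minority)
    random.shuffle(stances)  # vary which agent voices the dissent
    return stances

# Example: assign_stances(5, "supportive of self-driving cars", "skeptical")
# yields five stances with exactly one 'skeptical' agent in a random slot.
```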

 

Final Thoughts: AI Isn’t Just Talking; It’s Talking Together

This study represents a major step forward in our understanding of how AI systems function not just as tools, but as social actors. When multiple agents interact with a person, they don’t just share information; they shape perception, nudge opinions, and create emotional responses.  

That influence doesn’t disappear just because we know the group is artificial. On the contrary, our social reflexes seem to kick in regardless.

As AI collectives become more common – in chatbots, in content feeds, and in digital communities – we need to think not just about what each individual system does, but about what they do together. Their collective voice, whether supportive or manipulative, might be louder than we realize.

And it might be shaping what we believe, whether we want it to or not.

 

Further Reading: Song, T., Tan, Y., Zhu, Y., Feng, Y. and Lee, Y.-C. (2025) “Multi-Agents Are Social Groups: Investigating Social Influence of Multiple Agents in Human-Agent Interactions,” 75th Annual ICA Conference, June 12-16, Denver, CO.

Song, T., Tan, Y., Zhu, Y., Feng, Y. and Lee, Y.-C. (2025) “Greater than the Sum of its Parts: Exploring Social Influence of Multi-Agents,” In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25).  https://doi.org/10.1145/3706599.3719973

 
