
This is what happened when 25 AI avatars were let loose in a virtual town

Aaron Mok   

  • Researchers let 25 AI avatars loose in a virtual town.
  • The avatars were able to make daily schedules, talk politics, go on dates, and even plan a party.

What happens when you let 25 AI agents loose in a virtual town? A recent study set out to see what they'd get up to, and it turns out they aren't all that different from real people.

A team of researchers from Stanford University and Google conducted an experiment to see whether generative AI agents powered by large language models can "simulate believable human behavior," including forming and recalling memories.

To do this, researchers created 25 AI agents with different identities and watched how they interacted with one another and their environment in a virtual town called "Smallville," which includes a dorm, park, school, cafe, bar, houses, and stores. To simulate human behavior, researchers used GPT-3.5, the model behind OpenAI's ChatGPT, to prompt the agents on the backend to perform specific tasks, such as reading a book or talking to a librarian.

After some observation, researchers concluded these generative agents were able to "produce believable individual and emergent social behaviors."

AI avatars named Isabella Rodriguez and Tom Moreno, for instance, debated the town's upcoming election. When Isabella asked Tom what he thought of Sam Moore, the candidate running for mayor of Smallville, Tom replied with his opinion.

"To be honest, I don't like Sam Moore," the AI Tom said. "I think he's out of touch with the community and doesn't have our best interests at heart."

The agents were also able to respond to their environment. Isabella turned off her stove and made a new breakfast when she was told her food was burning. AI agent John Lin had spontaneous conversations without being prompted throughout the day, as he followed a schedule he'd made.

Agents were even able to organize a Valentine's Day party without prompts. When Isabella was given the task, she managed to "autonomously" invite friends and customers she met at the local cafe and decorate the party venue. The agents she invited made plans to arrive at the party together at 5 p.m. Maria, an AI agent invited to the party, even asked her "secret crush," Klaus, to be her date to the party, and he agreed.

The findings show how the generative AI model behind ChatGPT can be used beyond its application as a virtual assistant, Michael Wooldridge, a computer science professor at Oxford University who studies AI and was not involved in the study, told Insider.

Wooldridge said he can see these findings being realistically applied to task-management apps.

Jaime Sevilla, an AI researcher not involved in the study, told Insider the models behind the study could be applied to non-player characters in video games.

Researchers involved in the study declined Insider's request for comment.

The findings, Wooldridge said, are "baby steps" toward achieving artificial general intelligence, the point at which AI systems can display complex human capabilities such as consciousness. Still, he said "we've got a long, long way to go" before that goal is realized.

After all, the AI agents in the study were prone to hallucinations, such as failing to recall certain events, which Wooldridge attributed to how the model was trained.

While the researchers concluded their AI agents displayed emergent human behaviors, Wooldridge said "we need to be skeptical" and "question" what AI tells us rather than take it at face value.
