
Researchers have created an AI that can simulate your personality with 85% accuracy after a 2-hour chat!

Nov 29, 2024, 09:53 IST
Business Insider India
Imagine sitting across from an AI interviewer, answering questions about your life, preferences, and opinions for two hours. When the session ends, the AI confidently declares it can replicate your personality — answering questions, making decisions, and reacting to situations just like you would.

A team of researchers from Google DeepMind, along with computer scientists and sociologists, has developed a groundbreaking AI system capable of creating eerily accurate simulations of individual personalities, blurring the line between technology and identity.

This innovative system uses artificial intelligence to analyse responses in real-time, generating what the researchers call “personality agents.” While the concept might conjure up images of clones or digital twins, the team is quick to point out that their aim is far less dystopian. Instead, they see it as a tool to revolutionise sociology and research, making it faster, cheaper, and more precise to study how people think and feel about the world around them.

The process begins with a two-hour interview conducted by a conversational AI through an interface designed to be intuitive and engaging. A friendly 2D sprite represents the AI interviewer, its white circle pulsating as it speaks. When it’s the participant’s turn, the sprite morphs into a microphone icon, capturing their responses while a progress bar keeps track of the session. It’s an almost game-like experience, but beneath the surface, the AI is hard at work. By analysing speech patterns, preferences, and decision-making tendencies, it constructs a digital personality — a model that, the researchers claim, matches the real person’s answers with an impressive 85% accuracy.

The researchers, who interviewed 1,000 participants to train the system, are confident in its potential. They envision it as a game-changer for sociology, a field that relies heavily on surveys to understand human behaviour. Traditional surveys are time-consuming and expensive, requiring researchers to draft, distribute, and analyse them meticulously. With AI-generated personality agents, researchers could simulate responses to different scenarios without needing to interview thousands of individuals each time. This could dramatically reduce costs while increasing the scale and precision of social research.


The implications, however, extend beyond academia. The ability to simulate personalities could transform personal AI assistants, making them more intuitive and personalised. Imagine a digital assistant that truly “gets” you, anticipating your needs and preferences with uncanny accuracy. The technology could also enhance human-robot interaction, paving the way for robots that respond to emotions and situations as naturally as humans.

Of course, the concept isn’t without its challenges. The ethical considerations are significant — how do we ensure consent when creating and using these digital replicas? What happens if someone uses this technology maliciously? For instance, these AI models could be weaponised in targeted advertising or political campaigns, using their deep understanding of a person’s preferences to subtly manipulate behaviour. There’s also the psychological discomfort of knowing your digital "self" could be interacting with others in ways you can’t control, potentially leading to trust issues or even emotional harm.

The researchers acknowledge these concerns but emphasise their focus on transparency and ethical development. For now, they’re prioritising its use in sociological research, but the possibilities for broader applications are undeniable.

Whether that future excites or unsettles you, one thing is clear: the line between human and machine is becoming increasingly blurred.

The findings of this research have been published as a preprint.