A team of researchers from Stanford University and Google DeepMind has developed a system that can build a working digital replica of your personality from a single two-hour conversation.
This innovative system uses artificial intelligence to analyse responses in real-time, generating what the researchers call “personality agents.” While the concept might conjure up images of clones or digital twins, the team is quick to point out that their aim is far less dystopian. Instead, they see it as a tool to revolutionise sociology and social research, making the study of how people think and feel about the world around them faster, cheaper, and more precise.
The process begins with a two-hour interview conducted by a conversational AI through an interface designed to be intuitive and engaging. A friendly 2D sprite represents the AI interviewer, its white circle pulsating as it speaks. When it’s the participant’s turn, the sprite morphs into a microphone icon, capturing their responses while a progress bar keeps track of the session. It’s an almost game-like experience, but beneath the surface the AI is hard at work. By analysing speech patterns, preferences, and decision-making tendencies, it constructs a digital personality: a model that, the researchers claim, can answer survey questions with 85% accuracy when measured against the real person’s own responses.
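The article doesn’t spell out the mechanics, but the core trick appears to be conditioning a large language model on the raw interview transcript rather than on a hand-built profile. Below is a minimal sketch of that idea, assuming an OpenAI-style chat API; the model name, prompt wording, and file name are all illustrative assumptions, not the researchers’ actual code.

```python
# Minimal sketch (not the researchers' published pipeline): condition a
# chat model on the participant's full interview transcript, then ask it
# to answer questions as that person would. Model name, prompt wording,
# and file name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment


def build_system_prompt(transcript: str) -> str:
    """Fold the two-hour interview transcript into the agent's instructions."""
    return (
        "You are simulating a specific interview participant. Below is the "
        "transcript of a long interview with them. Answer every question "
        "exactly as this person would, in the first person, staying "
        "consistent with their stated views and manner of speaking.\n\n"
        "INTERVIEW TRANSCRIPT:\n" + transcript
    )


def ask_agent(transcript: str, question: str) -> str:
    """Pose a single survey question to the simulated participant."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any capable chat model would do
        messages=[
            {"role": "system", "content": build_system_prompt(transcript)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical transcript file produced by the voice interview.
    transcript = open("participant_interview.txt").read()
    print(ask_agent(transcript, "Generally speaking, do you think most people can be trusted?"))
```

In the study, agents built along these lines were asked standard social-survey questions, and their answers were compared with the participants’ own.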
The researchers, who interviewed 1,000 participants to train the system, are confident in its potential. They envision it as a game-changer for sociology, a field that relies heavily on surveys to understand human behaviour. Traditional surveys are time-consuming and expensive, requiring researchers to draft, distribute, and analyse them meticulously. With AI-generated agents standing in for human respondents, the same questions could be asked and answered in minutes, at a fraction of the cost.
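That 85% figure is easier to interpret with a yardstick: people don’t answer surveys perfectly consistently themselves, so the researchers report an agent’s agreement with a person relative to how well that person agrees with their own answers when re-surveyed later. Here is a hedged sketch of that scoring idea, with all names and numbers invented for illustration.

```python
# Hedged sketch of scoring a personality agent against the real person.
# Function names and the toy numbers below are invented for illustration.

def match_rate(agent_answers: list[str], human_answers: list[str]) -> float:
    """Fraction of survey items where the agent's answer matches the human's."""
    assert len(agent_answers) == len(human_answers)
    hits = sum(a == h for a, h in zip(agent_answers, human_answers))
    return hits / len(human_answers)


def normalised_accuracy(agent_vs_human: float, human_vs_self: float) -> float:
    """Scale agent-human agreement by how consistently the human answers
    the same survey twice; self-consistency is the ceiling for any replica."""
    return agent_vs_human / human_vs_self


# Toy numbers: the agent matches 68% of the person's answers, and the
# person matches 80% of their own answers when re-surveyed later.
print(normalised_accuracy(0.68, 0.80))  # ~0.85
```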
The implications, however, extend beyond academia. The ability to simulate personalities could transform personal AI assistants, making them more intuitive and personalised. Imagine a digital assistant that truly “gets” you, anticipating your needs and preferences with uncanny accuracy. The technology could also enhance applications from market research to product testing, where simulated respondents might stand in for slow and costly focus groups.
Of course, the concept isn’t without its challenges. The ethical considerations are significant — how do we ensure consent when creating and using these digital replicas? What happens if someone uses this technology maliciously? For instance, these AI models could be weaponised in targeted advertising or political campaigns, using their deep understanding of a person’s preferences to subtly manipulate behaviour. There’s also the psychological discomfort of knowing your digital “self” could be interacting with others in ways you can’t control, potentially leading to trust issues or even emotional harm.
The researchers acknowledge these concerns but emphasise their focus on transparency and ethical development. For now, they’re prioritising its use in academic research, where participants give informed consent and retain control over how their data is used.
Whether that future excites or unsettles you, one thing is clear: the line between human and machine is becoming increasingly blurred.
The findings of this research have been published as a preprint and can be accessed here.