AI chatbots let you 'interview' historical figures like Harriet Tubman. That's probably not a good idea.
- Tech companies are launching AI chatbots that mimic historical figures.
- Some of these bots are intended to be educational tools — making classrooms interactive.
One of the more bizarre ideas to have emerged from the AI boom is the creation of apps that allow users to "chat" with famous historical figures.
Several of these bots come from AI startups such as Character.AI and Hello History, but major tech companies like Meta are experimenting with the idea, too.
Although some of these chatbots are designed purely for entertainment, others are intended to be educational tools, offering teachers a way of making classes more interactive and helping to engage students in novel ways.
But the bots present a major problem for teachers and students alike, as they "often provide a poor representation and imitation of a person's true identity," Abhishek Gupta, the founder and a principal researcher at Montreal AI Ethics Institute, told Insider by email.
Tiraana Bains, an assistant professor of history at Brown University, said the bots can close off other avenues for students to interact with history — like conducting their own archival research.
"It has this pretense of, you know, ready-made, easy access to knowledge," she said, "when in fact, there could be more exciting, arguably more enjoyable ways for students to figure out how we should be thinking about the past."
Khanmigo and Hello History
The Washington Post put one of these bots to the test, using Khan Academy's Khanmigo bot to "interview" Harriet Tubman, the US abolitionist.
At the time of the test, the Post said the GPT-4-powered technology was still in beta testing and was only available in a select few school districts.
The AI Tubman largely appeared to recount information that could be found on Wikipedia, but it made some key errors and seemed unable to distinguish reliable sources from unreliable ones.
In one instance, for example, the Post asked whether Tubman had said, "I freed a thousand slaves. I could have freed a thousand more, if only they knew they were slaves."
The bot replied, "Yes, that quotation is often attributed to me, although the exact wording may vary."
It's true that the quotation is often attributed to Tubman, but there's no record of her actually having said it, experts told Reuters after the quote resurfaced on social media earlier this year.
Insider put the same question to Hello History, another historical AI chatbot, to see whether it would fare any better.
Hello History's bot, which uses GPT-3 technology, replied almost verbatim, saying: "Yes, that is a quote often attributed to me."
Once again, the bot failed to point out that there is no evidence Tubman said the quote. The responses underscore the tools' limitations and the need for caution when using them for educational purposes.
Sal Khan, the founder of Khan Academy, acknowledges on the bot's website that while AI has great potential, it can sometimes "hallucinate" or "make up facts."
That's because chatbots are both shaped and limited by the datasets they're trained on, which often include sites like Reddit and Wikipedia.
While those datasets contain some credible sources, the bots also draw on more "dubious" ones, Ekaterina Babintseva, a historian of science and technology and an assistant professor at Purdue University, told Insider.
The bots can also recombine details from what they've learned into new text that is entirely wrong.
Potential ethical solutions
Gupta said that to use the bots in an ethical manner, they would at least need defined inputs and a "retrieval-augmented approach," which could "help ensure that the conversations remain within historically accurate boundaries."
IBM says on its website that retrieval-augmented generation "is an AI framework for retrieving facts from an external knowledge base to ground large language models (LLMs) on the most accurate, up-to-date information."
In practice, that means supplementing a bot's training data with external sources of information, which improves the quality of its responses and, because users can see those sources, makes its answers easier to fact-check, per IBM.
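To make the idea concrete, here's a minimal, hypothetical sketch of that approach in Python: a tiny hand-curated knowledge base, a naive keyword retriever, and a prompt that tells the model to answer only from the retrieved sources and cite them. Everything here, from the documents to the prompt wording, is illustrative, not Khan Academy's or IBM's actual implementation.

```python
from collections import Counter

# A toy "external knowledge base" of vetted, citable documents.
# Real systems would use a vector database and semantic search.
KNOWLEDGE_BASE = [
    {
        "source": "Biography excerpt",
        "text": "Harriet Tubman escaped slavery in 1849 and guided "
                "dozens of people to freedom on the Underground Railroad.",
    },
    {
        "source": "Quote fact-check",
        "text": "Experts say there is no record of Tubman saying "
                "'I freed a thousand slaves. I could have freed a "
                "thousand more, if only they knew they were slaves.'",
    },
]


def retrieve(question: str, k: int = 1) -> list[dict]:
    """Rank documents by naive word overlap with the question."""
    q_words = Counter(question.lower().split())

    def overlap(doc: dict) -> int:
        # Counter returns 0 for words absent from the question.
        return sum(q_words[w] for w in doc["text"].lower().split())

    return sorted(KNOWLEDGE_BASE, key=overlap, reverse=True)[:k]


def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to retrieved sources."""
    context = "\n".join(
        f"[{doc['source']}] {doc['text']}" for doc in retrieve(question)
    )
    return (
        "Answer using ONLY the sources below, and cite them. "
        "If the sources don't answer the question, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )


print(build_grounded_prompt("Did Tubman say she freed a thousand slaves?"))
```

The design point is that the bot's claims become checkable: every answer carries a pointer back to a source a student or teacher can verify, rather than text generated from whatever the model absorbed in training.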
It's also "crucial to have extensive and detailed data in order to capture the relevant tone and authentic views of the person being represented," Gupta said.
Effects on critical-thinking skills
Gupta also pointed to a deeper issue with using bots as educational tools.
He said that overreliance on the bots could lead to "a decline in critical-reading skills" and affect our abilities "to assimilate, synthesize, and create new ideas" as students may start to engage less with original source materials.
"Instead of actively engaging with the text to develop their own understanding and placing it within the context of other literature and references, individuals may simply rely on the chatbot for answers," he wrote.
Brown University's Bains said that the contrived or wooden nature of these bots — at least as it stands — might help students see that history is never objective. "AI makes it quite obvious that all viewpoints come from somewhere," she said. "In some ways, arguably, it could also be used to illustrate precisely the limits of what we can know."
If anything, she added, bots could point students toward the kinds of overused ideas and arguments they should avoid in their own papers. "It's a starting point, right? Like, what's the kind of common wisdom on the internet," she said. "Hopefully, whatever you are trying to do is more interesting than the sort of basic summary of some of the more popular opinions about something."
Babintseva added that the bots may "flatten our understanding of what history is."
"History, just like science, is not a collection of facts. History is a process of gaining knowledge," she said.