
Sholeh: New chatbot can reform conspiracists

by SHOLEH PATRICK
| September 24, 2024 1:00 AM

With a twist of irony, chatbots may provide answers to the problems they create.

For most people, their first encounters with chatbots were consumer-oriented. “Have a question about your order? Chat with us now.” 

The other day, I was tempted to yell at one of those infuriating robotic voices trying to keep me from a real person (“in a few words, please describe …”), offering options that didn’t apply.

Frustrating but not dangerous. It’s a different story when what we’re consuming is information.

When chatbots first entered social media, researchers warned of a new risk: Chatbots, a form of generative artificial intelligence, could easily be used to spread disinformation in ways much more credible than before. 

Conspiracy theories would spread like wildfire, they warned. That’s pretty much what happened, especially as human “shares” multiplied.

Let’s backtrack a little. 

While they feel more recent, the first chatbot was actually invented in the 1960s by MIT professor Joseph Weizenbaum. He called her ELIZA and pretended she was a psychotherapist. Imagine a wall-sized, old-school computer typing personal questions and responding to your answers.

ELIZA: Is something troubling you?

YOU: My wife ignores me.

ELIZA: Can you give a specific example of that?
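
For the curious, ELIZA’s trick was simple pattern matching: spot a keyword phrase in your sentence, then reflect your own words back as a question. Here is a toy sketch of that idea in Python; the rules and replies below are invented for illustration, and Weizenbaum’s original used a far richer set of scripts:

    import re

    # Toy ELIZA-style rules: a pattern to spot in the user's sentence,
    # and a reply template that reuses the matched words.
    # These two rules are invented for illustration only.
    RULES = [
        (re.compile(r"my (\w+) ignores me", re.I),
         "Why do you think your {0} ignores you?"),
        (re.compile(r"i feel (.+)", re.I),
         "How long have you felt {0}?"),
    ]

    def respond(sentence):
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(*match.groups())
        return "Is something troubling you?"  # default opener

    print(respond("My wife ignores me."))
    # prints: Why do you think your wife ignores you?

No understanding, just reflection. That so many users nonetheless confided in ELIZA reportedly unsettled Weizenbaum himself.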

Fast forward through decades and many incarnations as AI developed. You’d recognize Apple’s Siri (which grew out of SRI International research and reached iPhones in 2011), Microsoft’s Cortana (2014) or Amazon’s Alexa (2014).

Then, in 2022, OpenAI released ChatGPT, the virtual assistant we now see often on websites and smartphone apps. By then, what started out somewhere between useful and entertaining (and still is) had fully developed a nasty side.

Conspiracy theories and misinformation are nothing new, nor is the vigilance it takes to avoid them. For decades after 1969, some said the moon landing was faked. There are those who still insist the Holocaust never happened.

Wild theories on the causes of pandemics and medical conditions repeat through the centuries. One October 2014 study published in the American Journal of Political Science estimated that as many as half of Americans believe at least one false conspiracy or paranoid theory.

The difference chatbots have made is how widely and rapidly falsehoods spread. An exponential difference, one much harder to counterbalance or defuse.

Knowing chatbots would take this to a new level, some researchers were so concerned they invented an AI answer to an AI mess: DebunkBot. 

Its first experimental use challenged the idea that logic and facts alone can’t pull believers out of the rabbit hole. The bot seems to do a better job of getting people to rethink than human argument does.

Just for kicks, try conversing with it at www.debunkbot.com/conspiracies as honestly as you can.

DebunkBot was developed to help AI researchers evaluate chatbot influences and effects. Earlier this month, one such study in the journal Science, “Durably reducing conspiracy beliefs through dialogues with AI,” suggested chatbots can reduce the negative phenomena their misuse helped magnify.

MIT and Cornell researchers asked 2,190 adults across the country to describe a conspiracy theory and rank how much they believed it on a scale of zero to 100 before and after chatting less than 10 minutes with DebunkBot. They knew they were conversing with a chatbot, but they didn’t know the purpose of the experiment.

While the theories they believed differed, most participants believed them less after talking with the bot. Belief levels decreased by an average of 20%, and about one quarter no longer believed the theory at all.

When surveyed two months later, participants’ beliefs remained at their reduced, post-chat levels.
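
To picture the arithmetic behind those headline numbers, here is a minimal sketch in Python. The before-and-after ratings are made up for illustration, not taken from the paper, and treating a rating below the scale’s midpoint as “no longer believing” is my assumption about one reasonable reading of the result:

    # Hypothetical belief ratings on the study's 0-100 scale, before and
    # after chatting with the bot. Numbers invented for illustration.
    before = [80, 65, 90, 70, 55]
    after = [60, 50, 75, 30, 45]

    # Average percent reduction in belief across participants.
    drops = [(b - a) / b * 100 for b, a in zip(before, after)]
    print(f"Average reduction: {sum(drops) / len(drops):.1f}%")

    # Share of participants who crossed from believing (rating of 50 or
    # more) to not believing (below 50). This threshold is my assumed
    # reading of "no longer believed the theory at all."
    flipped = sum(1 for b, a in zip(before, after) if b >= 50 and a < 50)
    print(f"No longer believe: {flipped / len(before):.0%}")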

Chatbots are only as good as their programming and resources. So, while it’s true that some AI is not 100% reliable, this one seems to be close. When a professional fact-checker evaluated a sample of 128 claims made by the AI, 99.2% were true, 0.8% were misleading and none were false. See the study: science.org/doi/10.1126/science.adq1814

It was also fun. I didn’t fit the bill on conspiracy theories, so the DebunkBot and I debated what it called my “plausible” belief in the probability of life outside our solar system. 

With well-organized reasoning, facts, scientific principles and — perhaps the most important element in an attempt to change minds — acknowledging the pluses in my own argument, it convinced me that I really can’t know one way or another. Technically, therefore, the word “probability” is incorrect. My revised belief is in a “possibility.” I was impressed.

Perhaps what’s different about chatting with an AI as opposed to a human is twofold. AI has a vast array of instantly available resources, capable of retrieving massive amounts of data backed up with specifics in mere seconds, making its responses more convincing. AI also removes emotion from the exchange, an element that tends to get in the way of effective communication.

How can this be used on a larger scale? Forums, for example. 

The researchers suggest linking informational chatbots to online sites where topics are discussed or buying ads with facts and links to more information. 

A few such information-only ads already exist on medical and agency sites. In the medical arena especially this could be useful, allowing patients to have simulated conversations about conditions they have or treatments and preventive measures they’re considering.

It could also be applied in civic arenas. States, counties, cities and local public boards could deploy such chatbots to combat misinformation about proposed actions or controversial topics.

Tempting though it may be for neo-Luddites like me to throw the proverbial baby out with the bathwater, AI is here to stay. Finding ways to use it beneficially and, hopefully, ethically is the way forward.

“Absence of evidence is not evidence of absence, but it also isn’t evidence of presence.” — DebunkBot, quoting a scientific maxim

• • •

Sholeh Patrick, J.D. is a columnist for the Hagadone News Network. Email sholeh@cdapress.com.