Recognizing AI’s misinformation


There’s no doubt about it: we are living in the information age. Anyone can search the internet for whatever information their heart desires. Unfortunately, this creates a perfect opportunity for misinformation to be widely spread. But how susceptible is the public to this misinformation, especially in the age of COVID-19 when people have become exceedingly reliant on online content for both learning and other professional tasks? And what challenges exist in protecting people and computer networks from being misled by AI-generated misinformation?

Like people, computers must be trained to spot threats. Their threats, however, come in the form of malware, such as computer viruses, which can be considered a form of misinformation.

Conrad Tucker, a professor of mechanical engineering at Carnegie Mellon University, has teamed up with the Challenger Center and the RAND Corporation to investigate the link between AI-generated threats to humans and to computers in the digital landscape. The Challenger Center will explore how K-12 students identify misinformation, CMU will study college students, and RAND will study teachers and other adults. The project will also introduce students to AI concepts.

AI can generate false data that looks like real data, yet many people are unaware of how much these capabilities threaten the free flow of, and trust in, publicly available information. This makes them even more susceptible to believing what researchers call "hyper-realistic data." The researchers hope that informing people about AI-generated misinformation will help them recognize it or, at the very least, make them more aware of its potential impact on society.

“It is critical for society to understand the threats that may compromise the veracity of information and computer networks,” Tucker said. “We are therefore motivated by the need to understand how susceptible different segments of society (both humans and computers) are to the potential for AI-generated misinformation.”

The project should have two outcomes. First, science education researchers will gain insight into which features make misinformation more believable to humans. Second, security researchers may be able to link human susceptibility to misinformation with computer susceptibility to malware. The researchers caution that a single round of research may not achieve these outcomes; the project may require multiple trials to uncover usable patterns.

“It is possible that the project team will home in on critical characteristics of AI-generated content that drive deception, providing a strong foundation for understanding how to improve human recognition of misinformation,” Tucker said. “However, it is also possible that, on this first attempt, the team may not uncover these critical characteristics.”

The project was awarded an EAGER grant in 2020. NSF’s Early-concept Grants for Exploratory Research (EAGER) program funds high-risk, high-reward research. The project team includes Jared Mondschein and Christopher Doss from the RAND Corporation and Lance Bush, Valerie Fitton-Kane, and Denise Kopecky from the Challenger Center.