02.06.2023

Real or fake text? We can learn to spot the difference

Summary: Beyond concerns about job loss and disruption to education, it is clear that the impact of large-scale language models like ChatGPT will be far-reaching. These sophisticated tools give rise to broader societal issues, including the potential for artificial intelligence to perpetuate social biases, facilitate fraud and identity theft, propagate fake news and spread misinformation.

A group of experts aims to help the public better understand and manage these risks. They have found that individuals can be trained to identify machine-generated text and distinguish it from human-written language.

The impact of ChatGPT, the large-scale language model, will extend beyond the job market and education system, where fears of disruption currently dominate.

Its effects will reach into many other spheres of life, sparking apprehension about its potential to perpetuate social biases, enable fraud and identity theft, propagate fake news, spread misinformation and more.

As experts aim to help the public manage these risks, they note that individuals can be trained to distinguish machine-generated text from human-written language. To address concerns about machine-generated text, a team of researchers from the University of Pennsylvania School of Engineering and Applied Science is working to equip technology users with the knowledge to identify such content.

The group's peer-reviewed paper, presented at the Association for the Advancement of Artificial Intelligence's February 2023 meeting, shows that we can learn to differentiate between machine-generated and human-written text.

By improving this skill, people can better evaluate information sources and make more informed decisions when using technology, whether selecting recipes, sharing articles, or entering payment information.

Chris Callison-Burch, an Associate Professor in the Department of Computer and Information Science, led the study with Ph.D. students Liam Dugan and Daphne Ippolito, both from CIS. The results of their research support the notion that AI-generated text can be detected.

According to Callison-Burch, the team has demonstrated that individuals can develop the ability to identify machine-generated texts. He points out that people often start with preconceived notions about the errors machines are likely to make, but these assumptions are not always accurate. With sufficient examples and explicit guidance, we can improve our ability to detect the particular types of mistakes that machines are prone to making.

Dugan adds that while AI can produce text that is fluent and grammatically correct, it is not infallible. The researchers found that machines tend to make specific kinds of errors, such as errors of logic, reasoning, relevance, and common sense, that become recognizable with practice.

The research employs Real or Fake Text?, a web-based training game that is unique in its approach among detection studies.

In traditional detection studies, individuals are asked to label a passage as machine-generated or human-written with a yes-or-no response, and their accuracy determines their score. This method does a poor job of recreating how people actually encounter AI-generated text.

The Penn team has created a more effective training task by using examples that begin as human-written text and transition into machine-generated text. Participants must identify the point of transition and describe the errors that reveal it, receiving a score as feedback. This approach helps trainees become proficient at detecting the key features of machine-generated text that signal such errors.
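The boundary-guessing task described above can be sketched in a few lines of Python. Note that the scoring rule below (`score_guess`) is a hypothetical illustration of the idea of rewarding guesses close to the true transition point; it is not the game's actual formula.

```python
def score_guess(true_boundary: int, guessed_boundary: int, max_points: int = 5) -> int:
    """Score a player's guess of the sentence index where a passage
    switches from human-written to machine-generated text.

    Hypothetical rule: an exact guess earns full points; each sentence
    the guess lands after the true boundary costs one point; a guess
    before the boundary (while the text is still human-written) earns
    nothing, since no machine text has appeared yet.
    """
    if guessed_boundary < true_boundary:
        return 0
    return max(0, max_points - (guessed_boundary - true_boundary))

# Example: the machine takes over at sentence 3.
print(score_guess(3, 3))  # exact guess: full points
print(score_guess(3, 5))  # two sentences late: partial credit
print(score_guess(3, 1))  # guessed while the text was still human-written
```

The key design point is that partial credit for near misses gives trainees graded feedback, unlike the all-or-nothing yes/no format of traditional detection studies.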

The study's findings demonstrate that participants performed considerably better than chance, indicating that AI-generated text can be detected to some degree.

According to Dugan, "Our approach not only transforms the task into a game, rendering it more enjoyable, but it also presents a more authentic environment for training purposes," since the texts produced by ChatGPT begin with human-supplied prompts.

This research not only pertains to current AI functionality but also foresees a bright and reassuring future for our interactions with this technology.

Yasmin Anderson

AI Catalog's chief editor
