School of Science and Technology

AI can detect the spread of fake news

A computer screen showing a woman shouting into a megaphone.

A genuine quote or an authentic image can suddenly be turned into false propaganda.
Tiny modifications are enough to change the entire meaning.
Örebro University Professor Mehul Bhatt, together with an international team, is researching how artificial intelligence (AI) can analyse text, including fake news and extremist posts.

FACTS: FACT-CHECKING FAKE NEWS

AFP Fact Check

The French news agency is investing heavily in fact-checking in several languages. It offers a wide range of reviewed content and the opportunity to follow fact-checked news from other parts of the world.

AP Fact Check

The news organisation Associated Press fact-checks fake news from all over the world, with a heavy emphasis on the USA. Each week, AP collects inaccurate claims under the heading “Not Real News: A look at what didn’t happen this week”.

CNN politics – Facts First

Political statements in the USA are reviewed, along with what leading US politicians have said or tweeted.

Källkritikbyrån

A Swedish website for source criticism of Swedish content, started by the journalists previously in charge of the newspaper Metro’s Viralgranskaren (the viral examiner). Their section Guider (guides) offers tips for those who want to check texts and images themselves.

Google Fact Check Tools

Searches websites specialising in fact-checking from all corners of the world – many more than those listed here. Searches can be made in several languages.

An international, interdisciplinary team is working together on a project titled “Multimodal Rhetoric in Online Media Communications”, supported by the Center for Interdisciplinary Research (ZiF) in Bielefeld, Germany. The group aims in particular to find new methods to discover and trace how extreme rhetoric in the news arises, spreads and transforms.

And the need is enormous.

Today, analysing video and audio news media is a manual and time-consuming task, which means that much of what is posted in social media or mainstream media has never been fact-checked. One example comes from the French news agency AFP, which has manually tracked down and checked statements circulating about the corona pandemic. To date, more than 600 fake news stories have been exposed. They range from old pictures paired with new, false captions to bogus cures claimed to protect against COVID-19.

“Fake news hunting alone is becoming a full-time occupation. But this only scratches the surface. The broader question is to understand and retrace the explicit strategies by which rhetoric is planted in the public discourse, and how it is covered, disseminated, and mutated,” says Mehul Bhatt, professor and artificial intelligence specialist at Örebro University – and head of the ZiF research team.

By advancing and integrating methods from several fields, the team is trying to develop models and methods to address these challenges, fake news being one of them.

“This will only succeed if AI specialists work hand in glove with researchers working in the social sciences and humanities, as well as practitioners and other stakeholders in journalism and media,” says Mehul Bhatt.

Text and images work together

The project is looking primarily at multimodal rhetorical analysis, which means that a message is analysed as a whole. Instead of focusing only on a text or an image in isolation, researchers look at how text works together with images, or how people in a video communicate using a combination of gestures, speech and intonation. Both social media and traditional media work much like listening to another person speaking directly to us: the receiver of a message is often skilful at recognising irony, nuances and other details, just as in face-to-face conversation.

“I make use of gestures and voice modulation, perhaps raising my eyebrows in a subtle manner at certain things. Subconsciously, I use a combination of techniques to communicate my opinion and how I really feel about something. My body language will nonetheless communicate things not expressed directly in my message.”
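
As a loose illustration of the idea – not the ZiF project’s actual method – a multimodal check can score each modality separately and then compare the signals. The Python sketch below uses an invented Post structure and a crude word-overlap measure to flag posts whose text and image tell different stories, a common pattern when an old picture is reused with a new, false caption. The image tags stand in for the output of a hypothetical image classifier.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str        # caption or article snippet
    image_tags: set  # hypothetical labels produced by an image classifier
    # a real multimodal system would also use speech, gesture and intonation cues

def text_image_mismatch(post: Post) -> float:
    """Crude proxy: how little the wording overlaps with what the image shows."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    if not post.image_tags:
        return 0.0
    overlap = len(words & post.image_tags) / len(post.image_tags)
    return 1.0 - overlap  # 1.0 means text and image share nothing at all

example = Post(
    text="Crowd protests new vaccine mandate in Stockholm",  # invented example
    image_tags={"football", "stadium", "crowd"},             # invented classifier output
)
print(f"text/image mismatch: {text_image_mismatch(example):.2f}")  # high score: worth a closer look
```

A score like this says nothing about truth on its own; in the spirit of the project, it would only point a human analyst toward material worth examining more closely.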

Mehul Bhatt, who is originally from India, has observed how the spread of fake news there has accelerated over the past few years. One contributing factor is that mobile phones and data traffic are now relatively inexpensive, and internet access is widespread. At the same time, many people have little or no education, and so may lack the tools to be critical of carefully planted rhetoric.

“This is a lethal combination. Education has not been democratised, but the use of technology has. For someone who has never been to school, every result that pops up on Google can appear credible.”

Our present time has sometimes been called the post-truth era: a world where truth is overshadowed by communication that suits our convictions. There are plenty of examples of politicians cleverly exploiting this to increase their power and impose their will.

“This is a very alarming trend,” says Mehul Bhatt.

Professor Mehul Bhatt

AI provides a detailed analysis

The project’s international networking will carry on for about a year. During this time, the researchers expect to have developed an initial set of principles and a model for how AI can comb through enormous amounts of news data to detect and analyse news patterns at both the macro and micro level.

“We don't intend for technology to come up with an answer on what is good and bad. Instead, AI will perform a very fine-grained analysis of what has happened, which can be used by someone who has a keen sense of the nature of the problem. This can then be used to carry out better-informed, evidence-based analysis of the media, to form an opinion of what went on as well as what is good and bad about it,” explains Mehul Bhatt.
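
A rough sketch of what such evidence might look like in practice – an assumption on my part, not the project’s actual model – is to group near-identical claims across outlets and dates, so that an analyst can see how often a claim resurfaces, where, and in which variants. The data, site names and similarity threshold below are all invented.

```python
from difflib import SequenceMatcher

# (date, outlet, claim) - invented examples of circulating claims
posts = [
    ("2020-04-01", "site-a", "Drinking hot water cures the virus"),
    ("2020-04-03", "site-b", "Drinking hot water can cure the virus"),
    ("2020-04-03", "site-c", "Masks cause oxygen deficiency"),
    ("2020-04-07", "site-d", "Doctors say drinking hot water cures the virus"),
]

def similar(a: str, b: str) -> bool:
    # crude textual similarity; a real system would use far richer language models
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() > 0.6

# greedy clustering: each post joins the first cluster whose seed claim it resembles
clusters = []
for post in posts:
    for cluster in clusters:
        if similar(post[2], cluster[0][2]):
            cluster.append(post)
            break
    else:
        clusters.append([post])

# report each claim variant: how often it appeared, over which period, on which outlets
for cluster in clusters:
    dates = sorted(d for d, _, _ in cluster)
    outlets = sorted({o for _, o, _ in cluster})
    print(f"claim: {cluster[0][2]!r}")
    print(f"  {len(cluster)} occurrence(s), {dates[0]} to {dates[-1]}, outlets: {', '.join(outlets)}")
```

As in the quote above, the output is not a verdict: it is a trail of where and when a claim variant appeared, left for a human analyst to interpret.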

Text: Jesper Mattsson

Photo: Jasenka Dobric and Jesper Mattsson

Translation: Jerry Gray