(Noted News) — Researchers from the University of Sheffield have developed a new artificial intelligence-based algorithm that they say can detect Twitter users who are likely to “share unreliable news sources” before they actually do so.
Led by Yida Mu and Nikos Aletras, the research team from the University’s Department of Computer Science analyzed over a million tweets from approximately 6,200 Twitter users using natural language processing methods, which allow computers to analyze large sets of language data.
They used their data to create an algorithm that divides Twitter users into two groups: those who share news from reliable sources and those who share news from unreliable sources. Their paper, published in PeerJ, defines misinformation as “an umbrella term to include any incorrect information that is diffused in social networks (Wu et al., 2019). On the other hand, disinformation is defined as the dissemination of fabricated and factually incorrect information with the main aim to deliberately deceive its audience.”
It is unclear how the researchers determine whether or not disinformation was spread with the “aim to deliberately deceive,” but the paper breaks disinformation and “deceptive news” into three main categories, which are:
“(1) serious fabrications including unverified claims coupled with exaggerations and sensationalism;
(2) large-scale hoaxes that are masqueraded as credible news which could be picked up and mistakenly disseminated;
(3) humorous fakes that present fabricated purposes with no intention to deceive.”
The paper cites infowars.com and disclose.tv as examples of “unreliable sources” because they are flagged as such by journalism watchdog organizations like fakenewswatch.com and PropOrNot. It uses BBC and Reuters as examples of “reliable sources” because both are verified on Twitter and were used as reliable sources in another study cited in the paper, “Identifying and understanding user reactions to deceptive and trusted social news sources” by Glenski M., Weninger T., and Volkova S.
The model learned groups of words to detect in tweets and associated them with a Twitter user belonging to either the “reliable” or the “unreliable” group.
The researchers found that users who spread disinformation are more likely to use words such as “war,” “government,” “Israel,” and “liberal,” among others. Users who share from reliable news sources use words like “gonna,” “wanna,” “rn,” and “okay,” among others.
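The general idea of scoring a user by the cue words in their tweets can be illustrated with a toy sketch. The word lists below are taken from the examples reported above, but the scoring rule, tokenization, and labels are invented for illustration; the paper’s actual model is a trained classifier, not a hand-built word counter.

```python
import re
from collections import Counter

# Illustrative cue words drawn from the article's examples. The real study
# learned its word associations from data rather than using fixed lists.
UNRELIABLE_CUES = {"war", "government", "israel", "liberal", "islam", "media"}
RELIABLE_CUES = {"gonna", "wanna", "rn", "okay", "mood", "excited", "birthday"}

def classify_user(tweets):
    """Crudely label a user 'reliable' or 'unreliable' by counting which
    set of cue words appears more often across their tweets."""
    words = re.findall(r"[a-z]+", " ".join(tweets).lower())
    counts = Counter(words)
    unreliable_score = sum(counts[w] for w in UNRELIABLE_CUES)
    reliable_score = sum(counts[w] for w in RELIABLE_CUES)
    return "unreliable" if unreliable_score > reliable_score else "reliable"

print(classify_user(["the government and the media are hiding the war"]))
# → unreliable
print(classify_user(["gonna be okay rn so excited for my birthday"]))
# → reliable
```

A real system would feed word counts like these into a statistical classifier trained on labeled users, rather than comparing raw tallies.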
According to the paper, users who share unreliable news like to tweet about politics, religion, and controversial news stories.
“We observe that users reposting unreliable news sources in the future are more prevalent in tweeting about politics (note that we exclude user retweets in our study). For example, they use words related to the established political elite (e.g., liberal, government, media, MSM) and Middle East politics (e.g., Islam, Israel).
This may be partially explained by studies which find that people who are more ideologically polarized might be more receptive to disinformation (Marwick, 2018) and engage more with politics on social media (Preoţiuc-Pietro et al., 2017). Users using language similar to the language used by unreliable and hyperpartisan sources can be explained by the fact that these users might already consume news from unreliable sources but they have not reposted any of them yet (Potthast et al., 2018; Pennycook, Cannon & Rand, 2018).”
The paper goes on to say users who share reliable news tend to tweet more about things going on in their personal lives like clothes, food, school subjects, and personal events.
“Users belonging in the reliable news sources category use words related to self-disclosure and extraversion such as personal feelings and emotions (e.g., mood, wanna, gonna, I’ll, excited). Moreover, words such as birthday and okay denote more frequent interaction with other users, perhaps friends.”
Presumably, this type of technology will be used in the future by Twitter and other social media platforms to curate feeds into a more desirable environment, as suggested by Yida Mu, who said:
“Studying and analyzing the behavior of users sharing content from unreliable news sources can help social media platforms to prevent the spread of fake news at the user level, complementing existing fact-checking methods that work on the post or the news source level.”