Where artificial intelligence and misinformation meet

14 October 2022

A business professor at Arizona State University says internet adversaries will look to the midterm elections to stir the pot with voters

With the midterm elections just a few weeks away, political barbs and rhetoric are about to heat up.

An Arizona State University professor believes that much of the hyperbolic talk will come from malicious bots that spread racism and hate on social media and in the comment sections of news sites.

Victor Benjamin, assistant professor of information systems at the W. P. Carey School of Business, has researched this phenomenon for years. He says the next generation of AI is a reflection of what’s happening in society, and so far, it doesn’t look good.

As AI training becomes increasingly dependent on public data sets, such as online conversations, Benjamin says, it becomes vulnerable to influence from cyber adversaries who inject disinformation and sow social discord.

And these cyber adversaries don’t just write malicious posts on social media. They influence public opinion on issues such as presidential elections, public health and social tensions. If not curbed, Benjamin says, this activity could harm the health of online conversations and the technologies, like artificial intelligence, that depend on them.

Arizona State University News spoke with Benjamin about his research and his views on AI trends.

Editor’s note: Answers have been edited for length and clarity.

Victor Benjamin

Question: We are weeks away from the midterm elections. What do you expect to see in online communities and political discourse?

Answer: Unfortunately, we are sure to see extremist views on both ends of the political spectrum become among the most resonant in online discourse. Many messages will push fringe ideas and attempt to dehumanize the opposition. The point of manipulating social media in this way is to make it appear that these extreme perspectives are commonplace.

Q: When did you start noticing this trend of social manipulation using AI?

A: Online social manipulation has always been common, but activity ramped up around the 2016 presidential election. For example, some social platforms such as Facebook have admitted that they allowed nation-states to purchase advertisements pushing hateful and inflammatory messages about social issues to American users. Moreover, the controversy over masks and COVID-19 has been largely driven by internet adversaries who have played both sides. … More recently, the anti-work movement has also seen some destructive and disheartening messaging that encourages individuals to actively disengage and quit participating in society. We can expect to see more dehumanizing and extremist messaging about various social issues in the upcoming elections.

Q: Why is this happening and who is behind it?

A: Much of this hostile behavior is driven by organizations and nation-states that may have a vested interest in seeing American society divided and its citizens demoralized and unproductive. … Social media and the internet give hostile groups an ability to directly target American citizens that is unprecedented in history. This type of activity is generally recognized in defense communities as a form of “fifth column” warfare, in which a group of individuals attempts to undermine a larger group from within.

Q: How does this affect the development of artificial intelligence in the future?

A: The implications for the future development of artificial intelligence are significant. Increasingly, in order to advance AI, research groups are using public data sets, including social media data, to train AI systems so they can learn and improve. For example, consider the autocomplete feature on phones and computers. This feature is built by letting the AI see millions or even billions of example sentences, from which it learns the structure of the language: which words appear together most frequently, in what order, and more. After the AI learns our language patterns, it can use this knowledge to help us with different language tasks, such as autocompletion.
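To make that concrete, here is a minimal sketch of the idea, not Benjamin’s work or any production system: a toy bigram model that counts which word follows which across a small corpus and suggests the most frequent continuations. The corpus, the `suggest` function and its outputs are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the millions or billions of example
# sentences a real autocomplete model would learn from.
corpus = [
    "thank you for your help",
    "thank you for the update",
    "thank you so much",
]

# Count how often each word follows another (a simple bigram model).
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.lower().split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def suggest(word, k=3):
    """Return the k continuations seen most often after `word`."""
    return [w for w, _ in next_word_counts[word].most_common(k)]

print(suggest("you"))  # ['for', 'so']
print(suggest("for"))  # ['your', 'the']
```

The same counting logic is why poisoned conversations matter: if divisive phrases dominate the training text, they become the model’s most frequent, and therefore most suggested, continuations.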

The problem arises when we think about exactly what the AI learns when we feed it social media data. We’ve all seen the media headlines about how various tech companies have released chatbots, only to take them offline soon after because the AI quickly went astray and developed extreme views. We must ask ourselves, why is this happening?

… This learned behavior by artificial intelligence is merely a reflection of who we are as a society, or at least of who we are as dictated by online discourse. When internet adversaries manipulate our social media to irritate and frustrate Americans, things tend to be said online that don’t reflect the best of us. These conversations, malicious as they are, are eventually aggregated and fed into AI systems to learn from. The AI may then pick up some of those extremist views.

Q: What can be done to reduce this current threat of social discord?

A: One obvious step in the right direction that I don’t see discussed enough is surfacing metadata. Social media platforms hold all of this metadata but are not transparent about it. For example, in the case of the Facebook ads pushing extreme social views, Facebook knew who the advertiser was but never disclosed it to users. I suspect that Facebook users would react differently to ads if they knew the advertiser was a foreign nation-state.

Moreover, on platforms like Twitter or Reddit, a lot of the conversation that lands on the homepage is driven by what’s popular, not necessarily what’s true or factual. These platforms need to be clearer about who is posting those messages and at what frequency, (as well as) whether the conversations are actually organic or appear to be manufactured, etc. For example, if hundreds of social media accounts are simultaneously activated to start posting the same divisive messages that didn’t exist before, they are of course not organic, and platforms should limit this content.
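That last example lends itself to a simple illustration. The sketch below is a hypothetical heuristic, not any platform’s actual detection system: it flags messages posted verbatim by several distinct accounts within a short time window. The post records, thresholds and the `flag_coordinated` function are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account_id, timestamp, text).
posts = [
    ("acct_1", datetime(2022, 10, 14, 9, 0), "They are destroying this country"),
    ("acct_2", datetime(2022, 10, 14, 9, 1), "They are destroying this country"),
    ("acct_3", datetime(2022, 10, 14, 9, 2), "They are destroying this country"),
    ("acct_4", datetime(2022, 10, 14, 9, 2), "Lovely weather today"),
]

def flag_coordinated(posts, window=timedelta(minutes=10), min_accounts=3):
    """Flag texts posted verbatim by many distinct accounts in a short span.

    A real system would also normalize text, match near-duplicates and use
    sliding windows; this only checks exact matches and the overall span.
    """
    by_text = defaultdict(list)
    for account, timestamp, text in posts:
        by_text[text].append((timestamp, account))

    flagged = []
    for text, entries in by_text.items():
        entries.sort()  # order each message's posts by timestamp
        accounts = {account for _, account in entries}
        span = entries[-1][0] - entries[0][0]
        if len(accounts) >= min_accounts and span <= window:
            flagged.append(text)
    return flagged

print(flag_coordinated(posts))  # ['They are destroying this country']
```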

Beyond that, I think everyone needs to develop the right mindset about what the internet is today. … When we encounter information online that we are not familiar with, we have to stop and think about what the source is, what the source’s possible motives for sharing that information might be, and what the information is trying to get us to do. We also need to think about how the systems serving up the information we encounter can bias our behavior and thinking.

Top image courtesy of iStock / Getty Images
