Manipulated Realities: Tackling AI’s Role in Hate Speech and Misinformation Campaigns

Elgin Nelson

The rise of Artificial Intelligence (AI) has ignited heated debate and apprehension across the internet, raising crucial concerns about the potential weaponization of this rapidly evolving technology.

Techniques such as AI-synthesized voice technology have already been employed in deceptive robocalls impersonating political figures, exemplified by a fake message, purporting to come from President Joe Biden, that discouraged New Hampshire Democrats from voting.

And in recent news, prominent individuals like Sean “Diddy” Combs, Steve Harvey, Denzel Washington, and Bishop T.D. Jakes have been targeted with disinformation campaigns. YouTube has become inundated with misleading videos featuring a blend of artificially generated and altered media that portrays well-known Black celebrities in a false light.

YouTube responded to the situation by issuing a statement through a spokesperson to NBC News detailing the actions taken against the violative and hateful content, including terminating channels and removing videos that breach its Terms of Service.

Angelica Nwandu, CEO of The Shade Room, has noted a troubling rise in phony AI-manipulated stories about African American celebrities, which mislead her outlet’s audience and prompt questions about whether the fabricated stories are real.

“We’ve seen these pages that pop up on YouTube or TikTok, and they will have an AI-generated picture of Rihanna crying over A$AP [Rocky] going to jail, and it’s completely fake,” Nwandu told NBC News. “Our audience will DM and say ‘Why aren’t you posting this news?’ ‘Why aren’t you covering this story?’ Because they believe these pages.”

With AI platforms like ChatGPT attracting upwards of 100 million users, and tech giants like Google on the cusp of releasing AI ventures projected to reach over a billion users, one expert suggests that we are only “scratching the surface of potential applications.” Also on the rise is the fear that malevolent uses of AI could soon overshadow its current benefits.

According to multiple reports, AI is being exploited by extremist groups amid conflicts such as the Israel-Hamas war, and is even being used to spread misinformation in the United States. These hate groups target vulnerable communities that may be susceptible to such threats.

The most prevalent tools used by these hate groups are memes, which amplify hateful messages across social media platforms. Some groups are also manipulating the technology to disrupt public meetings, city council gatherings, and online events by injecting hate speech into those spaces.

A recent survey by the Anti-Defamation League shows that over half of American adults have been harassed online at some point in their lives, and more than a quarter experienced online harassment within the last year alone.

“Overall, reports of each type of hate and harassment increased by nearly every measure and within almost every demographic group,” the survey finds.

However, there is a glimmer of hope: researchers at the University of Michigan have unveiled Rule by Example (RBE), a novel tool that uses deep learning to identify hate speech by comparing incoming text against a database of exemplars of previously identified hateful content.
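To make the exemplar-matching idea concrete, the sketch below flags text that closely resembles any entry in a database of known-hateful exemplars. It is a minimal illustration, not the RBE implementation: the placeholder exemplar strings, the 0.5 threshold, and the function name are assumptions, and TF-IDF similarity stands in for the deep neural encoder the actual tool uses.

```python
# Minimal sketch of exemplar-based hate speech flagging: compare
# incoming text against a database of known-bad exemplars and flag
# anything sufficiently similar. TF-IDF is a stand-in here for the
# learned deep representations used by tools like RBE.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical exemplar database (placeholders, not real content).
EXEMPLARS = [
    "exemplar of previously identified hateful message 1",
    "exemplar of previously identified hateful message 2",
]

THRESHOLD = 0.5  # assumed cutoff; a real system would tune or learn this

vectorizer = TfidfVectorizer().fit(EXEMPLARS)
exemplar_vectors = vectorizer.transform(EXEMPLARS)

def flag_if_similar(text: str) -> bool:
    """Return True when `text` closely matches any known exemplar."""
    scores = cosine_similarity(vectorizer.transform([text]), exemplar_vectors)
    return float(scores.max()) >= THRESHOLD
```

One appeal of this design, which the researchers highlight, is transparency: a flagged post can be traced back to the specific exemplar it resembles, rather than to an opaque classifier score.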

Researchers believe that as RBE and similar tools advance, they will offer greater defense against online hate speech and misconduct. One university researcher noted the need for improved automated detection and moderation methods.

“More and more tech companies and online platforms are developing automated tools to detect and moderate harmful content,” the researcher said. “But the methods we’ve seen so far have a lot of room for improvement.”
