
How Should AI Be Policed Amid Misinformation & Harm to Communities of Color?

By Gerald Bell, Contributor

Fear and uncertainty surrounding Artificial Intelligence (AI) are mounting as the FBI and other global legal authorities intensify warnings that, without swift action, the technology threatens to fuel voter misinformation, disrupt education and the economy, and deepen racial discrimination.

Professor Apryl Williams, a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, points to recent research showing that AI technologies struggle to identify the images and speech patterns of nonwhite people. She said Black AI researchers at giant tech firms have raised concerns about how that exclusion can harm the Black community.

“The thing that scares me most for people of color is that we don’t understand just how much AI is already part of our everyday life,” laments Williams, who studies gender and race in digital and algorithmic ‘techno-cultures’. “AI is a fascinating development for our society. But it’s important to remember the harm that cannot be ignored, especially pertaining to racial bias and discrimination against African Americans.”

Williams contends that to police AI, “There needs to be some kind of legislative prompting where tech companies suffer legal consequences, and economic consequences, when they violate trust with the public, and when they extract data without telling people.”

Amid a pivotal election season, voters in New Hampshire were targeted by an AI-generated robocall that faked President Joe Biden’s voice in an attempt to dissuade Democrats from voting in the state’s primary election earlier this year. The robocalls were later traced to a political consultant who faces state criminal charges and a federal fine for using artificial intelligence to impersonate the President.

Governor Gavin Newsom signed an Executive Order in 2023 requiring state agencies to examine how AI might threaten the security and privacy of California residents. The directive further authorized state employees to experiment with AI tools to identify safe methods to integrate them into their operations. 

The Governor’s order is crucial because California is home to 35 of the top 50 AI companies. The technology is being aggressively co-opted by criminals, rogue states, extremists, and special interest groups to manipulate people for economic gain and political advantage, and to spread hate. That has left the White House and other governments struggling with how to regulate artificial intelligence.

“There’s a Pandora’s box being opened here, and we just want it done safely,” said Newsom in a Bloomberg News interview. “We can’t make the same mistakes we did with social [media].”

Law enforcement faces a steep challenge in regulating AI: the same advanced technological weapon it must police is increasingly accessible to sophisticated foreign governments that want to interfere in U.S. affairs.

The three countries of most concern to the FBI in the current election year are Russia, Iran, and China. Officials admit they can’t yet prove what these countries hope to achieve by using AI to influence American elections.

“I worry that we are less prepared for foreign intervention in our elections in 2024 than we were in 2020,” said Mark Warner, the chairman of the Senate Intelligence Committee. According to a U.S. Intelligence report, Russia sought to interfere in both the 2016 and 2020 elections.

In a world of easily generated fake content, gone are the days of “seeing is believing.” Tech experts therefore advise individuals to adopt a “trust but verify” approach to media consumption, especially for those in education.

Walden University (WU) published a report examining the ethical considerations at the intersection of AI and education, zeroing in on bias, misinformation, and the future of educators’ roles. The report contends that AI is only as knowledgeable as the information it has been trained on; if a biased AI tool is used for grading, students could receive low grades based on race or gender.

“Neither students nor teachers should assume that information provided by AI is accurate,” the WU report noted.

Researchers and developers are prioritizing the ethical implications of AI to avoid negative societal impacts. Tech experts say it’s important for elected public officials to work closely with private firms to develop accountable, transparent algorithms.

“AI-generated content, such as deepfakes, contributes to the spread of false information and the manipulation of public opinion,” said Warner, who has seen this play out among child predators. “Efforts to detect and combat AI-generated misinformation are critical in preserving the integrity of information in the digital age. So, regulation is a must.”
