Deepfakes Threaten National Security, Private Sector Tackles Issue: Broadband Breakfast


WASHINGTON, July 12, 2022 — Representatives from academia and a nonprofit organization diverged at a Bipartisan Policy Center event on Tuesday over whether the government should step in to address the problems associated with artificial intelligence, including bias and discrimination in algorithms.

“We really want the actors to help us establish national and international guidelines,” said Miriam Vogel, president and CEO of EqualAI, a nonprofit that seeks to reduce bias in AI. “We’re going full speed with no lanes, no speed limits to manage expectations.”

While acknowledging the benefits of AI in today’s society, Vogel said its algorithms pose risks that often lead to bias and discrimination. She shared the example of recognition systems that fail to detect certain voices or skin tones.

AI is used in various industries and powers algorithms that provide services to individuals. Panelists referenced the use of AI algorithms in identifying suspects for criminal justice, in diagnosing illnesses in healthcare, and for movie and job recommendations.

Vogel said regulations would set clear expectations for AI companies to minimize those risks.

Adam Thierer, a senior fellow at George Mason University’s Mercatus Center, said he was “a bit skeptical about creating a regulatory AI framework” and instead proposed educating workers on risk management best practices. He called it an “educational institutions approach.”

He said that because of the time required for federal law to be enacted, he wanted to reach AI workers directly, such as computer programmers and the AI innovators “of tomorrow,” to better “integrate the best practices” in AI.

“I think the principles of good cooking practice in design start with an educational focus,” Thierer said.

Thierer said he wanted to entrust this work to trusted third parties to suggest ways forward, including ethical reviews and consultations with AI companies. He said that when it comes to AI rules in different industries, “we don’t need one global standard to govern them all.”

Thierer added that because of how quickly AI is evolving, “it can’t go through the same regulatory process.” He argued that if regulation is put in place, we will lose AI innovators.

Vogel disagreed with Thierer, saying she doesn’t believe there is a risk of losing innovators through AI regulation: “I see regulation as the partner of innovation.”

She said that because there are no government regulations for AI, companies that wish to manage these risks must do so themselves, pointing to EqualAI’s Badge program, which aims to help companies do just that.

“We need to put a governance system in place to make sure continuous testing is happening,” Vogel said.
