FCC Proposes AI Disclosure Rules for Robocalls, Texts

The agency aims to combat scams by requiring transparency about AI use in automated calls and messages to consumers

Ben Wodecki, Junior Editor - AI Business

August 9, 2024


The Federal Communications Commission (FCC) has proposed new rules that would force callers to disclose if they are using AI in calls and texts. 

In a Notice of Proposed Rulemaking (FCC 24-84), the agency outlines requirements that callers disclose to consumers whenever AI is used in their communications, such as robocalls.

The FCC suggests such disclosures would allow consumers to identify and avoid calls that “contain an enhanced risk of fraud and other scams.”

“Robocalls and robotexts are the number one complaint that consumers raise to the FCC,” said FCC Commissioner Anna M. Gomez. “We agree, they are incredibly frustrating. That is why we continuously work to combat robocalls and robotexts.”

The proposed robocall rules come as the FCC looks to clamp down on AI more broadly. It had already banned the use of AI-generated voices in robocalls after a deepfake of President Biden’s voice was used to target New Hampshire voters earlier this year.

Now the agency is looking more widely at AI.

To implement the rules, the FCC first wants to define what constitutes an AI-generated call.

The newly published notice from the FCC invites stakeholder comments on the proposed plans and seeks further input on ways to alert consumers to AI-generated unwanted and illegal calls and texts.


“AI technologies can bring both new challenges and opportunities to combat this scourge, and responsible and ethical implementation of AI technologies is crucial to strike a balance,” Gomez said.

In addition to identifying potential scammers, the FCC also wants to introduce protections to help people with disabilities use AI for phone communication.

“Transparency alone will not deter fraudsters,” said Kush Parikh, president at Hiya, which provides security solutions for mobile operators. “They will continue to misuse technology and will always find loopholes in the regulation, or in carrier technology.

“Guidance from the FCC for telecom providers and businesses on how to block AI-generated deepfakes in real-time and alert consumers would be a welcome development, but it is not enough unless it is mandated. Scammers are always using the latest technology, and it is imperative that telecom providers also offer technology immediately to prevent deepfakes and safeguard consumers.”

Beyond robocalls, the FCC previously proposed rules that would force political ads to disclose if they feature AI-generated content.

The public comment period for the proposal ends in September.

This story first appeared in IoT World Today's sister publication AI Business.


About the Author

Ben Wodecki

Junior Editor - AI Business

Ben Wodecki is the junior editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to junior editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others.
