The Federal Communications Commission on Thursday cracked down on a type of robocall that uses artificial intelligence to trick people.
This comes after New Hampshire residents received robocalls using AI to “clone” President Joe Biden’s voice last month. The fraudsters, posing as Biden, told voters to skip the state’s upcoming primary election. The robocalls were traced back to Texas-based Life Corp, operated by Walter Monk, and a criminal investigation is underway.
Last year, AI exploded in popularity, and companies scrambled to incorporate the technology into their products and offerings. With warnings from industry experts largely ignored, the art, publishing, and entertainment industries were quickly left to wrestle with the implications of the technology themselves.
In the Commission’s declaratory ruling under the Telephone Consumer Protection Act, it said calls that use AI to simulate a human voice are illegal “unless callers have obtained prior express consent.”
In a statement, FCC Chairwoman Jessica Rosenworcel said the agency is “putting the fraudsters behind these robocalls on notice.”
Fraudulent calls in which a cybercriminal pretends to be a loved one or celebrity to con the recipient into divulging personal information aren't new, but the FCC said AI voice-generated calls are on the rise.
With the 2024 presidential election on the horizon, the incident showed how the U.S. remains vulnerable to the kinds of misinformation schemes that plagued the 2016 election.
“The use of generative AI has brought a fresh threat to voter suppression schemes and the campaign season with the heightened believability of fake robocalls,” FCC Commissioner Geoffrey Starks said in a statement.
It’s unclear how much the ruling will help stop the AI robocalls, but the FCC hopes to eventually use the technology against cybercriminals as a “force for good that can recognize illegal robocalls before they ever reach consumers on the phone.”