Deepfake of principal’s voice is the latest case of AI being used for harm
The most recent criminal case involving artificial intelligence emerged last week from a Maryland high school, where police say a principal was framed as racist by a fake recording of his voice.
The case is yet another reason why everyone — not just politicians and celebrities — should be concerned about this increasingly powerful deep-fake technology, experts say.
“Everybody is vulnerable to attack, and anyone can do the attacking,” said Hany Farid, a professor at the University of California, Berkeley, who focuses on digital forensics and misinformation.
Here’s what to know about some of the latest uses of AI to cause harm:
AI HAS BECOME VERY ACCESSIBLE
Manipulating recorded sounds and images isn’t new. But the ease with which information can be altered is a recent phenomenon, as is the speed with which it spreads on social media.
The fake audio clip that impersonated the principal is an example of a subset of artificial intelligence known as generative AI, which can create hyper-realistic new images, videos and audio clips. The technology has become cheaper and easier to use in recent years, lowering the barrier for anyone with an internet connection.
“Particularly over the last year, anybody — and I really mean anybody — can go to an online service,” said Farid, the Berkeley professor. “And either for free or for a few bucks a month, they can upload 30 seconds of someone’s voice.”
Those seconds can come from a voicemail, social media post or surreptitious recording, Farid said. Machine learning algorithms capture what a person sounds like. And the cloned speech is then generated from words typed on a keyboard.
The technology will only get more powerful and easier to use, including for video manipulation, he said.
WHAT HAPPENED IN MARYLAND?
Authorities in Baltimore County said Dazhon Darien, the athletic director at Pikesville High, cloned Principal Eric Eiswert’s voice.
The fake recording contained racist and antisemitic comments, police said. The sound file appeared in an email in some teachers’ inboxes before spreading on social media.
The recording surfaced after Eiswert raised concerns about Darien’s work performance and alleged misuse of school funds, police said.
The bogus audio forced Eiswert to go on leave, while police guarded his house, authorities said. Angry phone calls inundated the school, while hate-filled messages accumulated on social media.
Detectives asked outside experts to analyze the recording. One said it “contained traces of AI-generated content with human editing after the fact,” court records stated.
A second opinion from Farid, the Berkeley professor, found that “multiple recordings were spliced together,” according to the records.
Farid told The Associated Press that questions remain about exactly how that recording was created, and he has not confirmed that it was fully AI-generated.
But given AI’s growing capabilities, Farid said the Maryland case still serves as a “canary in the coal mine,” about the need to better regulate this technology.
WHY IS AUDIO SO CONCERNING?
Many cases of AI-generated disinformation have been audio.
That’s partly because the technology has improved so quickly. Human ears also can’t always identify telltale signs of manipulation, while discrepancies in videos and images are easier to spot.
Scammers have cloned the voices of purportedly kidnapped children over the phone to extort ransom money from parents, experts say. Others have impersonated a company’s chief executive who urgently needed funds.
During this year’s New Hampshire primary, AI-generated robocalls impersonated President Joe Biden’s voice and tried to dissuade Democratic voters from voting. Experts warn of a surge in AI-generated disinformation targeting elections this year.
But disturbing trends go beyond audio, such as programs that create fake nude images of clothed people without their consent, including minors, experts warn. Singer Taylor Swift was recently targeted.
WHAT CAN BE DONE?
Most providers of AI voice-generating technology say they prohibit harmful use of their tools, but self-enforcement varies.
Some vendors require a kind of voice signature, or they ask users to recite a unique set of sentences before a voice can be cloned.
Bigger tech companies, such as Facebook parent Meta and ChatGPT-maker OpenAI, only allow a small group of trusted users to experiment with the technology because of the risks of abuse.
Farid said more needs to be done. For instance, all companies should require users to submit phone numbers and credit cards so they can trace back files to those who misuse the technology.
Another idea is requiring recordings and images to carry a digital watermark.
“You modify the audio in ways that are imperceptible to the human auditory system, but in a way that can be identified by a piece of software downstream,” Farid said.
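The principle Farid describes can be shown with a toy sketch: hide a bit pattern in the least-significant bits of 16-bit PCM audio samples, where a change of one unit out of 32,768 is inaudible but trivially machine-readable. This is only an illustration of the concept; the tag value and function names here are hypothetical, and production watermarks use far more robust, psychoacoustically shaped schemes that survive compression and re-recording.

```python
# Toy audio watermark: overwrite the least-significant bit (LSB) of the
# first few 16-bit PCM samples with a known tag, then read it back.
# A change of at most 1/32768 of full scale is imperceptible to listeners
# but detectable "by a piece of software downstream," as Farid puts it.

WATERMARK = 0b1011001110001111  # hypothetical 16-bit tag

def embed(samples, tag=WATERMARK, width=16):
    """Return a copy of `samples` with the tag bits in the first `width` LSBs."""
    marked = list(samples)
    for i in range(width):
        bit = (tag >> (width - 1 - i)) & 1   # take bits MSB-first
        marked[i] = (marked[i] & ~1) | bit   # alter sample by at most 1
    return marked

def detect(samples, width=16):
    """Reassemble the tag from the first `width` sample LSBs."""
    tag = 0
    for i in range(width):
        tag = (tag << 1) | (samples[i] & 1)
    return tag

# Fake PCM samples standing in for a real recording.
audio = [1000, -2000, 3000, 123, -456, 789, 0, 5,
         42, -42, 7, -7, 99, -99, 11, -11]
marked = embed(audio)
```

Real schemes spread the mark across the whole signal (e.g., in the frequency domain) precisely so it cannot be stripped by trimming the first few samples, which is the obvious weakness of this LSB sketch.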
Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, said the most effective intervention is law enforcement action against criminal use of AI. More consumer education also is needed.
Another focus should be urging responsible conduct among AI companies and social media platforms. But it’s not as simple as banning generative AI.
“It can be complicated to add legal liability because, in so many instances, there might be positive or affirming uses of the technology,” Givens said, citing translation and book-reading programs.
Yet another challenge is finding international agreement on ethics and guidelines, said Christian Mattmann, director of the Information Retrieval & Data Science group at the University of Southern California.
“People use AI differently depending on what country they’re in,” Mattmann said. “And it’s not just the governments, it’s the people. So culture matters.”
___
Associated Press reporters Ali Swenson and Matt O’Brien contributed to this article.