MEAWW.COM / NEWS / CRIME & JUSTICE

Scammers use AI-generated son's voice to con couple resulting in heartbreak for sick wife

The scammer told Fred and Kathy that their son was in the county jail and would be charged with DWI, and demanded $15,000 for bail
UPDATED MAR 24, 2023
The scammer reportedly lied to Fred and Kathy, saying their son was involved in an accident (KHOU 11 screenshot/YouTube)

HOUSTON, TEXAS: A couple from Houston, Texas, said that thieves scammed them out of thousands of dollars by using artificial intelligence to clone their son's voice. The scammer reportedly lied to Fred and Kathy, saying there was a car crash involving a pregnant woman, in which their son was badly injured.

"This is a serious situation. He did hit a lady that was six months pregnant," Kathy said she was told, according to KHOU. "It was going to be a high-profile case and she did lose the baby." The couple was told $15,000 was needed to bail their son out of prison. The parents took the situation very seriously and even pushed back Kathy's chemotherapy for her cancer.

'I could have sworn I was talking to my son'

The couple is now speaking out about the incident to raise awareness and help others avoid similar scams. On Wednesday, March 22, the phone rang, and upon answering it, they heard what sounded like their son's voice. The person said he had been in an accident and had hurt a pregnant lady. "I could have sworn I was talking to my son. We had a conversation," Kathy told KHOU.

The scammer told them that their son was in the county jail and would be charged with DWI. The caller also claimed their son had been hurt in the crash and had a broken nose. Kathy said she believed her son was in danger and did not hesitate. While the scammer initially asked for $15,000, they later reduced the amount to $5,000, and even offered to come over and pick up the money.

After handing over the money, Kathy and Fred learned that their son had actually been at work the entire time. Forensics expert Eric Devlin of Lone Star Forensics said vocal cloning has become common and easy to achieve. "They actually don't need as much as you think," he said. "They can get it from different sources: from Facebook, from videos that you have public, Instagram, anything you publish."

Dangers of artificial intelligence

Despite its ability to make work easier for humans, artificial intelligence poses various real-life risks. Some of the major risks include consumer privacy, biased programming, danger to humans, and unclear legal regulation, according to Tableau. There are also risks of autonomous weapons, social manipulation, invasion of privacy, and social grading. As in Fred and Kathy's case, it can also be used to commit fraud.

"There is the potential for AI to be used to facilitate fraudulent activities, such as generating fake or misleading information, or automating scams or other fraudulent schemes. AI can also be used to detect and prevent fraud by analyzing data and identifying patterns that may indicate fraudulent activity," ACFE Insights say. 

Last year, a Regina couple was nearly scammed out of $9,400 after they heard what sounded like their grandson's voice asking for help on the phone. The scammer told Ruth Card and her husband Greg Grace that their grandson had been involved in a car collision and ended up with a broken nose in a holding cell. They were told he had been charged with criminal negligence but that the charges would be dropped if he could pay for the damage to the other vehicle, according to The Washington Post.

In 2019, criminals used AI-based software to impersonate a chief executive's voice and demanded a fraudulent transfer of $243,000. The CEO of a UK-based company thought he was on the phone with his boss when it was actually a scammer. He was asked to urgently pay the amount to a Hungarian supplier, The Wall Street Journal reported.

Deepfakes are another risk AI poses. In February, a deepfake video of Joe Rogan promoting a libido booster for men went viral. Similarly, in December last year, ultra-sexualized images of Billie Eilish likened to "deepfake pornography" went viral, and as many as 11 million people viewed a photo gallery video featuring the singer's face on sexually exaggerated bodies in just four days. The content was eventually removed for violating TikTok's community guidelines around sexual harassment.
