FBI warns of growing use of AI-generated deepfakes in sextortion schemes

The FBI warned on Monday of the increasing use of artificial intelligence to generate fake videos for use in sextortion schemes that attempt to harass minors and non-consenting adults or coerce them into paying ransoms or complying with other demands.

The scourge of sextortion has been around for decades. It typically involves an online acquaintance or stranger coercing a person into providing a payment, an explicit or sexually themed photo, or some other concession by threatening to share previously obtained compromising images with the public. In some cases, the images the scammers possess are real and were obtained from someone the victim knows or from an account that was hacked. Other times, scammers merely claim to have explicit material without providing any proof.

After convincing victims that their explicit or incriminating images are in the scammers’ possession, the scammers demand some form of payment in exchange for not sending the content to family, friends, or employers. When victims send sexually explicit images as payment, scammers often use the new content to keep the scam going for as long as possible.

In recent months, the FBI said in an advisory released Monday, the use of artificial intelligence to generate fake videos that appear to show real people engaging in sexually explicit activity has grown.

“The FBI continues to receive complaints from victims, including underage children and non-consenting adults, whose photos or videos have been altered into explicit content,” the officials wrote. “The photos or videos are then publicly disseminated on social media or pornographic websites for the purpose of victim harassment or sextortion schemes.”

They went on to write:

As of April 2023, the FBI has observed an increase in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats. Based on recent victim complaints, the malicious actors typically demanded: 1. payment (e.g., money, gift cards), with threats to share the images or videos with family or social media friends if the funds were not received; or 2. that the victim send real sexually themed images or videos.

Cloud-based software and services for creating so-called deepfake videos are plentiful online, ranging from freely available open source offerings to subscription accounts. With the advances in artificial intelligence in recent years, the quality of these offerings has improved drastically, to the point where a single image of a person’s face is all that’s needed to create a realistic video using that person’s likeness.

Most deepfake offerings include, at least ostensibly, protections designed to prevent abuse, for example, a built-in check intended to stop the program from running on inappropriate media. In practice, these guardrails are often easy to bypass, and services that are not subject to such restrictions are available in underground markets.

Fraudsters often obtain photos of victims from social media or elsewhere and use them to create “sexually themed images that appear true-to-life in a victim’s likeness, then circulate them on social media, public forums, or pornographic websites,” FBI officials warned. “Many victims, including minors, are unaware that their images have been copied, manipulated, and disseminated until someone else brings them to their attention, or until they discover the content on the Internet themselves after malicious actors send it to them directly for sextortion or harassment. Once it is circulated, victims can face significant challenges in preventing the continued sharing of the manipulated content or having it removed from the Internet.”

The FBI has urged people to take precautions to avoid having their images used in deepfakes.

“Though seemingly harmless when posted or shared, the images and videos can provide malicious actors with an abundant supply of content to exploit for criminal activity,” the officials said. “Advances in content creation technology and personal images accessible online offer new opportunities for malicious actors to find and target victims. This leaves them vulnerable to long-term embarrassment, harassment, extortion, financial loss, or continued victimization.”

People who have received sextortion threats should preserve all available evidence, especially screenshots, texts, recordings, and emails, along with any usernames, email addresses, websites or platform names used for communication, and IP addresses. They can report sextortion immediately to a local FBI field office or by calling 1-800-CALL-FBI (1-800-225-5324).
