
Polish research institute warns against deepfake videos featuring celebrities

21.08.2024 10:00
A Polish state research institute has issued new warnings against deepfake videos featuring public figures and celebrities.
Image by Tung Nguyen from Pixabay (Pixabay License)

"Don't believe everything you see on the internet," the NASK research institute said.

It warned the public about manipulated content created using artificial intelligence, which is increasingly being used to deceive viewers online.

Most commonly, the aim is fraudulent commercial gain or political disinformation.

According to NASK, in the first half of this year alone, cybercriminals used the likenesses of more than 120 celebrities in falsified videos.

One of the most common themes in deepfake videos currently circulating online involves fraudulent promotions of financial investments.

These videos often feature the images of high-profile figures, including the president, prime minister and famous athletes, as well as the logos of major state-owned companies.

NASK's research shows that journalists and politicians are the most at risk of having their images stolen for deepfake purposes, making up 32 percent and 21 percent of verified cases respectively.

Alicja Martinek, an expert from NASK's artificial intelligence division, emphasized that deepfakes can often be recognized without any technical expertise. She advises viewers to engage in critical thinking, especially when encountering sensational content.

“There are a few red flags to watch for,” Martinek said. “For example, when deepfake videos promote financial investments that sound too good to be true, you don’t need technological tools to realize something is off. The same applies to advertisements for so-called ‘miracle cures.’”

Martinek also highlighted some common signs of deepfake videos. These include unnatural voice tones, mechanical speech and incorrect use of personal pronouns. Additionally, deepfake algorithms often struggle with dates, numbers and gender consistency, resulting in awkward mistakes in the content.

Visually, there are also telltale signs. Martinek pointed out that one of the easiest ways to spot a deepfake is by observing the synchronization between a person's speech and lip movements.

“Thanks to fast internet connections, we can no longer blame data transfer issues for misaligned audio and video—this is often just a poorly made deepfake. The lips may also take on strange shapes,” she added.

If someone becomes a victim of fraud due to a fake video, NASK advises reporting the incident to the police, where some stations have specialized cybercrime units.

False recordings and related websites can also be reported to CERT Polska, a team that monitors the security of Polish networks, via their website at incydent.cert.pl.

For analytical purposes, such fraudulent videos can be sent to deepfake@nask.pl. NASK's Artificial Intelligence Division is working on algorithms to automatically detect fake videos.

(rt/gs)

Source: IAR, polskieradio.pl