Tom Cruise test shows people cannot detect deepfake videos even when told to expect them

Research shows that most people are unable to tell that they are watching a “deepfake” video even when they are told that the content they are watching has been digitally altered.

The term “deepfake” refers to a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they never said.

Notable examples include a manipulated video of Richard Nixon delivering a presidential address about Apollo 11 and another of Barack Obama insulting Donald Trump; some researchers have suggested that illicit uses of the technology could make it the most dangerous form of crime in the future.

In the first experiment, conducted by researchers from Oxford University, Brown University and the Royal Society, participants watched five unaltered videos, followed by four unaltered videos and one deepfake, with viewers asked to detect which video had been doctored.

The researchers used deepfake videos of Tom Cruise created by visual effects artist Chris Umé, which show the American actor performing magic tricks and telling jokes about Mikhail Gorbachev in clips uploaded to TikTok.

The second experiment was identical to the first, except that viewers were shown a warning that one of the videos would be a deepfake.

Participants who received the advance warning identified the deepfake 20 percent of the time, compared with 10 percent of those who were not warned, but even with a direct warning more than 78 percent of people could not distinguish the deepfake from authentic content.

“Individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content,” the researchers wrote in a preprint of the paper, “compared to a control group who viewed only authentic videos.” The paper is expected to be peer-reviewed and published within a few months.

Participants made the same errors regardless of their familiarity with Mr. Cruise, their gender, their level of social media use, or their confidence in their ability to detect altered video.

The researchers found that the only trait that correlated closely with the ability to detect deepfakes was age, with older participants better able to identify them.

The researchers predict that “the difficulty of manually (with the naked eye) distinguishing real videos from fake ones threatens to diminish the informational value of video media entirely.”

“As people come to grips with deepfakes’ capacity to deceive, they will rationally distrust all online videos, including authentic content.”

If this trend continues, people will have to rely on content warnings and moderation on social media to ensure that misleading videos and other misinformation do not become endemic to the platforms.

However, Facebook, Twitter and other sites routinely rely on ordinary users reporting content to moderators – a task that becomes difficult if people cannot distinguish misinformation from authentic content.

Facebook in particular has been criticized repeatedly in the past for not providing adequate support to content moderators and for failing to remove fake content. Research by New York University and France’s Université Grenoble Alpes found that from August 2020 to January 2021, articles from known purveyors of disinformation received six times as many likes, shares, and interactions as legitimate news articles.

Facebook claimed that such research does not show the full picture, as engagement with Pages “should not … be confused with how many people actually see it on Facebook.”

The researchers also raised concerns that “such warnings might be written off as politically motivated or biased,” as evidenced by the conspiracy theories surrounding the COVID-19 vaccine and Twitter’s labelling of former President Trump’s tweets.

Fifteen percent of people in a 2020 study believed the aforementioned deepfake, in which President Obama appears to insult then-President Trump, was genuine, even though the content itself was “highly improbable.”

The researchers warn that general mistrust of online information is a potential consequence of both deepfakes and content warnings, one that “policymakers should take … into account when evaluating the costs and benefits of curating online content.”
