
Campus Queries: What are deepfakes?

(Cody Wilson/Daily Bruin)

By Lauren Bui and Karina Seth

Dec. 9, 2019 12:19 a.m.

Campus Queries is a series in which Daily Bruin readers and staff present science-related questions for UCLA professors and experts to answer.

Q: What are deepfakes?

A: Deepfake videos use machine learning algorithms to transform a recording of someone into artificial scenarios that never actually occurred.

This advanced technology could be abused by individuals with cruel intent. For example, deepfakes are now being used to produce fake pornography or to attack political candidates by making it seem like they said or did things they never actually said or did, said Jacob Foster, an assistant professor of sociology and computational sociologist at UCLA.

It is necessary to understand how deepfakes work to combat this problem and control it before it gets out of hand, he added.

“You can imagine situations where producing a deepfake might be covered by free speech, and others where it shouldn’t be,” said Mark Green, a mathematics professor at UCLA. “(Deepfakes) require greater vigilance and greater critical thinking in evaluating information you receive.”

Deepfakes fall under the umbrella of deep learning, Green said. Deep learning is a subset of artificial intelligence built on artificial neural networks, which express input-output relations that allow machines to make decisions.

“Synthetic neurons take inputs with different weights and then either fire or don’t fire,” Foster said. “With deep learning, you stack these (inputs) one on top of another.”
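For readers curious what that stacking looks like in practice, here is a minimal Python sketch, not taken from the article, of neurons that weight their inputs and either fire or don't, with one layer's output feeding the next. The weights are random placeholders standing in for values a real network would learn from data.

```python
import numpy as np

def neuron(inputs, weights, threshold=0.0):
    # A synthetic neuron: weight its inputs, then fire (1) or don't (0).
    return 1.0 if np.dot(inputs, weights) > threshold else 0.0

def layer(inputs, weight_matrix):
    # A layer applies many neurons, each with its own weights, to the same inputs.
    return np.array([neuron(inputs, w) for w in weight_matrix])

# "Stacking" layers: the output of one layer becomes the input of the next.
rng = np.random.default_rng(0)
x = rng.random(4)                                     # toy input
hidden = layer(x, rng.standard_normal((3, 4)))        # first layer: 3 neurons
output = layer(hidden, rng.standard_normal((1, 3)))   # second layer: 1 neuron
print(output)
```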

Deepfakes are created through dimensionality reduction, a process of condensing billions of pixels, Green said. There are a finite number of expressions a face can make because our faces have a finite number of muscles, and deepfakes allow these expressions to be projected onto a face through artificial intelligence, he added.
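The article does not name a specific dimensionality-reduction method. Principal component analysis is one standard technique, and the hypothetical sketch below uses it only to illustrate the general idea Green describes: condensing a huge number of pixel values into a small set of numbers from which an approximate image can be rebuilt.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in dataset: 200 grayscale "face" images of 64x64 pixels,
# flattened so each image is one row of 4,096 pixel values.
rng = np.random.default_rng(0)
images = rng.random((200, 64 * 64))

# Condense each image into 20 numbers that capture most of the variation
# across the dataset, then reconstruct an approximate image from them.
pca = PCA(n_components=20)
codes = pca.fit_transform(images)             # shape (200, 20)
reconstructed = pca.inverse_transform(codes)  # back to shape (200, 4096)
print(codes.shape, reconstructed.shape)
```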

One famous example of a deepfake is BuzzFeed's video with actor Jordan Peele impersonating former U.S. President Barack Obama, making Obama appear to deliver a public service announcement he never actually gave.

“We’re entering an era in which our enemies can make anyone say anything at any point in time,” says Peele as Obama in the deepfake.

Foster said this video exemplifies how words, gestures, and facial expressions can be transferred in real time.

When someone is depicted poorly or inappropriately in a deepfake, it is difficult for our minds to separate this artificial image from the truth, Foster said.

“These are harms that are really hard to undo,” he said.

According to a New York Times article published in November, deepfakes have become an international threat to our notions about what is fake and what is authentic.

The article cited an incident in which the president of Gabon, a small country, left for medical care abroad. His government released an alleged proof-of-life video, which opponents denounced as fake. This confusion, the article stated, contributes to what is known as the liar's dividend, in which the growing prevalence of deepfakes and attempts to debunk them also allows people to discredit real videos as fakes.

Foster said there is an arms race dynamic of detecting deepfakes versus creating them as more sophisticated technologies are developed.

“Within the context of machine learning, there is a race between building fancier algorithms to make (deepfakes) and building fancier algorithms to detect (deepfakes),” Foster said.

One potential solution to this rising problem is cryptography: generating a long code that authenticates the original video, Green said.

“If you change anything about the video, you’ll change this code,” Green added.
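Green does not specify the scheme, but a cryptographic hash behaves exactly this way. The short sketch below, with a hypothetical file name, computes a fingerprint of a video file that changes completely if even a single byte of the recording is altered.

```python
import hashlib

def video_fingerprint(path):
    # Compute a SHA-256 digest of the file, reading it in 1 MB chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# A published fingerprint can later be checked against any copy of the video;
# if anything about the video changed, so did the code. (File name is hypothetical.)
# print(video_fingerprint("original_interview.mp4"))
```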
