Editor & Publisher

QUESTION WHAT YOU SEE AND HEAR

Sophisticated technologies now make video manipulation easier and potentially more dangerous

War itself makes things murky. Information is hard to come by; embedded journalists are often few and under assault. Context is challenging to glean in real-time. Communications infrastructure can be uncertain and vulnerable. Propaganda abounds, and now it’s easier than ever to create and digitally distribute propaganda, like deep-fake videos — or “deepfakes,” as they’re sometimes called.

In the thick of Russia’s war on Ukraine, a video began to circulate online, purporting to show Ukraine’s president, Volodymyr Zelenskyy, encouraging surrender to the invading enemy. The video was poorly done and unconvincing to most, and it quickly became the butt of jokes online. However, if there’s one thing the nation has learned in recent years, it’s that misinformation has legs. It spreads fast, people believe it, and sometimes at their peril. It quickly became apparent that even a laughably poor video might still be taken seriously, so Facebook, Twitter, YouTube and other social media platforms took it down.

E&P reached out to Facebook and Twitter to learn how big a problem deepfakes are on their platforms and what they’re doing to detect and squash them when they’re used for propaganda, disinformation or harassment. Twitter did not respond. Meta provided E&P with some clarification on its policies. Images or videos “that have been edited or synthesized, beyond adjustments for clarity, in ways that are not apparent to an average person and would likely mislead an average person” are subject to removal.

“This policy does not extend to content that is parody or satire or is edited to omit words that were said or change the order of words that were said,” according to Meta. “Consistent with our existing policies, audio, photos or videos, whether deepfake or not, will be removed from Facebook if they violate any of our other Community Standards, including those governing nudity, graphic violence, voter suppression or hate speech.”

Meta also employs third-party independent fact-checkers to investigate manipulated or harmful images shared on its platform. The company told E&P that its own engineers are working to develop technology that will better detect deepfakes.

ALL IN GOOD FUN?

Deepfakes have become part of the pop-culture experience on social media platforms, especially video-centric apps like TikTok.

Last year, Hollywood superstar Tom Cruise was the “star” of a series of deepfake videos. He was seen sucking on a lollipop, being silly, growling and grooving to a Dave Matthews Band song. The videos became so popular and were so convincing that their creator, Chris Umé, co-founded a brand-new company, Metaphysic, to produce AI-assisted commercials and digitally restored films.

BuzzFeedVideo on YouTube posted a 2018 video that appears to show former President Barack Obama saying things he never actually said; the video was created using artificial intelligence (AI) technology and a near spot-on voiceover supplied by comedian Jordan Peele.

Sometimes, deepfakes are made just in fun, as comic parody or satire. More maliciously, they are weaponized to harass people, especially women. Search for #deepfake on Twitter, and make sure there are no children in the room when you do, because the results are vulgar and shocking: video after video of women in pornographic scenes, with other women’s faces superimposed.

An alleged deepfake video was at the center of the prosecution’s case in Bucks County, Pennsylvania, where Raffaela Spone was charged with harassing girls on her daughter’s high school cheerleading squad. Prosecutors initially alleged that Spone had doctored photos and videos to depict the young women naked, drinking alcohol and vaping, in violation of their squad’s code of conduct. Two months after the charges were brought, the prosecutor’s office conceded that it could not actually prove the digital images had been purposely altered — only that they’d been distributed to harass the students. Spone was convicted of three counts of cyber-harassment.

“By our records, the number of fake videos online is growing exponentially since 2018, roughly doubling every six months,” according to Sensity AI, a developer that’s designed a deepfake forensics platform for analyzing image and video files.

Sam Gregory is the program director at WITNESS.ORG, a nonprofit that provides tools, counsel and advocacy for people who chronicle human rights stories and abuses. Gregory has been in video since “video” meant “film.” In fact, the organization hosted one of the first video streaming channels online in 2000.

Gregory was E&P Publisher Mike Blinder’s guest on a recent taping of E&P Reports and shared some perspective on how quickly deepfake tech is evolving — a phenomenon they’re studying at the WITNESS Media Lab, a partnership with the Google News Initiative (GNI).

“We work globally with a range of ordinary people and journalists who take out their smartphones and try and film what’s happening in the world,” Gregory explained.

WITNESS was founded more than 30 years ago in the wake of the brutal Rodney King beating at the hands of Los Angeles police officers. Gregory pointed to that moment in time, 1991, when a person witnessing police violence would’ve needed a bulky camcorder to document the incident. These were not widespread technologies.

Today, ubiquitous mobile phones with exceptional cameras give billions of people the ability to capture video on the fly. And while Hollywood had computer-generated imagery (CGI) at its disposal 25 years ago, today’s deepfake tools effectively deliver CGI to the masses.

Gregory explained that there’s a difference between deepfakes and “shallow fakes.”

“Shallow fakes are when someone edits a video or changes the caption or claims it’s from a different date or time,” he said. By contrast, a deepfake renders a realistic impression that the person is doing something they never actually did.

“Sometimes, that’s manipulating an existing image, so it could be, like, I take your lips and I make them move to a different soundtrack,” he said.

AI isn’t only used to create deepfake videos. A recent study by the Stanford Internet Observatory discovered more than 1,000 fake AI-generated avatars on the professional networking platform LinkedIn. NPR’s Shannon Bond investigated further and reported on NPR’s Morning Edition that most of the fake accounts with fake faces were used to make connections and pitch products or services to LinkedIn users. “Think telemarketing in the digital age,” Bond explained.

The tools deepfakers use to create alternative-reality images are plentiful, and many are free. For example, there’s the face-swapping FakeApp, Reface and Face Swap Live; Wombo, a lip-sync app; Jiggy, a face-swap and GIF generator; and DeepFaceLab, which, according to its GitHub page, is behind 95% of the deepfakes circulating online.

Fossbytes.com published a list of the “8 Best Free Deepfake Apps to Have Fun With In 2022.”

As the graphics technologies become more adept and accessible, the quality and effectiveness of the videos will get better, too.

Congress has taken note. On July 29, 2021, Senator Rob Portman (R-Ohio), who serves on the Homeland Security and Governmental Affairs Committee, introduced S.2559, the Deepfake Task Force Act, which would establish a Deepfake Provenance Task Force coordinating between the Department of Homeland Security (DHS) and the White House Office of Science and Technology Policy and produce a report on deepfakes after 90 days of study. The bill is co-sponsored by four Senate Democrats.

A couple of years ago, Facebook — now Meta — held a contest for the best open-source app for detecting deepfakes. Fortune’s Jeremy Kahn reported on the winner in a June 2020 article: Selim Seferbekov, a machine learning engineer based in Belarus, who was awarded $500,000 for an algorithm that could detect deepfake manipulation about 65% of the time.

“Most of the tools are not great, and they require real skills to use,” Gregory said.

But they’re getting better. Gregory has been participating in the Adobe-led Content Authenticity Initiative, which seeks a standardized method of digital authentication.

The ideal is a digital chain of custody of sorts: metadata recording when the image or video was created, how and by whom, along with notations about how and when the file was legitimately altered along the way.
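For readers who want a concrete picture of what such a record might contain, here is a minimal sketch in Python. The field names are hypothetical illustrations of the concept, not the actual Content Authenticity Initiative schema; in a real system, each entry would also be cryptographically signed so that tampering is detectable.

```python
# A minimal sketch of the provenance "chain of custody" idea described above.
# Field names are hypothetical illustrations, not the actual Content
# Authenticity Initiative (C2PA) schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EditEvent:
    """One legitimate, declared alteration to the asset."""
    timestamp: datetime
    tool: str          # the editing application used
    description: str   # e.g., "cropped", "color-corrected"

@dataclass
class ProvenanceRecord:
    """Metadata that travels with an image or video file."""
    created_at: datetime
    capture_device: str
    creator: str
    edits: list[EditEvent] = field(default_factory=list)

    def add_edit(self, tool: str, description: str) -> None:
        """Append a declared edit, extending the chain of custody."""
        self.edits.append(
            EditEvent(datetime.now(timezone.utc), tool, description))

# Example: a photo captured in the field, then color-corrected in the newsroom.
record = ProvenanceRecord(
    created_at=datetime(2022, 3, 16, 14, 30, tzinfo=timezone.utc),
    capture_device="smartphone camera",
    creator="field reporter",
)
record.add_edit(tool="photo editor", description="color-corrected for print")
```

The point of the design is that every alteration is declared and appended rather than hidden, so a newsroom receiving the file can see its whole history at a glance.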

Gregory reflected on the Zelenskyy deepfake, noting that it wasn’t only the poor quality of the video itself that made it easy to debunk. The whole world had the context of having seen the Ukrainian president’s resolve and heroism; it was unlikely that he’d suddenly surrender. Zelenskyy also had the international megaphone to immediately debunk it himself.

“What did we observe when we first started looking at deepfakes? They seemed to focus most on the people who are best protected,” Gregory said — a U.S. president or the president of France, for example.

Deepfakes present a threat not only in the cyber-warfare theater, as was the case with the Zelenskyy video, but also to journalists, who need to determine the veracity of a video before reporting on it or using it to substantiate reporting. It now falls on journalists and documentarians to invest in digital forensics, verifying the authenticity and sources of video content so they can be part of the solution to stopping the spread of disinformation.

While high-tech forensics apps may be fledgling, there are some simple, low-tech detection methods that newsrooms can employ. For example, do subjects in the video not blink or blink oddly? Are skin tones inconsistent? Do lips move in a way that doesn’t seem naturally compatible with the speech? Are backgrounds intentionally muted or blurred? Is there anything about the image that seems off?
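As a concrete illustration of the first of those checks, here is a minimal sketch in Python using OpenCV’s stock Haar cascades. The thresholds are illustrative guesses, not validated values, and a real forensic workflow would rely on far more robust facial-landmark tracking.

```python
# A minimal sketch of one low-tech check mentioned above: flagging video
# in which the subject's eyes never seem to close. Requires opencv-python.
import cv2

def blink_check(video_path: str) -> None:
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(video_path)
    face_frames = 0      # frames in which a face was found
    open_eye_frames = 0  # frames in which open eyes were also found

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face_frames += 1
            roi = gray[y:y + h, x:x + w]
            # The eye cascade tends to fire only on open eyes, so a blink
            # shows up as a frame with a face but no detected eyes.
            if len(eye_cascade.detectMultiScale(roi, 1.1, 5)) >= 2:
                open_eye_frames += 1
            break  # consider only the most prominent face per frame
    cap.release()

    if face_frames == 0:
        print("No face detected; heuristic not applicable.")
        return
    ratio = open_eye_frames / face_frames
    # People blink every few seconds; eyes open in nearly every frame
    # is a reason to look closer (0.99 is an illustrative threshold).
    flag = " -- unusually low blink rate, inspect further." if ratio > 0.99 else ""
    print(f"Eyes open in {ratio:.1%} of face frames{flag}")
```

A heuristic like this proves nothing on its own; it simply tells a newsroom which clips deserve closer scrutiny before publication.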

“The trajectory of the technology is pointing towards this getting easier to do, and so, as we start to look at that, we need to think, what are the protections we have in place,” WITNESS’ Gregory said.

“We shouldn’t panic. We should prepare,” he said.

Of course, the threat that deepfakes present compounds. The more often deepfakes are made and circulated, and the better and more convincing they become, the more likely people are to disbelieve what they see with their own eyes and hear with their own ears. It also becomes more likely that people caught in genuinely compromising videos will claim the footage was deepfaked.

“You could, increasingly, fake media, but it’s also a lot easier just to say it’s been faked and put the pressure on journalists and civil society to prove it’s real,” Gregory said.

“We need to start talking to the public about this,” he added, “but the last thing you want to do is make them think there’s a deepfake behind every door or that every video they encounter is a deepfake because, in fact, that plays into the hands of people who want to basically dismiss real footage.”

Gretchen A. Peck is a contributing editor to Editor & Publisher.

She’s reported for E&P since 2010 and welcomes comments at gretchenapeck@gmail.com.
