![](https://img.huffingtonpost.com/asset/67ad09461b000018008abda0.jpg?ops=scalefit_720_noupscale)
On Tuesday night, a video went viral that seemingly showed a who’s who of Jewish celebrities flipping off Ye, the artist formerly known as Kanye West.
The rapper had ramped up his antisemitism this week by hawking T-shirts emblazoned with a swastika on his website ― and this black-and-white video was purported to portray celebrities finally fighting back.
In the clip, we see entertainers (Jerry Seinfeld, Drake, Scarlett Johansson, along with an embracing Simon and Garfunkel) and tech CEOs (Mark Zuckerberg, OpenAI’s Sam Altman) with Jewish ancestry, each wearing a white T-shirt featuring a Star of David inside a hand making a middle-finger gesture. “Kanye” is written underneath the hand.
The video, set to the Jewish folk song “Hava Nagila,” ends with “Adam Sandler” extending the actual bird to Ye and a call to “Join the Fight Against Antisemitism.”
We used quotation marks there because it isn’t really Adam Sandler. The video was made using AI, and none of the celebrities featured permitted their likeness to be used.
Many online users shared the deepfake, including “Little House on the Prairie” actor and former Screen Actors Guild president Melissa Gilbert. Before she deleted it 19 hours later, Gilbert’s post on Threads had over 8,000 “likes,” 1,800 reposts and 2,400 shares.
When some pointed out that it was fake, others expressed shock. “How can you tell it’s AI?” one woman asked. “The fabric of the shirts move, there is correct shadows? I don’t know how to tell.”
![The viral video, digitally created or altered with AI to seem real, featured “Friends” stars like David Schwimmer and Lisa Kudrow.](https://img.huffingtonpost.com/asset/67ad19551600002400aff76a.png?ops=scalefit_720_noupscale)
On Wednesday, Johansson released a statement to People magazine urging lawmakers to curb the widespread use of artificial intelligence in the wake of the video. (The Marvel star has taken issue with AI before.)
“I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind,” Johansson said in the statement. “But I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality.”
Why was the video shared so widely and so quickly? Experts who study AI and the spread of online mis- and disinformation say the video was of a higher quality than the more obvious AI slop we’re used to seeing.
“Many of the video’s features, like its grayscale, quick cuts and blank background, make it really hard to spot the kinds of tell-tale signs we’ve come to expect from generative AI,” said Julia Feerrar, an associate professor and the head of digital literacy initiatives at the University Libraries at Virginia Tech.
If you scrub through the clip frame by frame, though, Feerrar said there are some signs. At around the 00:28 mark, for instance, the fake Lenny Kravitz’s fingers merge into themselves. (AI is notoriously bad at rendering fingers and hands; it will produce hands with two extra digits, for instance, or fingers protruding from the middle of a palm.)
Still, few people, if any, take the time to freeze-frame a video before “liking” it.
“I would have never noticed that without spending a lot of time and actively looking for it,” Feerrar said.
![Playing armchair AI debunker will be increasingly important as these videos and images get more and more sophisticated, one expert said.](https://img.huffingtonpost.com/asset/67ad341f1b000024008abdbd.jpeg?ops=scalefit_720_noupscale)
Some contextual details made this fake more believable: Many of the celebrities featured have been vocal about the rise in antisemitism in the last few years. And celebrities tend to band together a la Justice League and collectively respond to whatever’s in the news ― think of Gal Gadot’s ill-conceived “celebrities sing ‘Imagine’ to take on COVID” video back in 2020.
The fact that a Hollywood insider like Gilbert shared the clip only lent it more credibility.
Amanda Sturgill, an associate professor of journalism at Elon University and the host of the “UnSpun” podcast, which covers critical thinking and media literacy, agrees that the video is pretty well done overall.
But it also made a mark because it’s a piece of content that people inherently want to “like.” You’d be hard-pressed to find someone whose opinion of Ye differs from the one President Barack Obama let slip back in 2009 ― that the Chicago rapper is a “jackass” ― and that was before Ye’s far-right and antisemitic leanings came to light.
Then there’s the wider issue the video claims to be addressing. A study last year put out by the American Jewish Committee found that 93% of Jews and 74% of U.S. adults surveyed felt that antisemitism is a “very serious problem” or “somewhat serious problem.”
“I think all the ‘likes’ suggest this is a really emotional issue for audiences,” Sturgill told HuffPost. “It’s the kind of thing that people would want to believe is real, and that has a way of short-circuiting one’s usual shenanigan detection abilities.”
Digital literacy is a spectrum, though; some of us are better at discerning fakes than others, and we can’t assume that every “like” and “share” is evidence that someone was duped.
“I’d bet a share of them saw some kind of content about antisemitism and supported it, whether it was real or a deepfake,” said Lee Rainie, the director of the Imagining the Digital Future Center at Elon University in North Carolina.
“A video like this is a social and political happening as much as it is a media literacy issue,” he told HuffPost.
Still, Rainie thinks the spread of, and reaction to, this video is a perfect example of the need for greater digital literacy overall in the age of AI.
In a survey he conducted last spring ahead of the election, 45% of American adults said they’re not confident that they can detect fake photos, and that was across age groups, genders and political lines. As this current viral video shows, more people should probably have their guard up.
“Emerging digital tools are getting so much better at faking images, audio files, and video that the default setting for any media consumer needs to be this: be careful, be skeptical, and be doubtful,” Rainie said.
Instead of pausing the video to hunt for tell-tale signs of a fake, Feerrar suggested relying on context clues.
“One useful step when you see content that’s supposed to represent real people or places is to find out what those people and places actually look like from other sources,” she said.
For instance, while watching, Feerrar noticed that quite a few of the celebrities depicted look more like their younger selves: “I already had that contextual knowledge, but I verified that idea with some quick searches for recent images of the celebrities I recognized,” she said.
Playing armchair AI debunker ― and calling out the more troubling examples of deepfakes to curb their spread ― will be increasingly important as these videos and images get more and more sophisticated, Rainie said.
“The long-term consequences of deepfakes lie in the way they could eventually shatter basic human trust in each other and in the media environment,” he said. “We all need to depend on each other to convey truth and accurate information.”