JERUSALEM – Over the past two weeks, since the Palestinian terrorist group Hamas carried out its deadly attack in southern Israel, killing some 1,400 people, there has been fear that a new front in the old war between Israelis and Palestinians could open up – in the digital realm.
While doctored images and fake news have long been part of the Middle East wartime arsenal, the arrival less than a year ago of easy-to-use generative artificial intelligence (AI) tools makes it highly probable that deepfake visuals will soon be making an appearance on the war front too.
“Hamas and other Palestinian factions have already passed off gruesome images from other conflicts as though they were Palestinian victims of Israeli assaults, so this is not something unique to this theater of operations,” David May, a research manager at the Foundation for Defense of Democracies, told Fox News Digital.
He described how in the past, Hamas has been known to intimidate journalists into not reporting about its use of human shields in the Palestinian enclave, as well as staging images of toddlers and teddy bears buried in the rubble.
“Hamas controls the narrative in the Gaza Strip,” said May, who follows Hamas’ activities closely, adding that “AI-generated images will complicate an Israeli-Palestinian conflict already rife with disinformation.”
There have already been some reports of images recycled from other conflicts, and last week a heartbreaking photograph of a crying baby crawling through the rubble in Gaza was revealed to be an AI creation.
“I call it upgraded fake news,” Dr. Tal Pavel, founder and director of CyBureau, an Israeli-based institute for the study of cyber policy, told Fox News Digital. “We already know the term fake news, which in most cases is visual or written content that is manipulated or placed in a false context. AI, or deepfake, is when we take those images and bring them to life in video clips.”
Pavel called the emergence of AI-generated deepfake visuals “one of the biggest threats to democracy.”
“It is not only during wartime but also during other times because it’s getting harder and harder to prove what is real or not,” he said.
In day-to-day life, Pavel noted, cases of deepfake misinformation have already come to light. He cited its use by criminal gangs carrying out fraud with voice-altering technology, or during election campaigns in which videos and voice-overs are manipulated to change public perception.
In war, he added, it could be even more dangerous.
“It’s a virgin land and we are only in the first stages of implementation,” said Pavel. “Anyone, with pretty low resources, can use AI to create some amazing photos and images.”
The technique has already been used in Russia’s ongoing war in Ukraine, said Ivana Stradner, a research fellow at the Foundation for Defense of Democracies who specializes in the Ukraine-Russia arena.
Last March, a heavily manipulated video of President Volodymyr Zelenskyy appearing to urge his soldiers to lay down their arms and surrender to Russia was posted on social media and shared by Ukrainian news outlets. Once it was discovered to be fake, the video was quickly taken down.
“Deepfake videos can be very realistic and if they are well crafted, then they are difficult to detect,” said Stradner, adding that voice cloning apps are readily available and real photos are easily stolen, changed and reused.
Inside Gaza, the arena is even more difficult to navigate. With almost no well-known, credible journalists currently in the Strip – Hamas destroyed the main crossing for people into the Palestinian enclave during its Oct. 7 attack, and the foreign press has not been able to enter – deciphering what is fact and what is fake is already a challenge, and with easy-to-use AI platforms it could get much harder.
However, Dr. Yedid Hoshen, who researches deepfakes and detection methods at the Hebrew University of Jerusalem, said such techniques are not yet foolproof.
“Creating images in itself is not hard – there are many techniques available out there, and anyone reasonably savvy can generate images or videos – but when we talk about deepfakes, we are talking about talking faces or face swapping,” he said. “These types of fake images are more difficult to create, and for a conflict like this, they would have to be made in Hebrew or Arabic, when most of the technology is still only in English.”
Additionally, said Hoshen, there are still tell-tale signs that set AI visuals apart from the real thing.
“It is still quite difficult to make the visuals in sync with the audio, which might not be detectable with the human eye but can be detected using automated techniques,” he said, adding, “small details like the hands, fingers or hair don’t always appear realistic.”
“If the image looks leery then it might be fake,” said Hoshen. “There is still a lot that AI gets wrong.”