- Israel has been using AI to help it figure out which targets to strike in Gaza.
- The IDF relies on a "target factory" powered by an AI system that produces recommendations.
A week-long truce between Israel and Hamas ended Friday morning local time after the militant group was accused of violating the agreement. The Israeli military said that in the hours since, it has already struck over 200 targets in the Gaza Strip.
The resumption of fighting marks the start of a new phase of the war and raises questions about how Israel's bombardment of the besieged coastal enclave will proceed. Prior to the ceasefire, Israeli airstrikes triggered a global backlash, and statements from the military along with new reports published this week are revealing details about its relatively little-known targeting process.
After nearly a month of war, the Israel Defense Forces (IDF) revealed that it was relying on an artificial intelligence system to help determine where in Gaza it should bomb. The system is part of an operation known as the "target factory," which has increased the number of strike targets available to the military by over 70,000 percent since it became operational several years ago.
The "target factory," or more officially the Targeting Directorate, is a unit staffed by officers and soldiers who are tasked with cybersecurity, decoding, and research. It first began operating in 2019 as a way to find and identify more Hamas targets in the Gaza Strip. During the ongoing conflict, it has worked with other intelligence units across the Israeli military to quickly find targets so strikes can then be carried out against them.
"This is a plan to make intelligence fully accessible to the brigade and division levels, and to 'close circles' immediately, with help from an artificial intelligence system," a senior IDF official said in a November 2 statement.
According to the IDF, the AI system that powers the Targeting Directorate ingests intelligence and rapidly produces recommended targets for a researcher, with the aim of not just matching but exceeding what a human might otherwise suggest. The official stressed that the unit holds the targets it produces to a high standard.
"We do not compromise the quality of the intelligence and produce targets for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved," the senior IDF official said. "We work without compromise when defining who and what the enemy is. Hamas terrorists are not immune — no matter where they hide."
Before the targeting program became operational, Israel could produce 50 targets in Gaza in a year, Aviv Kochavi, the former IDF chief of staff, said in a June interview with the Israeli news outlet Ynet. But once the AI system was activated during Operation Guardian of the Walls in 2021, it could generate as many as 100 targets in a single day, half of which would be attacked.
The IDF said in the early November statement that the Targeting Directorate was working "around the clock" and had struck over 12,000 targets, with "thousands" more identified, in just a few weeks of the war. Nearly a month later, it's unclear exactly how much that figure has grown, although it is likely to be significantly higher.
A new investigation into the intense Israeli bombing campaign in Gaza by +972 Magazine and Local Call, a Hebrew-language news site, describes how AI contributes to the extensive targeting. The campaign has leveled entire neighborhoods in the coastal enclave and killed nearly 15,000 people, according to data from the Hamas-run health authorities.
The report, which was published on Thursday, cites Israeli intelligence sources in explaining that an AI system called "Gospel" produces recommendations for targeting homes or areas where suspected Hamas or Palestinian Islamic Jihad militants may reside, which can then be hit with airstrikes. The sources also said that the Israeli military knows in advance how many civilians might be killed in attacks on residences, and that strikes are determined based on assessments of the potential collateral damage.
One former Israeli intelligence officer told the outlets that the Gospel system creates a "mass assassination factory," with an emphasis on "quantity and not on quality."
It's unclear exactly what data and intelligence go into the AI system. The IDF did not immediately respond to Business Insider's queries about how the system determines who is and isn't a Hamas militant, what human checks are in place on the targeting program, or how many militants and civilians have been killed as a result of the AI assessments during the war.
Israel is not alone in bringing AI to the battlefield. The technology has been used amid the ongoing Russian invasion of Ukraine, and the US and China are also developing weapons that can select targets autonomously.
Nearly 50 countries have endorsed a declaration on the responsible use of AI and autonomy for military purposes that was launched earlier this year. The Pentagon says military AI capabilities include "decision support systems" and intelligence collection, in addition to weapons.
According to the US State Department, the declaration stresses that the military use of AI should be ethical, responsible, and enhance international security, and outlines several measures that states should implement as they develop autonomous systems.
Experts on AI and international humanitarian law told The Guardian that even when humans are involved in AI-assisted decision-making, they can develop an "automation bias" and lean too heavily on the systems. Israeli officials, they said, also may not be able to question or push back against a list of AI-produced targeting recommendations, nor would they necessarily know what criteria factored into them.
"Artificial intelligence poses extremely grave risks, and to a large extent, the genie is already out of the bottle with the release of numerous AI tools online," said Kochavi, the former IDF chief.
"Alongside the immense potential and tremendous contribution of artificial intelligence to humanity, it is capable of replacing and even alienating humans from decision-making altogether," he said, "including decisions about themselves."
Business Insider's Deep Cleaned Video Fellow Alisa Kaff provided translations for this report.