
Israel and Gaza: AI in the time of warfare

Israeli Airstrikes In Residential District Of Rafah, Southern Gaza

While debates on machine learning question the threat of artificial intelligence for humanity, AI-assisted bombardments show a new magnitude of algorithmically programmed death in times of war.

On Friday, 1 December 2023, the Israel Defence Forces (IDF) released a map of Gaza turned into a grid of more than 600 blocks. The blocks are supposed to help civilians identify active combat zones. The map, which Palestinians must access through a QR code amid power cuts and air strikes, is meant to alert them to targeted evacuation warnings for areas facing intense bombardment.

The map offers precision and operates as a public relations tool for shaping international opinion regarding civilian protection. Used by the IDF as evidence of its efforts to minimise civilian casualties, the interactive map is meant to show the world that, for the IDF, the residents of the Gaza Strip are not the enemy.

For Human Rights Watch, these evacuation orders ignore the reality on the ground and should not erase protections under the laws of war. On 5 December, United Nations Children’s Fund (Unicef) spokesperson James Elder stated that going to the so-called safe evacuation zones was a death sentence.

An investigation published on 30 November by the independent news platforms +972 Magazine and Local Call interrogated the wider use of AI in the Israeli war on Gaza. Based on interviews with current and former members of Israel’s intelligence community, it showed that the IDF’s intelligence units had shifted to become a “mass assassination factory” that operated under the guise of statistically precise and technically advanced intelligence tools.

The investigation exposes the use of a system called “Habsora” (“the Gospel”), which deploys AI technology to generate four categories of targets: tactical targets, underground targets, power targets and family homes. Targets are produced according to the probability that Hamas combatants are in the facilities. For each target, a file is attached that “stipulates the number of civilians who are likely to be killed in an attack”. These files provide numbers and calculated casualties, so when intelligence units carry out an attack, the army knows how many civilians are likely to be killed.

In an interview with the nonprofit news organisation Democracy Now!, Yuval Abraham, who wrote the investigation, said the use of AI relies on automated software to generate targets with life-and-death consequences. Although collateral damage was strictly limited in the past, these AI-generated targets are unprecedented: they are automated, rely on AI-powered data-processing technologies, allow potential collateral damage of hundreds of civilians and are produced “faster than the rate of attacks”.

Former IDF chief of staff Aviv Kochavi said the Targeting Directorate, established in 2019, processes data to generate actionable targets. Powered by “matrix-like capabilities”, the system generates “100 targets in a single day, with 50% of them being attacked” whereas, in the past, the intelligence unit would produce 50 targets a year. In this escalating process of AI-generated targeting, the criteria for acceptable civilian casualties were significantly relaxed.

On 6 December, Malika Bilal, host of the Al Jazeera podcast The Take, released an episode further investigating the Israeli army’s war protocols and its use of the Gospel. One central question she asks is how and when the limits on civilian casualties changed, and who chose to lower the restrictions.

Bilal interviewed Marc Owen Jones, associate professor of Middle East studies at Hamad bin Khalifa University in Doha, Qatar, who stated: “AI is being used to select people for death and destruction.” 

In Jones’s words, when the Israeli military trains its AI models, the intelligence units do so in the full knowledge that the resulting targets will include civilians.

“They are outsourcing people’s lives and people’s destiny to a piece of technology that has probably inherited the ideology of occupation and extermination,” he said. 

AI models are trained according to the precedents that have been set; in the case of the Israeli army, the killing of civilians is part of the model. A key feature of AI technology is that it depends on the data collected and the model used. Not only will the technology be biased if it learns from biased data, its predictions and actionable recommendations will also be biased if it is deployed in a context where the technology is employed to serve and justify a certain ideology.

The IDF leverages algorithmic accuracy while dismissing fairness procedures and accountability. The clinical efficiency of its AI-generated targets is portrayed by the political marketing wings of the mainstream media as an advanced tool that confers the right to kill in the name of technological sophistication.

Although AI is generally promoted as making warfare more precise, evidence from the lived experience in Gaza shows that saving lives is not part of the model. Instead, “maximum damage” is on the agenda, given that several hundred targets are bombed every day.

AI is used in most sectors of society. Huge numbers of papers and talks have been dedicated to AI-generated content and chatbots. But the use of AI for propaganda and mass killing remains a far less visible and less discussed concern.

Often, critical discussions of AI, as seen at web summits, press conferences, international colloquia and in interviews, fall into two categories. One seeks to prove that AI is not really intelligent, at least not in the human sense of knowledge making. The other presents AI as a threat because the technology can surpass human capabilities in the field of cognition. These intellectual propositions often fail to question the set of values and priorities that shape the theoretical models of their inquiry.

The dismissal of human reality in technology is a strategic mode of cancelling out accountability in the name of advancement. It allows for the implementation of a new form of obedience that is algorithmically driven, one in which the lives of the poorest and most vulnerable do not even register as important factors to be taken into consideration.

Thinking critically about AI, its creation, modelling and application, as well as its development as a technology of behavioural prediction, is a responsibility: it ensures that such a technology is not left in an apolitical blur.

In 2024, AI technology can develop skills by capturing data from every gesture, movement and interaction. The systemic tracking of people’s lives and the opaqueness of the models mark a new paradigm in the formation of truth, because censorship is enabled on a new scale.

AI-powered technology can both promote accuracy and hide the standards by which information is measured and circulated. It can also produce models that are opaque. As such, the new paradigm of AI asks us to ponder the societal values and sets of priorities we want to promote.

What matters in the regime of truth promoted by fascist ideologies is the accuracy of the data collected, as well as the computation, control and prediction of behaviour through systemic data surveillance. The data collected are portrayed as a measure of truth and function as a substitute for the reality of lived experience.

As philosopher Antoinette Rouvroy points out, in this digital regime, the individual is replaced by a set of “a-significant” data. The person as a singular individual with memory, experience and flesh no longer exists: they are transformed into a profile that can be tracked and whose behaviours can be preempted. 

Ways of thinking, living and existing depend on a technological arrangement between the tools that help us retain information and the tools that help us anticipate future outcomes. With the development of AI, the mind is now surrounded by smart devices that learn from our conduct, censor certain content and promote other content.

The fast development of AI technology requires that we question the ecosystems of devices that are shaping our psychic and collective existences, including the ways in which they are both undoing forms of social trust and implementing censorship.

In 2024, just over two-thirds of the world’s population will use the internet, while in 2020 one person in four did not have access to safe drinking water at home. According to a joint report by the World Health Organisation and Unicef, progress in drinking water, sanitation and hygiene is largely insufficient and unequal. 

What this parallel between digital networks and access to water shows is the distortion of international priorities in terms of civic and moral responsibility. Although water is a vital necessity, we see drinking water being used to cool vast data centres.

According to Google’s environmental report, published on 24 July 2023, the tech giant withdrew 28.765 billion litres of water in 2022; 98% of this was drinking water, two thirds of which was used to cool its data centres, where the equipment that runs its information systems is housed. The energy cost is alarming; the human cost is distressing.

And 75% of the world’s supply of cobalt, the material essential to the lithium-ion batteries in our cellphones, computers, tablets and electric cars, comes from the Democratic Republic of the Congo, where millions of people, children and adults, live and work in dehumanising conditions. In 10 years, more than five million people have died of disease and malnutrition.

To understand the shift in the making of digital-driven fascist regimes, where technological advancement supports mass manipulation and dehumanisation, we must understand the rise of artificial intelligence and algorithmic obedience. 

In 2024, the loosening of army protocols in the name of AI-driven accuracy serves a global economy in which international laws are hijacked in front of our eyes. We, the people fighting for freedom around the world, are the living witnesses of a digital regime that has drastic consequences for the future of justice and solidarity.

Anaïs Nony is a French theorist and philosopher. Her research focuses on the philosophy of technique, the foundations of digital technology and its effect on society. An associate researcher at the Centre for the Study of Race, Gender and Class at the University of Johannesburg, she is interested in the application of philosophical knowledge to understanding governance in the digital age.

Artificial intelligence can be used to improve people’s lives but it is also used to calculate civilian killings and support propaganda