The third revolution in the field of war… Has artificial intelligence become more dangerous than nuclear weapons?
Translator's introduction:
The world is witnessing a sweeping technical revolution as artificial intelligence algorithms enter almost everything, from your smartphone to medical diagnostics. But have you ever wondered what happens when artificial intelligence enters the military field? Well, there is no need to guess: it has already begun, as AI-powered drones are currently being used in wars across the Arab world. What is the nature of a military tool endowed with artificial intelligence? What makes it more lethal than a nuclear missile? And how terrible could it become? These questions are answered by AI specialist Kai-Fu Lee, who served as president of Google China and previously worked at Apple and Microsoft.
With the twentieth anniversary of the September 11 attacks having passed, following the abrupt withdrawal of US forces from Afghanistan, it has become difficult to ignore the horrific reality of armed conflict and the challenge posed by asymmetric suicide terrorist attacks. Weapons technology has changed greatly over the past two decades, so thinking about the near future forces us to ask: what would happen if weapons technology advanced to the point where terrorists could replace the human element (those who blow themselves up)* with artificial intelligence? As someone who has studied and worked in AI for decades, I cannot help but be alarmed by this technological threat.
Killer robots… the third revolution of wars
Autonomous weapons, or lethal autonomous weapons, known as “killer robots”, are the third revolution in warfare after gunpowder and nuclear weapons. The progression from landmines to guided missiles was merely a prelude to true AI-enabled autonomy: searching for specific people, deciding to engage them, and then eliminating them entirely without the slightest human intervention.
The Israeli Harpy (a suicide drone)* is an autonomous (unmanned)* weapon programmed to fly into a specific area, search for particular targets, and then destroy them with a high-explosive warhead, using a capability known as “fire and forget”. A more provocative example appears in the short film “Slaughterbots”, which depicts a swarm of bird-sized drones that can hunt down and kill a specific person by firing a small explosive charge at his skull. Because these drones are so small, light, and intelligent, they cannot easily be caught, stopped, or destroyed.
These killer robots are not a fantasy but a real danger, as demonstrated when one such drone nearly killed the President of Venezuela in 2018. The deeper problem we face today is that experienced hobbyists can build these drones easily, at a cost of under a thousand dollars, because all the parts are now available for purchase online and the open-source technologies can be downloaded freely. This is an unintended consequence of AI and robotics becoming cheaper and more accessible. Imagine a political assassin that costs less than a thousand dollars! This is not some unlikely future danger; it is a clear danger that threatens us now.
We have already seen the rapid progress artificial intelligence has made in many areas, and with these developments it is likely that in the near future these autonomous weapons will proliferate rapidly. They will not only become smarter, more precise, faster, and cheaper, but will also learn new capabilities, such as forming swarms that move in coordinated teamwork, with redundancy and multiplied speed, making their missions virtually unstoppable. Alarmingly, a swarm of 10,000 drones capable of annihilating half a city now costs less than $10 million.
If we turn to the bright side of the story, these autonomous weapons do have several benefits. They can save soldiers' lives if machines wage war in their place. In the hands of responsible militaries, they can help soldiers target only the combatants in an enemy army, avoiding the inadvertent killing of friendly forces, children, and civilians (much as an autonomous car brakes when a collision is imminent to save the driver's life). They can also be used defensively, against assassins and other perpetrators.
But the drawbacks of these weapons, and the responsibilities their use places on humans, far outweigh these benefits. The greatest such responsibility is moral: nearly every ethical and religious system treats the taking of a human life as a grave matter requiring strong justification and scrutiny. Commenting on this, United Nations Secretary-General António Guterres stated, “The prospect of machines having the freedom to act and the ability to take a human life is repugnant and morally unacceptable.”
Sacrificing one's life for a cause, as suicide bombers do, is not easy, and it remains a great deterrent for anyone contemplating it. But with these unmanned weapons, no one has to give up their life in order to kill others. Another key issue is having a clear line of accountability: knowing who is responsible when something goes wrong, which is a central concern for soldiers on the battlefield. But when the killing is attributed to killer robots, whom do we hold to account? (Just as when an autonomous car hits a pedestrian: to whom is the incident attributed? Who is responsible? This is what we call the accountability gap.)
This ambiguity may ultimately absolve aggressors of accountability for committing atrocities and violating the law, which diminishes the gravity of war, lowers the threshold for waging it, and makes it easier for anyone to start a war at any time. The greater danger is the ability of these autonomous weapons to target individuals using facial or gait recognition, phone-signal tracking, or the Internet of Things (IoT, a term referring to the billions of physical devices around the world that are connected to the internet and can collect, send, and process data from their surroundings using sensors and processors)*. This could enable not only the assassination of one person but the genocide of any targeted group.
Increasing the autonomy of these deadly weapons without a deep understanding of the consequences would accelerate war (and thus its casualties)*, potentially leading to catastrophic escalation, including nuclear war. For all that artificial intelligence has achieved, it is still limited by its lack of common sense and of the human capacity to reason across domains. No matter how well trained these drones are, we still do not fully understand the consequences of using them.
In 2015, the Future of Life Institute (a research and outreach organization in Boston that monitors existential risks to humanity, especially those posed by artificial intelligence)* published an open letter on AI weapons, warning that “a global arms race is virtually inevitable.” This escalatory dynamic resembles other familiar races, whether the Anglo-German naval arms race or the Soviet-American nuclear arms race.
Powerful countries have always fought wars to prove their military superiority, and the advent of these unmanned weapons has emboldened nations to launch more of them, because such weapons offer many avenues to “win”: they are smaller, faster, stealthier, more lethal, and so on.
Moreover, pursuing military power through autonomous weapons can be far less costly, removing barriers to entry into such global conflicts. Smaller countries with advanced technology, such as Israel, have already joined this global race, fielding some of the most sophisticated military robots, including ones nearly the size of flies. And because everyone is now certain that their adversaries will build these killer robots, ambitious states feel compelled to join the race and compete.
Where will this arms race take us?
“The capabilities of autonomous weapons will be more constrained by the laws of physics — such as range, velocity, and payload — than by the AI systems that control them,” says Stuart Russell, a professor of computer science at the University of California, Berkeley. He expects these weapons to be manufactured and deployed in the millions; their agility and lethality would leave humans utterly defenseless. If this multilateral arms race is allowed to run its course, it will eventually become a race toward humanity's extinction or oblivion.
Although nuclear weapons pose an existential threat, they have remained in check, and have even helped limit conventional warfare, thanks to “deterrence theory” (the assumption that force is best answered by force: a state that achieves superior strength can impose its will on others, and only an opposing or superior force can restrain it). Because nuclear war means certain destruction for both sides, any state that launches a first nuclear strike will most likely face retaliation, and thus bring about its own destruction.
But with autonomous weapons, things are different: deterrence theory does not apply, because a surprise first attack may be untraceable. As discussed earlier, autonomous-weapon attacks can trigger rapid responses from other parties, and escalation can be so fast that it leads to nuclear war. Worse still, the first attack may be launched not by a state but by terrorists or other non-state actors, compounding the danger of these weapons.
Several solutions have been proposed to avert this existential catastrophe. The first is to keep the human element in the loop of war, ensuring that a human makes every decision to kill. But the power of autonomous weapons stems precisely from the speed and accuracy gained by removing the human from that loop, so this concession is unlikely to be accepted by any country that wants to win the arms race. Moreover, safeguards that keep humans involved depend heavily on the judgment and moral character of those individuals.
The second proposal is to ban these weapons outright, an approach backed by the Campaign to Stop Killer Robots. An open letter objecting to these weapons was signed by 3,000 people, including Elon Musk, the late Stephen Hawking, and thousands of experts in artificial intelligence.
Biologists, chemists, and physicists have made similar efforts in the past, rejecting the use of biological, chemical, and nuclear weapons. A ban will not be easy, of course, but previous bans on blinding lasers and on chemical and biological weapons have paid off. The main obstacle today is that the United States, Britain, and Russia oppose the idea, declaring that it is too early.
The third solution is to regulate the use of these autonomous weapons, which will unfortunately also be complicated by the difficulty of drafting technical specifications that are effective without being overly broad. Above all, we must first ask: what defines an autonomous weapon? And how will violations be reviewed?
Although these questions pose extremely difficult obstacles in the short term, creative solutions may be feasible in the long term, however hard they are to imagine now. In thinking about them, we face questions such as: could all countries agree that future wars be fought exclusively by robots, producing no casualties and aiming only to capture the traditional spoils of war and hand them over to the other side?
Who knows? We may see a future in which robots fight wars alongside humans, on the condition that the robots may only use weapons that disable the enemy's combat robots without harming people. Ultimately, we must be well aware that autonomous weapons (killer robots) represent a clear and present danger, one that grows more worrying, and more frightening, as they become smarter, more agile, more lethal, and easier to obtain.
In the end, what will accelerate the proliferation of these weapons is the arms race in which states are now engaged, a race that lacks the natural deterrence of nuclear weapons. We must recognize that autonomous weapons are an application of artificial intelligence that clearly and gravely contradicts our morals, and poses a real threat to humanity.
This article is translated from The Atlantic and does not necessarily represent the views of Medan.
Translation: Somaya Zaher.