The state of autonomous weapons in the world today


In November 2020, leading Iranian nuclear scientist Mohsen Fakhrizadeh was assassinated by Israeli operatives using an AI-assisted, remote-controlled machine gun, according to a New York Times report. The report confirmed earlier claims by the Iranian Revolutionary Guard that an “intelligent satellite-guided machine gun” was used to kill the scientist while he was driving with his wife. Fakhrizadeh was shot four times; his wife was uninjured.

Fakhrizadeh’s assassination


The weapon used to kill Fakhrizadeh was mounted on a robotic apparatus reportedly weighing about a ton. The entire system was built into the back of a truck fitted with multiple cameras to give the operators a full view of the area. The truck was also packed with explosives so the evidence could be destroyed once the mission was complete or compromised.

The weapon was linked to an Israeli command center via a satellite communication relay, and an operator aimed at the target through a computer screen. An AI system tracked the movement of Fakhrizadeh’s car and compensated for a 1.6-second communication delay. Facial recognition software built into the AI system was designed to target only Fakhrizadeh and leave his wife unharmed.

While the attackers carried out the mission and the truck exploded after the assassination, the intelligent rifle system was not completely destroyed. The Iranian Revolutionary Guards used the remains of the rifle to investigate the attack, and the investigation revealed some interesting facts about modern warfare.

In a similar but failed attempt in 2018, AI-controlled drones nearly killed the then President of Venezuela, Nicolás Maduro, when two explosive-laden drones detonated near him at a public event.

Advances in AI have also accelerated the development of autonomous weapons. In the future, these weapons are expected to become more precise, faster, and cheaper. If this development proceeds ethically and responsibly, such machines could reduce casualties, help soldiers engage only combatants, and be deployed defensively against attackers.

In an article for The Atlantic, Taiwan-born American computer scientist Kai-Fu Lee called autonomous weapons the third revolution in warfare, after gunpowder and nuclear weapons. He wrote that true AI-powered autonomy involves full engagement in killing, which includes “searching for, choosing to engage, and eradicating another human life entirely without human involvement.”

This year, the US Defense Advanced Research Projects Agency (DARPA) tested fully autonomous, AI-based armed drones. In August, an exercise with AI-controlled drones and tank-like robots took place in Seattle. The drones received high-level instructions from human operators but acted autonomously for tasks such as finding and destroying targets. The exercise demonstrated the value of AI systems in combat situations too complex and dangerous for human intervention.

The US is not alone: many other countries are actively researching the use of AI in warfare, and China is arguably leading the race. According to a Brookings Institution report, the Chinese military has invested significantly in robotics, swarming, and other AI-based weapons. Although the sophistication of these systems is difficult to gauge, the report states that the weapons can have varying degrees of autonomy.

Inherent dangers

Activists and experts across the board believe that the use of autonomous weapons carries many dangers, sometimes far outweighing the benefits. In a recent interview, Kai-Fu Lee said, “The single greatest threat is autonomous weapons,” noting that warfare is the only context in which AI is trained to kill people. Lee warned that autonomous weapons, which are becoming increasingly advanced and affordable, could wreak havoc and could even be used by terrorists to commit genocide. “We need to figure out how to ban it or regulate it,” he added.

In 2015, technology and business leaders such as Elon Musk and Steve Wozniak, along with some 200 other AI researchers, signed an open letter proposing a complete ban on autonomous weapons. The proposal was supported by more than 30 countries; however, a report commissioned by the US Congress advised the country to oppose the ban.

Is regulation an option?


Human Rights Watch and other non-governmental organizations launched the Campaign to Stop Killer Robots in 2013. Since then, concern about fully autonomous weapons has risen on the international agenda, and they have been recognized as a major threat to humanity deserving urgent multilateral action.

Since 2018, United Nations Secretary-General António Guterres has called on states to ban autonomous weapons that could target and attack people on their own, calling them “morally repugnant and politically unacceptable”.

Discussions on lethal autonomous weapons systems (LAWS) have taken place since 2014 under the Convention on Certain Conventional Weapons (CCW), a legally binding instrument whose state parties meet each year. Almost 30 countries have called for a ban on fully autonomous systems, and 125 member states of the Non-Aligned Movement have demanded a “legally binding international instrument” on LAWS. Critics, however, doubt that a full ban will come into force any time soon.

In a previous interview with Analytics India Magazine, Trisha Ray, an Associate Fellow at the Observer Research Foundation whose research focuses on LAWS, said, “The CCW is unlikely to be calling for a ban, but is calling for safeguards consistent with international humanitarian law, including reasonable human scrutiny.”


Shraddha Goled

I am a journalist with a postgraduate degree in Computer Network Engineering. When I’m not reading or writing, you can find me scribbling to my heart’s content.

