A group of scientists has raised its voice against the use of AI (Artificial Intelligence) in war. The issue was raised by various scientists and ethics experts at a meeting of the American Association for the Advancement of Science held in Washington DC. The main concern with using AI in war is its unpredictable behavior: no one can say when these war-bots will malfunction, or what they might do when they malfunction. There is no sense in developing AI-powered robots that kill without any human intervention, the ethics experts said.
HRW (Human Rights Watch), an NGO (Non-Governmental Organization) listed among the 89 NGOs from 50 countries around the world that have together started a campaign titled ‘Stop Killer Robots’, appealed at a press meet for an international treaty against the use of AI robots in war. In an interview with BBC News, HRW’s Mary Wareham said that the campaign is not concerned about the development of humanoids; what it is actually concerned about is the use of artificial intelligence in war, and that autonomous weapon systems need to be banned.
These weapons have already started taking part in real wars, with drones as the best example. The military is also using aircraft that take off, fly, and land on their own. Add to that the robotic sentries used for tracking movements, and it is clear that AI has joined our defense, said Ryan Gariepy (Chief Technology Officer, Clearpath Robotics) in an interview with BBC News. Though his company takes contracts from the military, it does not develop robots that can act autonomously in wartime. He added that when these AI-powered robots fail, they fail in unpredictable ways.
The capability of AI is actually limited to tasks such as image recognition; when used in wars or battles, an AI robot cannot reliably distinguish between bystanders, observers, and the intended target. When such robots malfunction, they can kill anyone. In the words of Peter Asaro of New York’s New School, the use of AI robots raises questions of legal liability if a bot kills an innocent person. Licensing such robots to kill would be considered unlawful: these machines cannot make sense of life and death, and thus any mishap will be on the heads of their developers.