Recently on the BBC news website I stumbled across an article about the development of robot tanks controlled by machine-learning-based artificial intelligence (AI). Machine learning is an area of data science that uses statistical models to infer a decision-making procedure from training data describing past examples. Machine learning algorithms have a wide range of uses, such as retailers recommending products to a customer, email filtering and financial trading. Here I explore the use of machine learning in the AI of unmanned war vehicles.
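To make the idea of "inferring from training data" concrete, here is a minimal sketch of supervised learning in the email-filtering vein, assuming scikit-learn is available; the features, labels and numbers are entirely made up for illustration:

```python
# Minimal sketch of supervised machine learning: fit a model to labelled
# past examples, then use it to classify new, unseen inputs.
from sklearn.linear_model import LogisticRegression

# Invented features for past emails: [number of links, ALL-CAPS words]
X_train = [[8, 5], [7, 9], [6, 7], [0, 1], [1, 0], [2, 1]]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = not spam

model = LogisticRegression()
model.fit(X_train, y_train)             # infer the model from past data
print(model.predict([[9, 6], [1, 2]]))  # classify unseen emails: [1 0]
```

The model is never given explicit rules for what makes an email spam; it infers them from the examples, which is exactly the property that makes machine learning attractive, and risky, in targeting systems.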
In the 1980s the US Army attempted to develop an anti-aircraft tank with automated targeting, the M247 Sergeant York. The project was scrapped in 1985, however, after an incident in which the system reportedly targeted a group of high-ranking US Army officers. Yet in the time since this debacle, major advances have been made in the field of machine learning, such as the development of reinforcement learning, support vector machines, and long short-term memory (LSTM) recurrent neural networks. With these advances, among many others, multiple defense contractors have taken a renewed interest in AI-controlled war vehicles.
The UK-based defense technology group QinetiQ has been in talks with the British Army about supplying AI-controlled vehicles to carry heavy supplies and equipment, as well as to move into position to provide cover for human soldiers. These vehicles, however, are not programmed to fire weapons. According to the article, a QinetiQ spokesperson said the company never intends to make a vehicle capable of firing a weapon, owing to the difficulty of distinguishing between civilians and soldiers.
Meanwhile, the US Army is investing in an unmanned vehicle supplied by FLIR. The technology being developed here uses machine learning in conjunction with visual and thermal sensors: data from these sensors is used to train the vehicle to identify a human and what they are holding (i.e. whether or not they are holding a firearm). The aim is that the vehicle should be able to successfully identify, and take out, enemies.
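The article gives no detail of FLIR's actual models, but the general technique it describes is image classification over fused sensor channels. The sketch below is purely illustrative: a small convolutional network (in PyTorch) mapping a hypothetical RGB-plus-thermal frame to two classes, with the architecture, channel layout and class labels all being my own assumptions:

```python
# Illustrative sketch of a binary "armed vs unarmed" image classifier.
# This is NOT FLIR's system; it only shows the general technique of
# classifying a person from combined visual and thermal imagery.
import torch
import torch.nn as nn

class ThreatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # 4 input channels: RGB plus one thermal channel (an assumption)
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool to one value per feature map
        )
        self.head = nn.Linear(32, 2)   # logits: [unarmed, armed]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ThreatClassifier()
frame = torch.randn(1, 4, 64, 64)            # one fused RGB+thermal frame
probs = torch.softmax(model(frame), dim=1)   # class probabilities
print(probs)  # roughly uniform before any training
```

In practice such a network would be trained on large labelled datasets of people with and without weapons, and its errors on ambiguous frames are precisely what the ethical debate below is about.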
All of this, however, raises an interesting question about whether we should limit the application of machine learning. Going forward, ethical questions arise concerning the value of a soldier's life: if the AI is 99.99% accurate, is it worth risking your own soldiers' lives against the chance that the AI mistakes one of them for an enemy? Is that risk justified by the casualties avoided by deploying fewer soldiers? These are questions that must be asked if we continue down the path of AI-controlled weapons.
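To put a figure like 99.99% in perspective, a back-of-envelope calculation helps. Assuming each identification is an independent event (itself a strong simplification I am introducing), the chance of at least one misidentification grows quickly with the number of identifications made:

```python
# Probability of at least one misidentification when a 99.99%-accurate
# system makes n independent identifications (independence is assumed).
accuracy = 0.9999
for n in (100, 1_000, 10_000):
    p_any_error = 1 - accuracy ** n
    print(f"{n:>6} identifications -> P(at least one error) = {p_any_error:.1%}")
#    100 identifications -> P(at least one error) = 1.0%
#  1,000 identifications -> P(at least one error) = 9.5%
# 10,000 identifications -> P(at least one error) = 63.2%
```

Over a long deployment making thousands of identifications, even a very accurate system becomes more likely than not to make at least one mistake, which is why the per-decision accuracy alone does not settle the ethical question.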