Sun Tzu, the Chinese military strategist credited as the author of the war strategy classic The Art of War, said: “Know thyself, know thy enemy. A thousand battles, a thousand victories.”

Those words date from the 5th century BC. The ‘art of war’ in the 21st century is drastically different, so much so that Christopher Coker, professor at the London School of Economics and author of Warrior Geeks, declared in a 2013 piece that technology is now making man the weakest link in warfare.

India, too, is now looking to enter the world of advanced drone warfare. New Delhi has shown renewed interest in buying some of the top US-made drones currently proving their worth for the American military around the world. With border management remaining a critical challenge for India, not just with Pakistan but with China as well, drones capable of both surveillance and strike operations would be a boon for the Indian armed forces.

Technological advancements over the past two centuries have largely arrived as by-products of military efforts to make warfare smarter, more accurate and more lethal. An aircraft can today take you from Delhi to Rio de Janeiro within hours in utmost comfort and safety, and the same technology is also used to drop bombs in war zones. Nuclear technology is used both for the mass destruction of humankind through its weaponisation and for the mass development of humankind through its immense potential to provide clean energy to millions.

War, after all, is widely considered by philosophers and scholars to be a human condition, a view that has held up over the course of mankind’s evolution. But the assumption that humans are an integral part of conflict, as they have been throughout history, is now being challenged in a way perhaps few would have envisaged. Automation in warfare is being driven by the advent of Artificial Intelligence, or AI. Machines on battlefields are being prepared to think for themselves and decide who is a target and who is not.

Drone warfare is one of the big success stories of machines taking over the more dangerous tasks on a battlefield, while pilots are drawn into a strange dystopian-utopian world of killing people from behind an office desk on a military base thousands of miles away. Drones today are seen as the ace of spades in military arsenals, particularly for the United States, which has used them with immense success for targeted killings in Afghanistan, Iraq, Yemen and Pakistan.

Much of America’s drone programme, though spread across military bases around the world, is actually run from the parched state of Nevada, home to the Nellis and Creech Air Force bases just outside the glitz and glamour of Las Vegas. From here, a drone pilot can operate his machine remotely, almost as comfortably as if he were sitting in his living room. This is an awkward phase for air warfare at least, in which the rush of flying a fighter plane into a war zone is slowly turning into an unlikely desk job.

A modern military drone such as the famous MQ-1 Predator can fly non-stop for 18 hours at a height of nearly 30,000 ft (drones have also successfully completed mid-air refuelling trials, meaning that in the near future they could be kept airborne almost indefinitely). It can keep circling a target over enemy territory and perform pinpoint surveillance, including high-definition video, audio, still photography and even heat-sensor-based imaging. This means that if you are sitting inside your home having a hot meal, the drone’s sensors can pick up the heat from your food and your body to fix your exact location. The aircraft can do all this in a hostile environment without endangering a pilot’s life, and at a fraction of the cost of a conventional fighter jet flown by a human crew.

American drone pilots have been known to struggle with the moral and ethical dilemmas of this new job: killing people remotely, then going home and trying to lead normal lives. The recent Hollywood film Eye in the Sky touched on this very subject, bringing this aspect of modern warfare to the public eye.

The infusion of Artificial Intelligence into drone warfare could wipe out these dilemmas altogether. Machines have no conscience, ethics or morals, and neither do algorithms.

Governments and developers have now started thinking well beyond such basic automation in warfare. The advent of AI is taking ideas about drones in a war zone to a more futuristic, questionable and, perhaps, worrying place. Programmes are being developed to integrate AI with drones; put simply, work is under way to enable drones to select their targets themselves, in effect removing human intervention from the decision (not to be confused with removing human oversight).

According to a June 2016 article from the University of Cincinnati (UC), a recent man-versus-machine simulation at the university saw an AI system win simulated aerial combat exercises against experienced US pilots. This, however, is automation at an operational level: a direct challenge to the limitations of the human brain and an attempt to push the envelope of its capacities. The more worrying prospect, as mentioned above, is the development of machines that can think for themselves, specifically in a war zone. Strikingly, the UC project achieved this feat of defeating human pilots in combat simulation using the processing power of a tiny computer called the Raspberry Pi, available in the market for as little as $35.

Lethal Autonomous Weapons Systems (LAWS) are now being funded around the world at an advanced level. The intent is clear: to make machines plug the gaps in human thinking, physical endurance and so on. Experts have said that ‘self-thinking’ war machines will be a reality in a matter of years, not decades. In fact, many are already in operation, as P.W. Singer, former director of the 21st Century Defense Initiative at the Brookings Institution, explains so well in his paper titled ‘Robots at War’. Currently, international law requires human intervention in engaging targets during wartime, and this is perhaps the only major reason keeping full automation of drones at arm’s length.

The technology exists and work is being conducted. The British Taranis programme, led by BAE Systems, for example, is being developed as a futuristic evolution of the drone, to be battle-ready by 2030. Its programme manager, Clive Morrison, has said that the team is working on the presumption that autonomous strike capabilities might be needed in the future. This is a strong indication that it is indeed only international law keeping autonomous targeting out of drones for now, and that the technology for drones to decide on their own whom or what to strike may already be close to fruition.

Influential scientists and entrepreneurs such as Stephen Hawking and Elon Musk, of Tesla and SpaceX fame, have called for more robust debate on how AI can be incorporated into humanity as a beneficial force rather than a destructive one. Hawking warned in 2014 that the development of AI could “spell the end of the human race”. While Hawking’s is perhaps an alarmist proposition about the future of AI, there is no doubt that the exit of human conscience, ethics and intuition from war machines, today held in place by the international legal framework, is a possibility with grave consequences for mankind in a world already fraught with so many fault lines.