One of the more heated debates surrounding the tech sector today involves the use of automated machinery, i.e., robots, during times of war. Should we use robots and drones as a means of waging war and fighting enemy combatants? More importantly, what's to stop our enemies from using the same technology against us?

These questions pose a moral dilemma in our quest to end the human suffering caused by war and terrorism. On the one hand, with the help of robots and drones, we'd be able to penetrate enemy-controlled territory with ease and eliminate threats. All of this could be done without risking boots on the ground and our own people being killed in the process. In other words, no more families mourning a lost loved one.

On the other hand, as with any technology, once you open it up to wide-scale application, it's bound to eventually land in the hands of those with nefarious motives. All it would take is for one of those drones to be captured or shot down by an enemy combatant. From there, they could analyze both the hardware and software, reverse-engineer it, and turn our own weapons against us.

With this risk in mind, a couple of years ago the Future of Life Institute published an open letter calling for a ban on autonomous weapons. Since then, over 18,000 people have signed the letter, including notable figures throughout the sci-tech sector such as Stephen Hawking, Max Tegmark, Elon Musk, and many more.

This week, the United Nations Convention on Conventional Weapons began discussions to consider a treaty banning autonomous weapons. During the Convention, AI researcher Stuart Russell unveiled a short film that attempts to depict the negative implications of using automated weaponry. That short film is provided below.