AMU Military

Stephen Hawking Warns of Robot Based Warfare

Could AI-Based Robot Warfare Be Closer Than We Think?

Say hello to your new robot overlords…

An open letter to the UN calling for a ban on AI-based weaponry has been signed by a number of influential individuals, including scientists, business leaders, and other public figures concerned about the pace of this technology’s advancement and its possible effects on mankind.  Among the signatories are Stephen Hawking, Steve Wozniak, and Elon Musk.

The letter, announced at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) and published by the Future of Life Institute, indicates that AI-controlled weapons capable of searching out and eliminating people on their own will most likely be possible “within years, not decades,” a potential “third revolution in warfare, after gunpowder and nuclear arms.” It explains that even though AI may reduce the number of human casualties, the likelihood of war would increase because the materials needed to develop these weapons are inexpensive, creating a lower threshold for countries to go to war.  The letter goes on to say, “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable… autonomous weapons will become the Kalashnikovs of tomorrow.”


If this power fell into the wrong hands, the weapons could be used for horrific purposes such as acts of terror, ethnic cleansing, population control, or assassinations.

The letter concludes by stating that most researchers involved in AI development are not interested in weapons creation.  It adds that autonomous weapons should be viewed similarly to chemical and biological weapons and must be prohibited by the UN.

The announcement of this letter is not the first instance in which some of the signatories have expressed their concern about AI development.  Stephen Hawking, Steve Wozniak and Bill Gates have all previously voiced their opinions on this matter.

Stephen Hawking believes that certain current forms of AI have proven beneficial and influential for many people.  It is the unlimited and unexplored scope of AI’s potential that creates a risk.  AI could have the ability to evolve without the limitations that affect the human condition.  He warns, “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” During an interview, Hawking said that if machines are created with the ability to learn and think at a level equal to or higher than the average human, it could mean the end of humanity.  He stated, “The development of full artificial intelligence could spell the end of the human race.”


Even though Steve Wozniak, co-founder of Apple, is known for pushing the limits of technology, he still sees the danger in AI of that level.  At first he did not believe this type of machine intelligence would be seen any time soon, but now he is sure of it.  He states, “Computers are going to take over from humans, no question.”  He goes on to say, “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”  He can envision a future where machines treat humans as humans currently treat their pets.  With that thought in mind, he commented, “if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”

Bill Gates also believes there should be concern regarding superintelligent AI.  He stated, “First the machines will do a lot of jobs for us and not be super intelligent, that should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern.”  He is surprised and confused as to why some people cannot see the potential threat.  Gates said he does not want to hinder progress in AI development, but simply to raise awareness of the potential these machines have for taking over jobs and, once advanced enough, conflicting with “the goals of human systems.”


On the other side of the debate, Google Chairman Eric Schmidt believes that humans should not fear increasingly advanced AI developments.  He stated, “I think that this technology will ultimately be one of the greatest forces for good in mankind’s history simply because it makes people smarter.”  He explains that advanced artificial intelligence is already part of our everyday lives and that its further advancement will only improve our situation.  Schmidt stated, “I can’t think of a field of study, a field of research — whether it’s English, soft sciences, hard sciences or any corporation — that can’t become far more efficient, far more powerful, far more clever.”

What do you think?  Should battlefield robots be able to make autonomous decisions to engage human targets?  Sound off!

Wes O'Donnell

Wes O’Donnell is an Army and Air Force veteran and writer covering military and tech topics. As a sought-after professional speaker, Wes has presented at U.S. Air Force Academy, Fortune 500 companies, and TEDx, covering trending topics from data visualization to leadership and veterans’ advocacy. As a filmmaker, he directed the award-winning short film, “Memorial Day.”
