We have entered a time in our history in which advanced technologies based on Artificial Intelligence (AI) may become increasingly prone to unintended actions that threaten the safety and autonomy of human beings. And those of us who believe in the safety and autonomy of human beings (trust me, most at the top of the current power pyramid don’t) need to become increasingly aware and vigilant of this growing threat.
The arguments for and against the unfettered development of AI and its integration into military capabilities are as follows: those in favor of the development of AI simply point to the increased efficiency and accuracy bestowed by AI applications. However, their unrestrained zeal tends to be based on a rather naive (or feigned) trust that government, corporations, and military intelligence will police themselves to ensure that AI is not unleashed into the world in any way that is harmful to human individuals. The other side of the argument grounds its fundamental mistrust of current AI development in the well-documented notion that our current corporate, governmental, and military leaders each operate on their own narrow agendas, which give little regard to the safety and autonomy of human beings.
Nobody is arguing against the development of Artificial Intelligence as such, for application in ways that will clearly and incontestably benefit humanity. However, as always, the big money seems to be made available in support of war, of one group of humans seeking dominance and supremacy over another, rather than for applications that will benefit all of humanity and actually help to foster peace on the planet.
Ex-Google Engineer Speaks Out
Perhaps there is no way to fully prevent militaries from researching AI enhancements to their applications. However, there seems to be one clear line of demarcation that many feel should not be crossed: giving AI programs sole authority to determine whether a given individual or group of human beings should be killed.
Software engineer Laura Nolan resigned from Google last year in protest after being sent in 2017 to work on Project Maven, a project intended to dramatically enhance US military drone technology by putting much more of the onus on AI to determine who and what should be bombed or shot at. She felt that her work would push forward a dangerous capability. She could see that the ability to convert military drones, for instance, into autonomous, non-human-guided weapons “is just a software problem these days and one that can be relatively easily solved.”
Thanks to the protestations and resignations of brave people like Laura Nolan, and after more than 3,000 of its employees signed a petition against the company’s involvement, Google allowed the Project Maven contract to lapse in March of this year. It should be clear to all of us that these big corporate giants do not make ethical decisions on their own, since they are fundamentally amoral, and that concerned human beings must continue to speak up and take action in order for humanity’s interests to be considered.
Since resigning, Nolan has continued her activism amidst news about the development of “killer robots,” AI machines designed to operate autonomously on the battlefield with the capacity to kill large swaths of enemy combatants. She has called for all AI killing machines not operated by humans to be banned. She joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons.
Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is deployed, Nolan said killer robots have the potential to do “calamitous things that they were not originally programmed for”:
The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.
There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous. (source)
Pledge From AI Researchers?
Certainly we see mainstream headlines like ‘Top AI researchers say they won’t make killer robots,’ where pledges have reportedly been made:
More than 2,600 AI researchers and engineers have signed a pledge to never create autonomous killer robots, published today by the Future of Life Institute. Signees include Elon Musk, Alphabet’s DeepMind co-founders Mustafa Suleyman, Demis Hassabis, and Shane Legg, as well as Google’s Jeff Dean, and the University of Montreal’s Yoshua Bengio.
However, this does not mean that we can desert our posts and trust that corporations which could make billions of dollars from contracts to advance such automated applications will decline to pursue them if they think they can get away with it. Indeed, it is the watchful eyes and powerful words of conscientious people that have so far prevented this from occurring.
Events in our world such as the emergence of autonomous ‘killer robots’ are ominous and foreboding, but we need not shrink away from this kind of news in a state of fear and resignation. If we can see, in the bigger picture, that it is ultimately a projection of our collective consciousness that brings these events into being, then we can take these events to be a trigger for each of us to determine exactly what kind of world we want to live in going forward, and have that determination clearly reflected in our thoughts, words, and actions. In this way, we participate in the larger awakening process and help to move humanity forward in the transition to a world of greater peace and harmony.
Reprinted with permission from Collective Evolution.