Industry unites against AI-enabled weaponry.
Experts have come together to condemn the autonomous and AI-assisted weapons arms race
Following on from yesterday’s article looking at PwC’s report into the future of business and AI, industry leaders in technology have come together to sign an accord against AI weaponry.
The ‘Lethal Autonomous Weapons Pledge’ was drawn up by the Future of Life Institute, whose mission statement is: “To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.”
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.” – Elon Musk, Tesla and SpaceX Founder.
The pledge, which is open for any member of the public to sign online, has gathered thousands of signatures from individuals and hundreds of companies and institutions, including many industry experts.
The letter condemns weapons with the ability to kill without human intervention, stating: “we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.”
It goes on to say that “lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons” and thus require different regulation, and warns that they risk sparking a new Cold War-style arms race.
Alongside the pledge, but not related to it, 26 countries have so far called at the UN for a ban on autonomous weapons. They include: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela, Zimbabwe.
Among those signing the Future of Life Institute pledge are Elon Musk, CEO and co-founder of Tesla and SpaceX, as well as Demis Hassabis and Shane Legg, co-founders of Google’s DeepMind AI operation.
Elon Musk is well known for voicing his opinions on AI and weapons. In 2017, Musk and Alphabet’s Mustafa Suleyman led a group of 116 specialists in an open letter urging the UN to ban “morally wrong” autonomous weapons.
Are these weapons already in development?
Despite industry condemnation and potential intervention from the UN, the AI arms race has been heating up for years. The recent increase in the use of unmanned drones means the idea of an unmanned war is no longer science fiction.
Current US military policy on AI-assisted weapons leaves the door open to the development of autonomous weapons, requiring only that “Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
Over in the Russian Federation, arms manufacturer Kalashnikov Group announced in 2017 that it had developed a “fully automated combat module” capable of identifying, though not engaging, targets without human intervention.
The Kalashnikov Group is perhaps best known for the Kalashnikov assault rifle, developed in 1947 and still in use today by armies and terror groups alike.
You can read the full Future of Life Institute letter below:
“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.
We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”