Last month, the US Army asked private companies to contribute ideas to help improve its planned semi-autonomous, AI-driven tank targeting system. In the request, the Army sought input that could help the Advanced Targeting and Lethality Automated System (ATLAS) to “acquire, identify, and engage targets at least 3X faster than the current manual process.”
The language used by the Army didn’t sit well with everyone, particularly as people become increasingly concerned about the possibility of AI-powered murder robots.
According to Gizmodo, the US Army decided to add a disclaimer to its request instead of altering the original wording.
The Army noted that the Defense Department’s policy hasn’t changed, asserting that fully autonomous killing machines are not permitted.
Any robots created by the DoD will need to adhere to the strict standards set forth in the guidelines that dictate what a machine can and cannot be capable of doing independently.
The new disclaimer read: “All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.”
DoD Directive 3000.09 states that people must be able to “exercise appropriate levels of human judgment over the use of force.” As a result, fully autonomous killer robots that can choose to kill someone on their own go against the policy.
Ultimately, it keeps a person “in the loop,” ensuring that humans make all decisions regarding the use of lethal force.