Out-of-control killer robots. The concept was the premise for numerous Hollywood blockbusters in the second half of the twentieth century. And now the US military seems to be weighing the scenario in far less fictional terms. We are, they say, at risk of developing machines we can't control.

Gen. Paul Selva, America’s second-highest-ranking military officer, told senators Tuesday that one current objective is “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”

Sen. Gary Peters, a Michigan Democrat, had asked about Selva’s view on autonomous weapons systems that are capable of killing without a human pulling the proverbial trigger. A Department of Defense directive requires a human operator to make decisions on taking life, but that directive is set to expire later in 2017.

“I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life,” Selva told the Senate Armed Services Committee. Selva appeared before the committee to answer questions about his reappointment as the vice chairman of the Joint Chiefs of Staff.

“There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action,” Selva said. Yet he is “an advocate for keeping that restriction.”

Selva framed the discussion as a moral argument. How do we judge what makes a response appropriate? Even as the barriers between artificial and human intelligence fall, morality remains a uniquely human attribute.

He’s not alone. As CNN reports, “in July 2016, a group of concerned scientists, researchers and academics, including theoretical physicist Stephen Hawking and billionaire entrepreneur Elon Musk, argued against the development of autonomous weapons systems.” The group wants a “ban on offensive autonomous weapons beyond meaningful human control.”

The moral question seemed less important to Sen. Peters.

“Our adversaries often do not consider the same moral and ethical issues that we consider each and every day,” the senator said.

Perhaps, Peters suggested, we should research and develop the capabilities but not use them. Agreeing not to use them “doesn’t mean that we don’t have to address the development of those kinds of technologies and potentially find their vulnerabilities and exploit those vulnerabilities.”