Mattis Contemplates: What Happens if AI Technology Changes War-fighting Forever?


Back in August, Elon Musk said that we should be worried about the advancement of AI technology, calling it “more dangerous than North Korea.” Secretary of Defense James Mattis and leaders like him will have to grapple with that issue if AI technology someday takes over for the humans fighting a war.

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.” Elon Musk

What if it happens sooner?

Mattis has been getting briefings from a Silicon Valley group called the Defense Innovation Board (DIB), which was formed by Ash Carter. The board has been talking to him specifically about AI technology, according to C4ISRNET. Fortunately, one of his nicknames, “The Warrior Monk,” reminds us that his wisdom and ability to think through an issue are a massive protection.

“The character of war changes all the time. An old dead German [Carl von Clausewitz] called it a ‘chameleon.’ He said it changes to adapt to its time, to the technology, to the terrain, all these things.” James Mattis

But should the character of war change to remove humans from the equation, things could get dicey. Think sci-fi on steroids. Amazon, Facebook, and Google already use AI technology in their business practices, and assistants like Amazon’s Alexa and Apple’s Siri put AI in millions of homes and pockets.

The Future of Warfare?

Defense News reported a speech by Kersti Kaljulaid, President of Estonia, at the Munich Security Conference in Germany.

First, “we need to understand the risks — what we’re afraid of,” said Kaljulaid, pointing to three: someone using AI disruptively; intelligence going widespread; and AI depleting energy.

“Right now we know we want to give systems some right of auto decision-making when it has the necessary information to react,” Kaljulaid said. But once that is accomplished, “then we have the responsibility” to establish standards — the ability to apply reporting requirements to AI systems, or even to shut down systems that are deemed threatening.

The kind of standards gradually being put in place for cybersecurity “need to apply to the AI world, exactly the same way,” she said.

“Disruptively” isn’t the term I’d have used; “turning them into lethal autonomous weapons” is closer to the real fear. In other words, the Pentagon is currently focused on AI technology that helps pilots and troops make better decisions. But if humans were taken out of that loop, all bets are off.

“If we ever get to the point where it’s completely on automatic pilot and we’re all spectators, then it’s no longer serving a political purpose. And conflict is a social problem that needs social solutions.” James Mattis

He’s right, of course. The reasons for war are varied, but they nearly always come down to a quest for power: who controls what. Something to think about, and we’re glad Mattis is on it.
