
What AI Models for Warfare Really Look Like



Anthropic might have misgivings about giving the US military unfettered access to its AI models, but some startups are building advanced AI specifically for military applications.

Smack Technologies, which announced a $32 million funding round this week, is developing models that it says will soon surpass Claude’s capabilities when it comes to planning and executing military operations. And, unlike Anthropic, the startup appears less concerned with banning specific types of military use.

“When you serve in the military, you take an oath you’re going to serve honorably, lawfully, in accordance with the rules of war,” says CEO Andy Markoff. “To me, the people who deploy the technology and make sure it is used ethically need to be in a uniform.”

Markoff is hardly a regular AI executive. A former commander in the US Marine Forces Special Operations Command, he helped execute high-stakes special forces operations in Iraq and Afghanistan. He cofounded Smack with Clint Alanis, another ex-Marine, and Dan Gould, a computer scientist who previously worked as the VP of technology at Tinder.

Smack’s models learn to identify optimal mission plans through a process of trial and error, similar to how Google DeepMind trained its Go-playing program AlphaGo. In Smack’s case, that means running the model through various war game scenarios while expert analysts provide a feedback signal telling the model whether its chosen strategy is likely to pay off. The startup may not have the budget of a conventional frontier AI lab, but it is spending millions to train its first AI models, Markoff says.
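The trial-and-error loop described above can be sketched in miniature as a bandit-style update, in which an analyst's judgment stands in for the reward signal. This is purely illustrative: the scenario names, candidate plans, and `analyst_score` function are invented stand-ins, not Smack's actual system or data.

```python
import random

PLANS = ["flank_left", "flank_right", "frontal", "hold"]

def analyst_score(scenario, plan):
    """Stand-in for an expert analyst's feedback: 1.0 if the plan
    is judged likely to succeed in this scenario, else 0.0."""
    preferred = {"river_crossing": "flank_left", "urban_raid": "hold"}
    return 1.0 if preferred.get(scenario) == plan else 0.0

def train(scenarios, episodes=5000, lr=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Per-scenario value estimate for each candidate plan.
    q = {s: {p: 0.0 for p in PLANS} for s in scenarios}
    for _ in range(episodes):
        s = rng.choice(scenarios)
        # Epsilon-greedy trial and error: mostly exploit the current
        # best plan, occasionally explore a random one.
        if rng.random() < eps:
            p = rng.choice(PLANS)
        else:
            p = max(q[s], key=q[s].get)
        r = analyst_score(s, p)          # the expert feedback signal
        q[s][p] += lr * (r - q[s][p])    # nudge the estimate toward the reward
    return q

q = train(["river_crossing", "urban_raid"])
best = {s: max(q[s], key=q[s].get) for s in q}
```

After enough episodes, the highest-valued plan in each scenario matches the one the analyst consistently rewarded; a production system would replace the lookup table with a learned model and the toy scorer with real expert evaluations.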

Battle Lines

Military use of AI has become a hot topic in Silicon Valley after officials at the Department of Defense went head-to-head with Anthropic executives over the terms of a roughly $200 million contract.

One of the issues that led to the breakdown, which resulted in defense secretary Pete Hegseth declaring Anthropic a supply chain risk, was Anthropic’s desire to limit the use of its models in autonomous weapons.

Markoff says the furor obscures the fact that today’s large language models are not optimized for military use. General-purpose models like Claude are good at summarizing reports, he says. But they’re not trained on military data and lack a human-level understanding of the physical world, making them ill suited to controlling physical hardware. “I can tell you they are absolutely not capable of target identification,” Markoff claims.

“No one that I’m aware of in the Department of War is talking about fully automating the kill chain,” he claims, referring to the steps involved in making decisions on the use of deadly force.

Mission Scope

The US and other militaries already use autonomous weapons in certain situations, including in missile defense systems that need to react at superhuman speeds.

“The US and over 30 other states are already deploying weapon systems with varying degrees of autonomy, including some I would define as fully autonomous,” claims Rebecca Crootof, an authority on the legal issues surrounding autonomous weapons at the University of Richmond School of Law.

In the future, specialized models like the one Smack is working on could be used for mission planning purposes, too, according to Markoff. The company’s models are meant to help commanders automate much of the drudgery involved in sketching out mission plans. Planning military missions is still typically done manually with whiteboards and notepads, Markoff says.

If the US went to war with a “near peer” such as Russia or China, Markoff says, automated decision-making could give the US much-needed “decision dominance.”

But it’s still an open question whether AI could be used reliably in such circumstances. One recent experiment, run by a researcher at King’s College London, showed that LLMs had an alarming tendency to escalate nuclear conflicts in war games.


