Leidos Holdings Inc.

28/03/2024 | Press release | Distributed by Public on 28/03/2024 19:50

A hackathon produces AI-enabled cJADC2 solutions for the battlefield

It happened at the BRAVO 11 Bits2Effects hackathon, organized by the Office of the Secretary of Defense Chief Digital and AI Office, Defense Innovation Unit, U.S. Indo-Pacific Command, U.S. Army Pacific Command, and the U.S. Air Force.

"The hackathon was a chance to build something in five days that moves the needle on important use cases," says Charles Ott, a Leidos solutions architect specializing in national security applications, one of the Leidos experts who participated in the hackathon.

The hidden meaning in M2M

Ott's specific challenge was helping commanders pull meaningful information out of a vast sea of battlefield data. He notes that most of the data pouring in is machine-to-machine (M2M), generated by devices swapping acknowledgments and performing other low-level communications tasks of little significance to a commander.

"Just think about the number of messages zipping around the battlefield coming from everything from jets to hand-held radios," says Ott. "There's a portion of those messages that have important meaning to someone who needs a picture of the battle, but it's not in a human-readable form."

The solution that Ott and others on his team at the hackathon worked out is something he refers to as a "combat reporter": a system capable of sifting through all the noise, finding what matters, and summarizing it in language that a commander can quickly digest. It's a solution built around a large language model (LLM), a type of generative AI that can process and produce information in plain language.

"The LLM can quickly learn how to take live data from tactical systems and bubble it up into analysis," says Ott. "Then a commander can query the system in plain English to get answers to questions like what are the high-priority targets, and who is engaging what target with what resources."
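The two-stage idea Ott describes, filtering out low-level M2M chatter and then handing the remainder to an LLM in plain language, can be sketched as follows. This is a minimal illustration, not the hackathon's actual code: the message types, source names, and prompt wording are all hypothetical, and the LLM call itself is left out.

```python
from dataclasses import dataclass

@dataclass
class TacticalMessage:
    source: str    # hypothetical device name, e.g. "jet-04"
    msg_type: str  # hypothetical message category
    payload: str

# Low-level machine-to-machine traffic of little significance to a commander
LOW_SIGNAL_TYPES = {"ACK", "KEEPALIVE", "HANDSHAKE"}

def filter_significant(messages):
    """Drop M2M chatter; keep tactically meaningful reports."""
    return [m for m in messages if m.msg_type not in LOW_SIGNAL_TYPES]

def build_prompt(messages):
    """Assemble a plain-language prompt for the LLM from live tactical data."""
    lines = [f"[{m.source}] {m.msg_type}: {m.payload}" for m in messages]
    return ("Summarize the following battlefield reports for a commander. "
            "Highlight high-priority targets and who is engaging them.\n"
            + "\n".join(lines))

feed = [
    TacticalMessage("jet-04", "ACK", "msg 8812 received"),
    TacticalMessage("jet-04", "TARGET_REPORT", "armored column at grid NK1234"),
    TacticalMessage("radio-17", "KEEPALIVE", "ping"),
    TacticalMessage("battery-2", "STATUS", "engaging grid NK1234"),
]

significant = filter_significant(feed)
prompt = build_prompt(significant)
```

In this sketch, `prompt` is what would be sent to the LLM; the commander's plain-English questions would then be answered against the model's running analysis.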

RELATED READING: Highlights from Generative AI Palooza

Ott notes that working with LLMs and other AI requires vigilance against "drift," the tendency of a model over time to start producing occasional false or misleading responses. Building trustworthy AI is a challenge taken on by Leidos' Framework for AI Resilience and Security (FAIRS), which is designed to make AI results predictable and resilient so they don't put humans or missions at risk. Ott adds that there are several ways to achieve that extra reliability with an LLM, including retraining the model on better data, giving the model less leeway to be "creative," or switching to a different LLM.
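One common way to watch for the kind of drift Ott describes is to periodically replay a fixed set of vetted prompts and flag the model when its answers diverge from reference answers. The sketch below is a hypothetical illustration of that idea, not part of FAIRS: the golden prompt, the crude word-overlap score, and the threshold are all assumptions.

```python
# Hypothetical regression set: prompts with vetted reference answers.
GOLDEN = {
    "Which grid is battery-2 engaging?": "NK1234",
}

def drift_score(model_answer: str, reference: str) -> float:
    """Crude word-overlap score; a real system would use a stronger metric."""
    a = set(model_answer.lower().split())
    b = set(reference.lower().split())
    return len(a & b) / max(len(b), 1)

def check_drift(ask_model, threshold=0.5):
    """ask_model: callable mapping a prompt to the model's answer.
    Returns the prompts whose answers fell below the threshold."""
    failures = []
    for prompt, reference in GOLDEN.items():
        if drift_score(ask_model(prompt), reference) < threshold:
            failures.append(prompt)
    return failures
```

A run that returns an empty list means the model still agrees with its references; any returned prompts would trigger the remedies Ott mentions, such as retraining or reducing the model's creative leeway.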

Weaving a resilient communications web

Ott points out that being able to pull battlefield data into an LLM requires ensuring that the different machines transmitting the data aren't using proprietary protocols that the system can't access.

"Open architectures that provide interoperability are becoming more important, especially to execute cJADC2," he says.

In fact, coping with the different protocols that might be encountered on the battlefield was the aim of the second application Leidos experts tackled at the hackathon. Jointly developed by Edson Dos Santos, a Leidos senior software engineer, and his team, the effort was built around STITCHES, a DARPA-developed project now overseen by the U.S. Air Force that can automatically generate software to connect two machines speaking different protocols.

"It's a way to find a communications path from point A to point B," explains Dos Santos, "even if the data has to first be translated into many different protocols to get there."
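Finding such a path can be modeled as a graph search: each protocol is a node, each available translation is an edge, and the chain of translations is the shortest path between the two endpoints. The sketch below illustrates only that path-finding idea, not STITCHES itself (which also auto-generates the translation software); the translator table and the "TCP-JSON" and "ArtilleryNet" protocol names are hypothetical.

```python
from collections import deque

# Hypothetical table of available one-way protocol translations.
TRANSLATORS = {
    "Link16": ["JREAP"],
    "JREAP": ["TCP-JSON"],
    "TCP-JSON": ["ArtilleryNet"],
}

def find_translation_path(src, dst):
    """Breadth-first search for a chain of protocol translations
    from src to dst; returns the path, or None if none exists."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TRANSLATORS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Here data originating on a Link 16 network would reach the artillery side only after hopping through two intermediate translations, which is exactly the "point A to point B" routing Dos Santos describes.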

To show how STITCHES could be put to good use on the battlefield, Dos Santos and his hackathon team came up with software that can provide real-time communications between aircraft and artillery units relying on incompatible communications protocols.

"Now a jet can tell the artillery unit what targets are in and out of its range, and the artillery unit can redirect an aircraft to a different target," he says.