In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn't know why.
Looking for answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.
There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials: methods that could protect law enforcement officers and others who had to collect these samples. And a pilot program uncovered new, critical information almost immediately. Read the full story.
—Adam Bluestein
This story is from the next edition of our print magazine. Subscribe now to read it and get a copy of the magazine when it lands!
Phase two of military AI has arrived
—James O’Donnell
Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT.
As I wrote in my new story, this experiment is the latest evidence of the Pentagon's push to use generative AI (tools that can engage in humanlike conversation) throughout its ranks, for tasks including surveillance. That push has raised alarms among some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes.