Spy vs. AI

In the early 1950s, the United States faced a critical intelligence challenge in its burgeoning competition with the Soviet Union. Outdated German reconnaissance photos from World War II could no longer provide sufficient intelligence about Soviet military capabilities, and existing U.S. surveillance systems could not penetrate the Soviet Union’s closed airspace. This deficiency spurred an audacious moonshot initiative: the development of the U-2 reconnaissance aircraft. In only a few years, U-2 missions were delivering vital intelligence, capturing images of Soviet missile installations in Cuba and bringing near-real-time insights from behind the Iron Curtain to the Oval Office.

Today, the United States stands at a similar juncture. Competition between Washington and its rivals over the future of the global order is intensifying, and now, much as in the early 1950s, the United States must take advantage of its world-class private sector and ample capacity for innovation to outcompete its adversaries. The U.S. intelligence community must harness the country’s sources of strength to deliver insights to policymakers at the speed of today’s world. The integration of artificial intelligence, particularly through large language models, offers groundbreaking opportunities to improve intelligence operations and analysis, enabling the delivery of faster and more relevant support to decisionmakers. This technological revolution comes with significant downsides, however, especially as adversaries exploit similar advancements to uncover and counter U.S. intelligence operations. With an AI race underway, the United States must challenge itself to be first—first to benefit from AI, first to protect itself from enemies who might use the technology for ill, and first to use AI in line with the laws and values of a democracy.

For the U.S. national security community, fulfilling the promise and managing the peril of AI will require deep technological and cultural change and a willingness to rethink the way agencies work. The U.S. intelligence and military communities can harness the potential of AI while mitigating its inherent risks, ensuring that the United States maintains its competitive edge in a rapidly evolving global landscape. Even as it does so, the United States must transparently convey to the American public, and to populations and partners around the world, how the country intends to ethically and safely use AI, in compliance with its laws and values. — Read More

#dod, #ic

OpenAI launches ChatGPT Gov, hoping to further government ties

OpenAI has announced a new, more tailored version of ChatGPT called ChatGPT Gov, a service that the company says is meant to accelerate government use of the tool on non-public, sensitive data.

In an announcement Tuesday, the company said that ChatGPT Gov, which can run in the Microsoft Azure commercial cloud or Azure Government cloud, will give federal agencies increased ability to use OpenAI frontier models. The product is also supposed to make it easier for agencies to follow certain cybersecurity and compliance requirements, while exploring potential applications of the technology, the announcement said.

Through ChatGPT Gov, federal agencies can use GPT-4o, along with a series of other OpenAI tools, and build their own custom search and chat systems. — Read More

#dod, #ic

Microsoft launches AI chatbot for spies

Microsoft has introduced a GPT-4-based generative AI model designed specifically for US intelligence agencies that operates disconnected from the Internet, according to a Bloomberg report. This reportedly marks the first time Microsoft has deployed a major large language model in a secure setting, designed to allow spy agencies to analyze top-secret information without connectivity risks—and to allow secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot. But because of the inherent design limitations of AI language models, it may also mislead officials if not used carefully. — Read More

#ic

M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence

Artificial intelligence (AI) is one of the most powerful technologies of our time, and the President has been clear that we must seize the opportunities AI presents while managing its risks. Consistent with the AI in Government Act of 2020, the Advancing American AI Act, and Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, this memorandum directs agencies to advance AI governance and innovation while managing risks from the use of AI in the Federal Government, particularly those affecting the rights and safety of the public. — Read More

#dod, #ic

Department of Homeland Security Unveils Artificial Intelligence Roadmap

DHS Will Launch Three Pilot Projects to Test AI Technology to Enhance Immigration Officer Training, Help Communities Build Resilience and Reduce the Burden of Applying for Disaster Relief Grants, and Improve Efficiency of Law Enforcement Investigations.

… As part of the roadmap, DHS announced three innovative pilot projects that will deploy AI in specific mission areas. Homeland Security Investigations (HSI) will test AI to enhance investigative processes focused on detecting fentanyl and increasing efficiency of investigations related to combatting child sexual exploitation. The Federal Emergency Management Agency (FEMA) will deploy AI to help communities plan for and develop hazard mitigation plans to build resilience and minimize risks. And, United States Citizenship and Immigration Services (USCIS) will use AI to improve immigration officer training. — Read More

The Roadmap

#ic

National Artificial Intelligence Research Resource Pilot

The National Artificial Intelligence Research Resource (NAIRR) is a vision for a shared national research infrastructure for responsible discovery and innovation in AI. 

The NAIRR pilot brings together computational, data, software, model, training and user support resources to demonstrate and investigate all major elements of the NAIRR vision first laid out by the NAIRR Task Force.

Led by the U.S. National Science Foundation (NSF) in partnership with 10 other federal agencies and 25 non-governmental partners, the pilot makes available government-funded, industry, and other contributed resources in support of the nation’s research and education community. — Read More

#dod, #ic

What ChatGPT Can and Can’t Do for Intelligence

In November 2022, ChatGPT emerged as a front-runner among artificial intelligence (AI) large language models (LLMs), capturing the attention of the CIA and other U.S. defense agencies. Artificial general intelligence—AI with flexible reasoning like that of humans—is still beyond the technological horizon and might never happen. But most experts agree that LLMs are a major technological step forward. The ability of LLMs to produce useful results in some tasks, and entirely miss the mark on others, offers a glimpse into the capabilities and constraints of AI in the coming decade.

The prospects of ChatGPT for intelligence are mixed. On the one hand, the technology appears “impressive,” and “scarily intelligent,” but on the other hand, its own creators warned that “it can create a misleading impression of greatness.” In the absence of an expert consensus, researchers and practitioners must explore the potential and downsides of the technology for intelligence. — Read More

#ic

DHS Announces First-Ever AI Task Force

On Friday, Department of Homeland Security Secretary Alejandro Mayorkas announced the formation of a new resource group focused solely on combating the negative repercussions of the widespread adoption of artificial intelligence technologies.

The AI Task Force, unveiled during Mayorkas’s remarks before a Council on Foreign Relations event, will analyze adverse impacts surrounding generative AI systems such as ChatGPT as well as potential uses for the emerging technology.

… Some of the focal points of the AI Task Force highlighted by DHS include integrating AI in supply chain and border trade management, countering the flow of fentanyl into the U.S., and applying AI to digital forensic tools to counter child exploitation and abuse. — Read More

#cyber, #ic

AI Task Force Asks Congress for $2.6B to Stand Up R&D Hub

The White House-led National Artificial Intelligence (AI) Research Resource (NAIRR) Task Force is asking Congress for $2.6 billion to fund its plans to stand up a national research infrastructure that would broaden access to the resources essential to AI research and development (R&D).

In a final report released on Tuesday, the task force estimated the NAIRR – a Federal AI data and research hub – would need $2.6 billion in congressional appropriations over its first six years to reach initial operating capacity. — Read More

#china-vs-us, #dod, #ic

IARPA Kicks off Research Into Linguistic Fingerprint Technology

The Intelligence Advanced Research Projects Activity (IARPA), the research and development arm of the Office of the Director of National Intelligence, today announced the launch of a program that seeks to engineer novel artificial intelligence technologies capable of attributing authorship and protecting authors’ privacy.

The Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program represents the Intelligence Community’s latest research effort to advance human language technology. The resulting innovations could have far-reaching impacts, with the potential to counter foreign malign influence activities; identify counterintelligence risks; and help safeguard authors who could be endangered if their writing is connected to them. — Read More
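Authorship attribution of the kind HIATUS pursues is classically approximated with stylometry. The sketch below is a toy illustration only, not IARPA's actual method: it profiles each writing sample by its character trigram frequencies and attributes an unknown text to the candidate author with the most similar profile. All names and texts are hypothetical.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Frequency profile of character n-grams, a classic stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse frequency profiles."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def attribute(unknown_text, candidates):
    """Return the candidate author whose known writing is most similar."""
    profile = char_ngrams(unknown_text)
    return max(candidates, key=lambda name: cosine(profile, char_ngrams(candidates[name])))
```

Real systems layer richer features (syntax, vocabulary, discourse structure) and, per the program's goals, must also explain their attributions and support privacy-preserving rewriting, which this sketch does not attempt.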

#privacy, #ic