A recent Chinese state media report claims that pilots from the country’s air force have been losing a not-insignificant share of the time to artificial intelligence-driven opponents in simulated dogfights. This sounds reminiscent of the very public outcome of the U.S. Defense Advanced Research Projects Agency’s AlphaDogfight Trials last year, work that has since been leveraged in more advanced demonstrations. It also underscores the People’s Liberation Army’s growing interest and investment in the development of advanced artificial intelligence and machine learning technologies more generally. Read More
#china-ai, #dod

Tag Archives: DoD
A national strategy for AI innovation
The Air Force’s AI Brain Just Flew for the First Time
The U.S. Air Force just took a major step toward a future crowded with AI-powered warplanes.
Late last month, the Air Force’s new Skyborg Autonomy Core System (ACS) flew a pilotless drone over Florida and the Gulf of Mexico, proving the AI could adhere to basic flight commands. The system will eventually lead to high-speed drones, powered by Skyborg, equipped with sensors, weapons, and other payloads to accomplish lonely—and dangerous—jobs that manned fighters used to carry out. Read More
Cyberspace Is Neither Just an Intelligence Contest, nor a Domain of Military Conflict; SolarWinds Shows Us Why It’s Both
Operations in cyberspace—at least those perpetrated by nation-state actors and their proxies—reflect the geopolitical calculations of the actors who carry them out. Some, like Joshua Rovner and Jon Lindsay, have argued that strategic interactions between rivals in cyberspace reflect an intelligence contest. Others, like Jason Healey and Robert Jervis, have suggested that cyberspace is largely a domain of warfare or conflict. The contours of this debate as applied to the SolarWinds campaign have recently been outlined by Melissa Griffith, who shows how cyberspace is sometimes an intelligence contest and at other times a domain of conflict, depending on the strategic approaches and priorities of particular actors at a given moment in time.
Rather than focusing on the binary question of whether a warfare or an intelligence framework better applies to cyberspace, then, we should recognize that activity in cyberspace takes on both of these characteristics at different times, which raises interesting questions about how the two dimensions relate to one another at the operational level. Read More
The Department of Defense’s Looming AI Winter
The Department of Defense is on a full-tilt sugar high about the potential for AI to secure America’s competitive edge over potential adversaries. AI does hold exciting possibilities. But an AI winter looms for the department, one that could keep it from joining the rest of the world in embracing an AI spring.
The department’s frenzy for AI is distracting it from underlying issues preventing operationalization of AI at scale. When these efforts fail to meet expectations, the sugar rush will collapse into despair. The resultant feedback loop will deprioritize and defund AI as a critical weapon system. This is known as an “AI winter,” and the Department of Defense has been here twice before. If it happens again, it won’t be because the technology wasn’t ready, but because the Department of Defense doesn’t know enough about AI, has allowed a bureaucracy to grow up between the people who will use AI and those developing it for them, and is trying to tack “AI-ready” components onto legacy systems on the cheap. Read More
AI.gov
The just-launched AI.gov is the home of the National AI Initiative and a connection point to ongoing activities to advance U.S. leadership in AI. The National AI Initiative Act of 2020 became law on January 1, 2021, providing for a coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security. The mission of the National AI Initiative is to ensure continued U.S. leadership in AI research and development, lead the world in the development and use of trustworthy AI in the public and private sectors, and prepare the present and future U.S. workforce for the integration of AI systems across all sectors of the economy and society. Read More
The AI arms race has us on the road to Armageddon
It’s now a given that countries worldwide are battling for AI supremacy. To date, most of the public discussion surrounding this competition has focused on commercial gains flowing from the technology. But the arms race over military applications of AI is accelerating as well, and concerned scientists, academics, and AI industry leaders have been sounding the alarm.
Compared to existing military capabilities, AI-enabled technology can make decisions on the battlefield with mathematical speed and accuracy, and it never gets tired. However, the countries and organizations developing this technology are only just beginning to articulate ideas about how ethics will influence the wars of the near future. Clearly, the development of AI-enabled autonomous weapons systems will raise significant risks for instability and conflict escalation. However, calls to ban these weapons are unlikely to succeed.
In an era of rising military tensions and risk, leading militaries worldwide are moving ahead with AI-enabled weapons and decision support, seeking leading-edge battlefield and security applications. The military potential of these weapons is substantial, but ethical concerns are largely being brushed aside. Already they are in use to guard ships against small boat attacks, search for terrorists, stand sentry, and destroy adversary air defenses. Read More
China leads the U.S. in three critical AI areas — data, applications, and integration — according to Bob Work
The US has a narrow lead over China in artificial intelligence, but the Chinese are catching up fast. In fact, they’re already at least narrowly ahead in three of six critical areas, the vice-chair of the National Security Commission on AI said today.
“We do not believe China is ahead right now in AI” overall, Robert Work said, speaking at a Pentagon press conference alongside Lt. Gen. Mike Groen, the director of the Joint Artificial Intelligence Center. But, Work went on, “look, AI is not a single technology, it is a bundle of technologies” – what professionals in the field call the “AI stack.”
As Work and the commission’s final report explain it, the AI stack has six interdependent layers. The foundational layer is not technology but people who know what to do with it. The second most fundamental layer is data, the raw material machine learning must ingest en masse to evolve. Then there’s hardware, on which everything else runs; algorithms, the complex and ever-evolving equations that drive machine learning; applications, which apply algorithms to specific functions; and integration, which ties different applications together. Read More
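As a purely illustrative sketch, the six interdependent layers described above can be modeled as an ordered structure. The names and the helper function below are assumptions made for clarity; they are not an official schema from the commission’s report.

```python
# Illustrative model of the six-layer "AI stack" as described by the
# National Security Commission on AI, ordered from most foundational
# to most applied. Layer names follow the article's description; the
# data structure itself is a hypothetical sketch, not an official API.
AI_STACK = [
    ("people",       "talent that knows what to do with the technology"),
    ("data",         "raw material machine learning must ingest en masse"),
    ("hardware",     "compute on which everything else runs"),
    ("algorithms",   "ever-evolving equations that drive machine learning"),
    ("applications", "algorithms applied to specific functions"),
    ("integration",  "ties different applications together"),
]

def layers_below(layer: str) -> list[str]:
    """Return the more foundational layers a given layer rests on."""
    names = [name for name, _ in AI_STACK]
    return names[: names.index(layer)]
```

Under this sketch, `layers_below("applications")` yields the four layers beneath it, which mirrors the report’s point that a lead in applications or integration still depends on strength in people, data, hardware, and algorithms.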
The future of AI is being shaped right now. How should policymakers respond?
The US government is contemplating how to shape AI policy. Competition with China looms large.
For a long time, artificial intelligence seemed like one of those inventions that would always be 50 years away. The scientists who developed the first computers in the 1950s speculated about the possibility of machines with greater-than-human capacities. But enthusiasm didn’t necessarily translate into a commercially viable product, let alone a superintelligent one.
And for a while — in the ’60s, ’70s, and ’80s — it seemed like such speculation would remain just that. The sluggishness of AI development actually gave rise to a term: “AI winters,” periods when investors and researchers got bored with the lack of progress in the field and devoted their attention elsewhere.
No one is bored now. Read More
The Perils of Overhyping Artificial Intelligence
In 1983, the U.S. military’s research and development arm began a ten-year, $1 billion machine intelligence program aimed at keeping the United States ahead of its technological rivals. From the start, computer scientists criticized the project as unrealistic. It promised big and ultimately failed hard in the eyes of the Pentagon, ushering in a long artificial intelligence (AI) “winter” during which potential funders, including the U.S. military, shied away from big investments in the field and abandoned promising areas of research.
Today, AI is once again the darling of the national security services. And once again, it risks sliding backward as a result of a destructive “hype cycle” in which overpromising conspires with inevitable setbacks to undermine the long-term success of a transformative new technology. Military powers around the world are investing heavily in AI, seeking battlefield and other security applications that might provide an advantage over potential adversaries. In the United States, there is a growing sense of urgency around AI, and rightly so. As former Secretary of Defense Mark Esper put it, “Those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come.” However, there is a very real risk that expectations are being set too high and that an unwillingness to tolerate failures will mean the United States squanders AI’s potential and falls behind its rivals. Read More