There are four major findings about operations critical to the effectiveness and success of future intelligence operations in 2035-2050 and beyond. The findings apply broadly not only to military intelligence, but also to the greater Intelligence Community (IC), the DoD, and by extension to several other elements of our federal national security framework. Realizing the Future Intelligence CONOPS 2035-2050 projection, however, depends on addressing these findings. If they are not addressed, it is highly probable that intelligence operations will continue to mimic today's reactive posture.
The four findings:
— The IC and DoD, created in 1947, continue to function in a primarily reactive posture, using the industrial-age processes of the era in which they were created.
— Information & democratization of technology has changed the character of warfare.
— The 2018 National Security and National Defense Strategies address the new character of warfare.
— Immediate investments are required to enable the success and effectiveness of future intelligence operations in 2035-2050 and beyond. Read More
National Security Commission on Artificial Intelligence (NSCAI): Initial Report
The National Security Commission on Artificial Intelligence, which is tasked with researching ways to advance the development of AI for national security and defense purposes, released its initial report to Congress on July 31.
The panel has 15 members, led by Chairman Eric Schmidt, the former head of Google’s parent company Alphabet, and Vice Chairman Robert O. Work, a former deputy secretary of defense who served in the Obama administration. Read More
National Security Commission on Artificial Intelligence (NSCAI): Interim Report
In the report, the government-commissioned panel notes many times that China is investing more in AI and is taking advantage of the U.S. to “transfer AI know-how.” The report also says that AI infrastructure within the Department of Defense “is severely underdeveloped.”
The Commission raised concerns about the progress China has made. The report also said the U.S. government still faces enormous work before it can transition AI from “a promising technological novelty into a mature technology integrated into core national security missions.” Read More
12 Must-Watch TED Talks on AI
For all of you who are technology lovers, AI enthusiasts, and casual consumers with piqued interest, don’t miss your chance to learn about the newest advancements in artificial intelligence and an opportunity to join the discussion on the ethics, logistics, and reality of super-intelligent machines. Explore the possibilities of super-intelligence improving our world and our everyday lives while you dive into this great list of TED Talks on artificial intelligence. We have compiled a list of the best TED Talks on AI, providing you with the information you seek on AI technological developments, innovation, and the future of AI.
Here are the best TED Talks for anyone interested in AI. Read More
Profiling BGP Serial Hijackers: Capturing Persistent Misbehavior in the Global Routing Table
BGP hijacks remain an acute problem in today’s Internet, with widespread consequences. While hijack detection systems are readily available, they typically rely on a priori prefix-ownership information and are reactive in nature. In this work, we take a new perspective on BGP hijacking activity: we introduce and track the long-term routing behavior of serial hijackers, networks that repeatedly hijack address blocks for malicious purposes, often over the course of many months or even years. Based on a ground truth dataset that we construct by extracting information from network operator mailing lists, we illuminate the dominant routing characteristics of serial hijackers and how they differ from legitimate networks. We then distill features that can capture these behavioral differences and train a machine learning model to automatically identify Autonomous Systems (ASes) that exhibit characteristics similar to serial hijackers. Our classifier identifies ≈900 ASes with similar behavior in the global IPv4 routing table. We analyze and categorize these networks, finding a wide range of indicators of malicious activity and misconfiguration, as well as benign hijacking activity. Our work presents a solid first step towards identifying and understanding this important category of networks, which can aid network operators in taking proactive measures to defend themselves against prefix hijacking and serve as input for current and future detection systems. Read More
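Purely as an illustration of the approach the abstract describes, the sketch below shows how per-AS behavioral features could feed a supervised classifier that flags networks whose long-term routing behavior resembles that of serial hijackers. It is a minimal sketch, not the authors’ implementation: the feature names, the input file, and the choice of a random-forest model are assumptions standing in for the features and classifier described in the paper.

```python
# Minimal sketch (not the paper's code): classify ASes as "serial hijacker-like"
# from longitudinal routing-behavior features. Feature names and the input file
# are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row describes one AS; 'label' is 1 for known serial hijackers
# (ground truth built from operator mailing lists), 0 for legitimate networks.
df = pd.read_csv("as_behavior_features.csv")  # hypothetical file
features = [
    "prefix_origination_volatility",   # churn in originated prefixes over time
    "median_prefix_visibility",        # fraction of vantage points seeing the prefixes
    "avg_announcement_duration_days",  # how long originated prefixes stay announced
    "address_space_fragmentation",     # number of distinct, non-contiguous blocks
]
X, y = df[features], df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Class weighting helps with the heavy imbalance between the few known
# serial hijackers and the many legitimate ASes.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, a pipeline like this would be applied to every AS in the global IPv4 routing table, with the positive predictions then analyzed and categorized by hand, as the paper does for its ≈900 flagged ASes.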
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel, 1960; Wiener, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles—the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)—rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs. Read More
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions
The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them. Read More