Identifying Nuances in Fake News vs. Satire: Using Semantic and Linguistic Cues

The blurry line between nefarious fake news and protected-speech satire has been a notorious struggle for social media platforms. In response to efforts to reduce exposure to misinformation on social media, purveyors of fake news have begun to masquerade as satire sites to avoid being demoted. In this work, we address the challenge of automatically classifying fake news versus satire. Previous work has studied whether fake news and satire can be distinguished based on language differences. In contrast to fake news, satire stories are usually humorous and carry some political or social message. We hypothesize that these nuances can be identified using semantic and linguistic cues. Consequently, we train a machine learning classifier using semantic representations from a state-of-the-art contextual language model together with linguistic features based on textual coherence metrics. Empirical evaluation attests to the merits of our approach compared to the language-based baseline and sheds light on the nuances between fake news and satire. As avenues for future work, we consider studying additional linguistic features related to the humor aspect, and enriching the data with current news events to help identify a political or social message. Read More
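
The abstract does not spell out the exact pipeline, so the sketch below only illustrates the general recipe it describes: semantic representations from a pretrained contextual language model concatenated with simple coherence-style linguistic features, fed to a plain classifier. The choice of bert-base-uncased, logistic regression, and the toy coherence features are assumptions for illustration, not the authors' setup.

```python
# Hedged sketch: contextual embeddings + coherence-style features -> classifier.
# Not the paper's exact pipeline; feature choices are illustrative placeholders.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> np.ndarray:
    """Mean-pooled contextual embedding of an article."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def coherence_features(text: str) -> np.ndarray:
    """Toy stand-ins for textual-coherence metrics: sentence count,
    mean sentence length, and type-token ratio."""
    sentences = [s for s in text.split(".") if s.strip()]
    tokens = text.split()
    return np.array([
        len(sentences),
        len(tokens) / max(len(sentences), 1),
        len(set(tokens)) / max(len(tokens), 1),
    ])

def featurize(texts):
    return np.vstack([np.concatenate([embed(t), coherence_features(t)]) for t in texts])

def train(texts, labels):
    """texts: list of article strings; labels: 1 = fake news, 0 = satire."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(featurize(texts), labels)
    return clf
```

In the paper the linguistic signal comes from proper textual-coherence metrics rather than proxies like these, but the overall shape of the pipeline is the same.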

#fake, #nlp

The Eighty Five Percent Rule for optimal learning

Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals, or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent-based learning algorithms, we find that the optimal error rate for training is around 15.87%, or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and for biologically plausible neural networks thought to describe animal learning. Read More
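
The quoted 15.87% is numerically the standard normal CDF evaluated at -1 (Φ(-1) ≈ 0.1587), which is presumably how it falls out of the paper's Gaussian-noise derivation. A minimal sanity check, assuming SciPy is available:

```python
# Sanity check: the "Eighty Five Percent Rule" error rate matches Phi(-1)
# for a standard Gaussian, i.e. ~15.87% errors / ~84.1% training accuracy.
from scipy.stats import norm

optimal_error_rate = norm.cdf(-1.0)           # ~0.1587
optimal_accuracy = 1.0 - optimal_error_rate   # ~0.8413, roughly 85%
print(f"optimal error rate        = {optimal_error_rate:.4f}")
print(f"optimal training accuracy = {optimal_accuracy:.4f}")
```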

#accuracy, #machine-learning

How to Spy on Your Neighbors With a USB TV Tuner

A TV-tuning USB dongle and free software let you hear the radio signals emitted by computer screens, TVs, smartphones — even keyboards.

“Every device that you own is screaming its name into the infinite void,” said security researcher Melissa Elliott this past Saturday (Aug. 3) at the DEF CON hacker conference in Las Vegas. Read More

#surveillance

The Fantasy of Opting Out

Those who know about us have power over us. Obfuscation may be our best digital weapon.

Consider a day in the life of a fairly ordinary person in a large city in a stable, democratically governed country. She is not in prison or institutionalized, nor is she a dissident or an enemy of the state, yet she lives in a condition of permanent and total surveillance unprecedented in its precision and intimacy. Read More

#surveillance

Future Military Intelligence CONOPS and S&T Investment Roadmap 2035-2050: The Cognitive War

There are four major findings critical to the effectiveness and success of future intelligence operations in 2035-2050 and beyond. The findings apply broadly not only to military intelligence but also to the greater Intelligence Community (IC), the DoD, and by extension to several other elements of our federal national security framework. However, realizing the Future Intelligence CONOPS 2035-2050 projection depends on addressing these findings. If they are not addressed, it is highly probable that intelligence operations will continue to mimic the reactive posture of today.

The four findings:

— The IC and DoD, created in 1947, continue to function in a primarily reactive posture, using the industrial age processes of the era in which they were created.
— Information and the democratization of technology have changed the character of warfare.
— The 2018 National Security and National Defense Strategies address the new character of warfare.
— Immediate investments are required to enable the success and effectiveness of future intelligence operations in 2035-2050 and beyond. Read More

#dod, #ic

National Security Commission on Artificial Intelligence (NSCAI): Initial Report

The National Security Commission on Artificial Intelligence — which is tasked with researching ways to advance the development of AI for national security and defense purposes — released its initial report to Congress July 31.

The panel has 15 members, led by Chairman Eric Schmidt, the former head of Google’s parent company Alphabet, and Vice Chairman Robert O. Work, a former deputy secretary of defense who served in the Obama administration. Read More

#china-vs-us, #dod, #ic

National Security Commission on Artificial Intelligence (NSCAI): Interim Report

In the report, the government-commissioned panel notes repeatedly that China is investing more in AI and is taking advantage of the U.S. to “transfer AI know-how.” The report also says that AI infrastructure within the Department of Defense “is severely underdeveloped.”

The Commission raised concerns about the progress China has made. The report also said the U.S. government still faces enormous work before it can transition AI from “a promising technological novelty into a mature technology integrated into core national security missions.” Read More

#china-vs-us, #dod, #ic

12 Must Watch TED Talks on AI

For all of you who are technology lovers, AI enthusiasts, or casual consumers with piqued interest, don’t miss the chance to learn about the newest advancements in artificial intelligence and to join the discussion on the ethics, logistics, and reality of super-intelligent machines. Explore the possibilities of super-intelligence improving our world and our everyday lives as you dive into this list of TED Talks on artificial intelligence. We have compiled the best TED Talks on AI, covering technological developments, innovation, and the future of the field.

Here are the best TED Talks for anyone interested in AI. Read More

#ted-talks

Profiling BGP Serial Hijackers: Capturing Persistent Misbehavior in the Global Routing Table

BGP hijacks remain an acute problem in today’s Internet, with widespread consequences. While hijack detection systems are readily available, they typically rely on a priori prefix-ownership information and are reactive in nature. In this work, we take a new perspective on BGP hijacking activity: we introduce and track the long-term routing behavior of serial hijackers, networks that repeatedly hijack address blocks for malicious purposes, often over the course of many months or even years. Based on a ground truth dataset that we construct by extracting information from network operator mailing lists, we illuminate the dominant routing characteristics of serial hijackers, and how they differ from legitimate networks. We then distill features that can capture these behavioral differences and train a machine learning model to automatically identify Autonomous Systems (ASes) that exhibit characteristics similar to serial hijackers. Our classifier identifies ≈900 ASes with similar behavior in the global IPv4 routing table. We analyze and categorize these networks, finding a wide range of indicators of malicious activity and misconfiguration, as well as benign hijacking activity. Our work presents a solid first step towards identifying and understanding this important category of networks, which can aid network operators in taking proactive measures to defend themselves against prefix hijacking and serve as input for current and future detection systems. Read More
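
The summary names the ingredients (per-AS behavioral features distilled from long-term routing data, ground-truth labels from operator mailing lists, and a supervised model) without fixing the exact feature set or model family. The sketch below is a hedged illustration of that classification step; the feature names and the random-forest choice are placeholders, not the paper's design.

```python
# Hedged sketch of the classification step: per-AS behavioral features ->
# supervised classifier flagging ASes with serial-hijacker-like behavior.
# Feature names and the model choice are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURE_NAMES = [
    "prefixes_originated",       # number of address blocks the AS has announced
    "origination_volatility",    # churn in originated prefixes over time
    "median_prefix_visibility",  # fraction of BGP vantage points seeing them
    "median_announcement_days",  # how long announcements stay in the table
    "distinct_address_space",    # total IPv4 space ever originated
]

def train_classifier(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """X: one row of behavioral features per AS; y: 1 = known serial hijacker,
    0 = legitimate AS (ground truth built from operator mailing lists)."""
    clf = RandomForestClassifier(n_estimators=500, class_weight="balanced")
    print("cross-validated F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
    clf.fit(X, y)
    return clf

def flag_suspicious(clf, X_all, as_numbers, threshold=0.8):
    """Score every AS in the routing table and return those whose predicted
    probability of hijacker-like behavior exceeds the threshold."""
    scores = clf.predict_proba(X_all)[:, 1]
    return [(asn, score) for asn, score in zip(as_numbers, scores) if score >= threshold]
```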

#cyber

From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices

The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Samuel, 1960; Wiener, 1961). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased AI's potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles, the ‘what’ of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability), rather than on practices, the ‘how.’ Awareness of the potential issues is increasing at a fast rate, but the AI community’s ability to take action to mitigate the associated risks is still in its infancy. Our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically minded developers apply ethics at each stage of the Machine Learning development pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, presents the initial findings, and provides a summary of future research needs. Read More

#ethics