IoT can be viewed as an ideal opportunity to bridge the worlds of information technology (IT) and operational technology (OT). IoT projects over the last several years have focused on device onboarding and getting data aggregated into the cloud…usually an IT-managed cloud. Access to that data, and analytical insight from it, have been parceled out by IT as a service. The “IT as a service” model is not well positioned to take advantage of the new opportunities presented by edge compute devices and their ability to run analytics at the point where the data originates.
Historically, IT and OT data did not intersect. The departmental silos were hardened by “division-of-responsibility” charters that ratified barriers to working together. But today’s ability to deploy analytics at the edge raises the need for the integration of the IT and OT worlds. Read More
Information Exposure From Consumer IoT Devices
Internet of Things (IoT) devices are increasingly found in everyday homes, providing useful functionality in devices such as TVs, smart speakers, and video doorbells. Along with their benefits come potential privacy risks, since these devices can communicate information about their users to other parties over the Internet. However, understanding these risks in depth and at scale is difficult due to heterogeneity in devices’ user interfaces, protocols, and functionality.
In this work, we conduct a multidimensional analysis of information exposure from 81 devices located in labs in the US and UK. Through a total of 34,586 rigorous automated and manual controlled experiments, we characterize information exposure in terms of the destinations of Internet traffic, whether the contents of communication are protected by encryption, what IoT device interactions can be inferred from such content, and whether there are unexpected exposures of private and/or sensitive information (e.g., video surreptitiously transmitted by a recording device). We highlight regional differences between these results, potentially due to different privacy regulations in the US and UK. Lastly, we compare our controlled experiments with data gathered from an in situ user study comprising 36 participants. Read More
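To give a flavor of this kind of analysis, here is a minimal Python sketch that summarizes where a device sends traffic and flags payloads that do not look encrypted. It assumes a pcap file captured from the device’s network and the scapy library; the filename and the entropy threshold are illustrative assumptions, not the methodology used in the paper.

```python
# Hypothetical sketch: summarize where an IoT device sends traffic and
# flag payloads that do not look encrypted, given a pcap of its activity.
# Assumes scapy is installed and "device.pcap" exists; the 7.0 bits/byte
# entropy threshold is an illustrative heuristic, not the paper's method.
import math
from collections import Counter

from scapy.all import rdpcap, IP, Raw

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed payloads approach 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

packets = rdpcap("device.pcap")
destinations = Counter()
encrypted_like = plaintext_like = 0

for pkt in packets:
    if IP not in pkt:
        continue
    destinations[pkt[IP].dst] += 1
    if Raw in pkt:
        if shannon_entropy(bytes(pkt[Raw].load)) > 7.0:
            encrypted_like += 1
        else:
            plaintext_like += 1  # candidate for unexpected exposure

print("Top traffic destinations:", destinations.most_common(5))
print(f"likely encrypted payloads: {encrypted_like}, "
      f"plaintext-looking payloads: {plaintext_like}")
```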
MIT Deep Learning Basics: Introduction and Overview with TensorFlow
As part of the MIT Deep Learning series of lectures and GitHub tutorials, we are covering the basics of using neural networks to solve problems in computer vision, natural language processing, games, autonomous driving, robotics, and beyond.
This blog post provides an overview of deep learning in 7 architectural paradigms with links to TensorFlow tutorials for each. It accompanies the following lecture on Deep Learning Basics as part of MIT course 6.S094. Read More
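For a taste of what the tutorials cover, here is a minimal TensorFlow/Keras sketch in the spirit of the first paradigm, feed-forward networks; it is not the course’s own code, just the classic MNIST starting point.

```python
# A minimal TensorFlow/Keras sketch: a fully connected network
# classifying MNIST digits, the usual feed-forward starting point.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```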
How Fog Computing is changing the BigData paradigm for IoT devices?
The new era of BigData and advances in technology have brought significant improvements in the functionality of IoT devices. The popularity of IoT devices has led to easier methods for BigData collection, analysis, and distribution at a rapid rate. According to a report by Statista, there will be 30 billion IoT devices worldwide by 2020, with this number set to exceed 75 billion by 2025. The accumulation of BigData across IoT devices and networks is clearly visible, and various computing methods are already popular for handling it, such as quantum computing, cloud computing, and edge/fog computing.
Though quantum computing has bright prospects, it has a long way to go; meanwhile, cloud computing is already a popular analytics approach among developers and data scientists. In 2014, a new method, ‘fogging’, was first coined at Cisco. Fogging is better known as edge computing or fog computing. Big data analytics tools like Hadoop help in reducing the cost of storage, which further increases the efficiency of the business. Read More
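To make the fog idea concrete, here is a hypothetical Python sketch in which an edge node aggregates raw sensor readings locally and ships only compact summaries to the cloud. The names read_sensor and send_to_cloud and the one-minute window are illustrative stand-ins, not any particular platform’s API.

```python
# Hypothetical sketch of the fog/edge idea: an edge node aggregates raw
# sensor readings locally and uploads only a compact summary, instead of
# streaming every data point to the cloud. read_sensor, send_to_cloud,
# and the window size are illustrative stand-ins.
import random
import statistics
import time

def read_sensor() -> float:
    """Stand-in for a real sensor read (e.g., a temperature probe)."""
    return 20.0 + random.gauss(0, 1)

def send_to_cloud(summary: dict) -> None:
    """Stand-in for an upload over MQTT or HTTP."""
    print("uploading summary:", summary)

WINDOW_SECONDS = 60  # fold one reading per second into one record

while True:
    readings = []
    for _ in range(WINDOW_SECONDS):
        readings.append(read_sensor())
        time.sleep(1)
    # 60 raw readings collapse into one small record: far less bandwidth
    # and cloud storage than shipping the raw stream upstream.
    send_to_cloud({
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    })
```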
Blind Spots in AI Just Might Help Protect Your Privacy
Machine learning, for all its benevolent potential to detect cancers and create collision-proof self-driving cars, also threatens to upend our notions of what’s visible and hidden. It can, for instance, enable highly accurate facial recognition, see through the pixelation in photos, and even—as Facebook’s Cambridge Analytica scandal showed—use public social media data to predict more sensitive traits like someone’s political orientation.
Those same machine-learning applications, however, also suffer from a strange sort of blind spot that humans don’t—an inherent bug that can make an image classifier mistake a rifle for a helicopter, or make an autonomous vehicle blow through a stop sign. Those misclassifications, known as adversarial examples, have long been seen as a nagging weakness in machine-learning models. Read More
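To illustrate, here is a minimal sketch of the fast gradient sign method (FGSM), one of the standard techniques for crafting such adversarial examples, written against a trained TensorFlow/Keras classifier. The model, input, and parameter values are assumptions for illustration, not any specific paper’s attack.

```python
# A minimal sketch of the fast gradient sign method (FGSM), a standard
# way to craft adversarial examples like those described above. Assumes
# a trained Keras classifier `model`, a batched input `image` with pixel
# values in [0, 1], and its integer class `label`, all illustrative.
import tensorflow as tf

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge `image` in the direction that increases the model's loss."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)
    # A small step along the sign of the gradient is usually imperceptible
    # to humans yet can be enough to flip the model's prediction.
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```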
Intelligence & National Security 2019 — Opening Plenary: Fireside Chat
Google’s ‘Quantum Supremacy’ Isn’t the End of Encryption
Google accidentally made computer science history last week. In recent years the company has been part of an intensifying competition with rivals such as IBM and Intel to develop quantum computers, which promise immense power on some problems by tapping into quantum physics. The search company has attempted to stand out by claiming its prototype quantum processors were close to demonstrating “quantum supremacy,” an evocative phrase referring to an experiment in which a quantum computer outperforms a classical one. One of Google’s lead researchers predicted the company would reach that milestone in 2017.
Friday, news slipped out that Google had reached the milestone. The Financial Times drew notice to a draft research paper that had been quietly posted to a NASA website in which Google researchers describe achieving quantum supremacy. Read More
Harnessing Data at the Speed of War
Decades of parochialism within the U.S. military fostered isolated digital networks that force the user to serve as integrator, squandering organizational energy and intellect. For the past 18 years, the U.S. and our partners have been fighting counterinsurgency and counterterrorism wars in Iraq and Afghanistan. In these theaters, arcane methods of digital collaboration with complicated workarounds became the norm, but our ability to set favorable conditions for operations mitigated the egregious distraction of outdated networks. In other words, we got away with it. We are unlikely to be so fortunate against adversaries like Russia or China that can match or exceed our capabilities. The ability to rapidly synthesize data to inform decision-making across all echelons and domains is necessary to achieve victory.
The services lack the ability to effectively communicate – whether internally, amongst one another, with the intelligence community, or with multinational partners. The DoD needs to establish joint data standards with the goal of creating commonly-accessible data. Without commonly-accessible data, the U.S. military will not realize the potential of 21st-century technologies. We remain too reactive and slow to mitigate risk and seize opportunities,[1] and our function- and service-centric infrastructure prevents digital collaboration. Read More
AI Augmentation: The Real Future of Artificial Intelligence
I love Grammarly, the writing correction software from Grammarly, Inc. As a writer, it has proved invaluable to me time and time again, popping up quietly to say that I forgot a comma, got a bit too verbose on a sentence, or have used too many adverbs. I even sprung for the professional version.
Besides endorsing it, I bring Grammarly up for another reason. It is the face of augmentative AI. Read More
High quality, lightweight and adaptable TTS using LPCNet
We present a lightweight adaptable neural TTS system with high quality output. The system is composed of three separate neural network blocks: prosody prediction, acoustic feature prediction and Linear Prediction Coding Net as a neural vocoder. This system can synthesize speech with close to natural quality while running 3 times faster than real-time on a standard CPU.
The modular setup of the system allows for simple adaptation to new voices with a small amount of data. Read More
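As a rough outline of that modular structure, the Python sketch below shows how the three blocks might compose. The class and method names, and the stub outputs, are hypothetical stand-ins, not the system’s actual API; in the real system each block is a trained neural network.

```python
# Hypothetical outline of the three-block TTS pipeline described above.
# All names and stub outputs are illustrative, not the paper's API.
class ProsodyPredictor:
    def predict(self, phonemes):
        """Map a phoneme sequence to durations and pitch values."""
        return [(p, 0.1, 120.0) for p in phonemes]  # stub output

class AcousticFeaturePredictor:
    def predict(self, prosody):
        """Map prosody-annotated phonemes to frame-level acoustic features."""
        return [[0.0] * 20 for _ in prosody]  # stub feature frames

class LPCNetVocoder:
    def synthesize(self, features):
        """Turn acoustic feature frames into waveform samples."""
        return [0.0] * (len(features) * 160)  # stub audio

def text_to_speech(phonemes, prosody_net, acoustic_net, vocoder):
    prosody = prosody_net.predict(phonemes)
    features = acoustic_net.predict(prosody)
    return vocoder.synthesize(features)

samples = text_to_speech(["h", "e", "l", "o"],
                         ProsodyPredictor(),
                         AcousticFeaturePredictor(),
                         LPCNetVocoder())
```

Because the blocks are decoupled, adapting to a new voice can mean fine-tuning individual blocks on a small amount of target-speaker data rather than retraining the whole system.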