In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard, which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure that can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks. Read More
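The “committee of experts” fusion described in the abstract can be sketched as a soft vote over the four experts’ authenticity scores. This is only an illustration, not the paper’s actual combiner (which is a trained model); the function name, equal default weights, and 0.5 threshold are all assumptions.

```python
import numpy as np

def committee_decision(expert_scores, weights=None, threshold=0.5):
    """Fuse the four experts' authenticity scores (light, context,
    surface, depth) into one real-vs-phantom verdict via a plain,
    optionally weighted, soft vote."""
    scores = np.asarray(expert_scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores)  # equal trust in every expert
    fused = float(np.average(scores, weights=weights))
    return fused >= threshold  # True -> treat the object as real
```

A trained combiner can also learn that some experts (e.g., depth) deserve more weight than others, which a fixed average cannot.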
Monthly Archives: October 2020
NLP with CNNs
A step by step explanation, with a Keras implementation of the architecture.
Convolutional neural networks (CNNs) are the most widely used deep learning architectures in image processing and image recognition. Given their success in vision, it’s only natural that they would be tried in other fields of machine learning. In this article, I will explain the important CNN terminology from a natural language processing perspective; a short Keras implementation with code explanations is also provided. Read More
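The core operations a text CNN applies (sliding 1-D filters over a sequence of word embeddings, a ReLU, then global max pooling) can be illustrated in plain NumPy. The article itself builds this with Keras layers, so the function names and shapes here are illustrative assumptions only.

```python
import numpy as np

def conv1d_text(embeddings, filters, stride=1):
    """1-D convolution over text.
    embeddings: (seq_len, emb_dim) word vectors for one sentence.
    filters:    (n_filters, kernel_size, emb_dim) learned weights;
                each filter spans `kernel_size` consecutive words."""
    seq_len, emb_dim = embeddings.shape
    n_filters, kernel, _ = filters.shape
    out_len = (seq_len - kernel) // stride + 1
    out = np.zeros((out_len, n_filters))
    for i in range(out_len):
        window = embeddings[i * stride : i * stride + kernel]
        # dot each filter with the window, then ReLU
        out[i] = np.maximum(0.0, np.tensordot(filters, window,
                                              axes=([1, 2], [0, 1])))
    return out

def global_max_pool(features):
    """Keep each filter's strongest activation, discarding position."""
    return features.max(axis=0)
```

Global max pooling is what lets the model flag an n-gram feature wherever it appears in the sentence, which is why it is the usual pooling choice for text.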
How AI is powering a more helpful Google
Search On… At the heart of Google Search is our ability to understand your query and rank relevant results for that query. We’ve invested deeply in language understanding research, and last year we introduced how BERT language understanding systems are helping to deliver more relevant results in Google Search. Today we’re excited to share that BERT is now used in almost every query in English, helping you get higher quality results for your questions. We’re also sharing several new advancements to search ranking, made possible through our latest research in AI. Read More
VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training
It is highly desirable yet challenging to generate image captions that can describe novel objects which are unseen in caption-labeled training data, a capability that is evaluated in the novel object captioning challenge (nocaps). In this challenge, no additional image-caption training data, other than COCO Captions, is allowed for model training. Thus, conventional Vision-Language Pre-training (VLP) methods cannot be applied. This paper presents VIsual VOcabulary pre-training (VIVO) that performs pre-training in the absence of caption annotations. By breaking the dependency on paired image-caption training data in VLP, VIVO can leverage large amounts of paired image-tag data to learn a visual vocabulary. This is done by pre-training a multi-layer Transformer model that learns to align image-level tags with their corresponding image region features. To address the unordered nature of image tags, VIVO uses a Hungarian matching loss with masked tag prediction to conduct pre-training.
We validate the effectiveness of VIVO by fine-tuning the pre-trained model for image captioning. In addition, we perform an analysis of the visual-text alignment inferred by our model. The results show that our model can not only generate fluent image captions that describe novel objects, but also identify the locations of these objects. Our single model has achieved new state-of-the-art results on nocaps and surpassed the human CIDEr score. Read More
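The Hungarian matching loss mentioned above handles the fact that an image’s tags have no canonical order: predicted tag slots are matched to the ground-truth tags by the assignment that minimizes total cost before the loss is computed. A brute-force sketch of that matching step follows; the function name and inputs are assumptions, and VIVO’s actual implementation (and the polynomial-time Hungarian algorithm itself) is more involved.

```python
import math
from itertools import permutations

def hungarian_match_cost(slot_probs, target_tags):
    """slot_probs:  one dict per masked slot, mapping tag -> predicted
                    probability for that slot.
    target_tags:    the image's unordered ground-truth tags
                    (len == number of slots).
    Returns the minimum total negative log-likelihood over all
    slot-to-tag assignments, found here by brute force (tag sets per
    image are small enough for this to be illustrative)."""
    best = float("inf")
    for perm in permutations(target_tags):
        cost = sum(-math.log(slot_probs[i].get(tag, 1e-9))
                   for i, tag in enumerate(perm))
        best = min(best, cost)
    return best
```

Because the loss takes the best assignment, the model is never penalized merely for predicting correct tags in a different order than the annotation lists them.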
ServiceNow, IBM to integrate Watson AIOps, IT service management
Under this partnership, the two companies will initially launch software that uses ServiceNow’s IT Service Management historical incident data to train Watson AIOps algorithms.
The partnership aims to meld IBM’s Watson AIOps with ServiceNow’s IT Service Management and Operations Management Visibility as enterprises look to automate more of their operations. Read More
Machine Learning Reference Architectures from Google, Facebook, Uber, DataBricks and Others
Despite the hype surrounding machine learning and artificial intelligence (AI), most efforts in the enterprise remain at the pilot stage. Part of the reason is the natural experimentation associated with machine learning projects, but a significant component is the lack of maturity of machine learning architectures. This problem is particularly visible in enterprise environments, where the application lifecycle management practices of modern machine learning solutions conflict with corporate practices and regulatory requirements. What are the key architectural building blocks that organizations should put in place when adopting machine learning solutions? The answer is not trivial, but recently we have seen efforts from research labs and AI data science teams that are starting to lay down the path toward reference architectures for large-scale machine learning solutions. Read More
Ex-Google chief: U.S. must do ‘whatever it takes’ to beat China on AI
“We want America to be inventing this stuff,” Eric Schmidt said during POLITICO’s summit on artificial intelligence. “Or at least the West.”
The U.S. needs an urgent national strategy on developing artificial intelligence technology to counter the rising competition from China, said former Google CEO Eric Schmidt, chair of the National Security Commission on Artificial Intelligence. Read More
‘Less Than One’-Shot Learning: Learning N Classes From M<N Samples
Deep neural networks require large training sets but suffer from high computational cost and long training times. Training on much smaller training sets while maintaining nearly the same accuracy would be very beneficial. In the few-shot learning setting, a model must learn a new class given only a small number of samples from that class. One-shot learning is an extreme form of few-shot learning where the model must learn a new class from a single example. We propose the ‘less than one’-shot learning task where models must learn N new classes given only M < N examples and we show that this is achievable with the help of soft labels. We use a soft-label generalization of the k-Nearest Neighbors classifier to explore the intricate decision landscapes that can be created in the ‘less than one’-shot learning setting. We analyze these decision landscapes to derive theoretical lower bounds for separating N classes using M < N soft-label samples and investigate the robustness of the resulting systems. Read More
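The soft-label generalization of k-Nearest Neighbors mentioned in the abstract can be sketched as follows: each training point carries a probability distribution over classes rather than a hard label, and a query averages its nearest neighbors’ distributions. This is a minimal, unweighted sketch under assumed names; the paper’s soft-label kNN variant may weight neighbors differently.

```python
import numpy as np

def soft_knn_predict(train_X, train_soft_labels, query, k=3):
    """train_X:           (n, d) training points.
    train_soft_labels:    (n, c) per-point class distributions
                          (soft labels), rows summing to 1.
    query:                (d,) point to classify.
    Returns a length-c class distribution for the query."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of k closest points
    probs = train_soft_labels[nearest].mean(axis=0)
    return probs / probs.sum()               # renormalize
```

The key idea behind ‘less than one’-shot learning is that, with carefully placed soft-label points, such a classifier can carve out more decision regions than it has training samples.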
Deep reinforcement learning, symbolic learning and the road to AGI
Tim Rocktäschel on the TDS podcast
This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds. Read More
What Does It Take To Scale An AI Company? Founders And Investors Share Their Insights
In recent years, it’s become increasingly clear that Artificial Intelligence (AI) startups can scale to become $1 billion-plus companies. When it comes to innovation at the early stages, there is a pressing need to differentiate between hype and actual potential for scale and impact. Today, many startups claim to be innovating through the use of AI. Whilst some succeed, others fail to deliver upon their promise. How does one go about cutting through the noise and identifying the AI startups that have the most potential for scale?
Ask four key questions:
- Is the company solving a high-value use case in a specific domain?
- Does the team have deep domain expertise along with access to unique datasets and other assets?
- Does the team have deep technical, AI, and data expertise?
- Does the team have a commercial balance with expertise in selling and working with enterprises?