In a fast-moving world, customers expect efficient, prompt service when interacting with any company. This is where chatbots and Intelligent Virtual Assistants (IVAs) come into play.
According to Grand View Research, the global intelligent virtual assistant market size was valued at USD 3.7 billion in 2019 and is expected to grow at a Compound Annual Growth Rate (CAGR) of 34.0% over the forecast period. Read More
Monthly Archives: August 2020
Here’s why Apple believes it’s an AI leader—and why it says critics have it all wrong
Apple AI chief and ex-Googler John Giannandrea dives into the details with Ars.
Historically, Apple has not had a public reputation for leading in this area. That’s partially because people associate AI with digital assistants, and reviewers frequently call Siri less useful than Google Assistant or Amazon Alexa. And with ML, many tech enthusiasts say that more data means better models—but Apple is not known for data collection in the same way as, say, Google.
Despite this, Apple has included dedicated hardware for machine learning tasks in most of the devices it ships. Read More
Explainable AI: A guide for making black box machine learning models explainable
In the future, AI will explain itself, and interpretability could boost machine intelligence research. Getting the basics right is the way there, and Christoph Molnar’s book is a good place to start.
Christoph Molnar is a data scientist and PhD candidate in interpretable machine learning. Molnar has written the book “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”, in which he elaborates on the issue and examines methods for achieving explainability. Read More
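The methods Molnar’s book covers are largely model-agnostic. As a taste of the genre, here is a minimal sketch of permutation feature importance, one of the techniques the book discusses, in plain NumPy. The toy data, the linear stand-in for a “black box” model, and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Stand-in "black box": a least-squares linear fit.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ w

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(predict, X, y, n_repeats=10, seed=1):
    """A feature's importance is the increase in prediction error
    when its column is randomly shuffled, which breaks that
    feature's link to the target."""
    rng = np.random.default_rng(seed)
    baseline = mse(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle one column in place
            scores.append(mse(y, predict(Xp)))
        importances[j] = np.mean(scores) - baseline
    return importances

imp = permutation_importance(predict, X, y)
```

Because the method only needs predictions, it works unchanged on any model, which is exactly what “model-agnostic” means in the book.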
Black Hat 2020: Open-Source AI to Spur Wave of ‘Synthetic Media’ Attacks
The explosion of open-source AI models is lowering the barrier to entry for bad actors to create fake video, audio and images – and Facebook, Twitter and other platforms aren’t ready.
An abundance of deep-learning and open-source technologies is making it easy for cybercriminals to generate fake images, text and audio called “synthetic media”. This type of media can be easily leveraged on Facebook, Twitter and other social media platforms to launch disinformation campaigns with hijacked identities.
At a Wednesday session at Black Hat USA 2020, researchers with FireEye demonstrated how freely-available, open-source tools – which offer pre-trained natural language processing, computer vision, and speech recognition tools – can be used to create malicious synthetic media. Read More
The hack that could make face recognition think someone else is you
Researchers have demonstrated that they can fool a modern face recognition system into seeing someone who isn’t there.
A team from the cybersecurity firm McAfee set up the attack against a facial recognition system similar to those currently used at airports for passport verification. By using machine learning, they created an image that looked like one person to the human eye, but was identified as somebody else by the face recognition algorithm—the equivalent of tricking the machine into allowing someone to board a flight despite being on a no-fly list. Read More
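McAfee has not published the full details of its attack, but the general principle behind such adversarial examples is well known. Below is a deliberately toy sketch in the style of the Fast Gradient Sign Method (FGSM), using a made-up linear “face matcher” rather than any real recognition system; the point is only that a small, bounded perturbation can push a model toward a chosen identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up linear "face matcher": score = sigmoid(w·x + b), where
# score > 0.5 means "this is identity A". Real systems are deep
# networks; a linear model keeps the gradient easy to see.
w = rng.normal(size=64) * 0.1
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def match_score(x):
    return sigmoid(w @ x + b)

def fgsm_targeted(x, epsilon=0.1):
    """One FGSM step toward the target label "match" (y = 1).
    The gradient of the logistic loss w.r.t. x is (score - 1) * w;
    stepping against its sign raises the match score while changing
    no input element by more than epsilon."""
    grad = (match_score(x) - 1.0) * w
    return x - epsilon * np.sign(grad)

x = rng.normal(size=64)            # stand-in "image" of person B
adv = fgsm_targeted(x, epsilon=0.2)
```

The epsilon bound is what keeps the perturbed image looking unchanged to a human while the model’s score moves toward the target identity.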
GPT-3 Written Blog Got 26 Thousand Visitors in 2 Weeks
The future of online media
What does it mean when a computer can write about our problems better than we can?
People have been talking a lot about GPT-3, but more as a novelty than a tool (don’t know what GPT-3 is? look here). Some clever people have even figured out how to get it to generate code from descriptions. Yet, I think that the best use cases lie outside of tech. Read More
Hackers Broke Into Real News Sites to Plant Fake Stories
Over the past few years, online disinformation has taken evolutionary leaps forward, with the Internet Research Agency pumping out artificial outrage on social media and hackers leaking documents—both real and fabricated—to suit their narrative. More recently, Eastern Europe has faced a broad campaign that takes fake news ops to yet another level: hacking legitimate news sites to plant fake stories, then hurriedly amplifying them on social media before they’re taken down.
On Wednesday, security firm FireEye released a report on a disinformation-focused group it’s calling Ghostwriter. The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and US troops in Poland and the Baltics; they’ve posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content. Read More
BART for Paraphrasing with Simple Transformers
BART is a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
– BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension –
Don’t worry if that sounds a little complicated; we are going to break it down and see what it all means. To add a little bit of background before we dive into BART, it’s time for the now-customary ode to Transfer Learning with self-supervised models. It’s been said many times over the past couple of years, but Transformers really have achieved incredible success in a wide variety of Natural Language Processing (NLP) tasks. Read More
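As a concrete (and deliberately simplified) illustration of the “corrupting text with a noising function” step, here is a toy Python version of BART-style token masking. The real pretraining masks whole spans with lengths drawn from a Poisson(3) distribution and also uses deletion, permutation, and rotation noise; this sketch masks single tokens and is not the actual BART implementation:

```python
import random

def corrupt(tokens, mask_token="<mask>", mask_prob=0.3, seed=0):
    """Toy stand-in for BART's noising function: independently
    replace each token with a mask symbol at rate mask_prob."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < mask_prob else t for t in tokens]

tokens = "the quick brown fox jumps over the lazy dog".split()
noisy = corrupt(tokens)
# A seq2seq model is then trained to map `noisy` back to `tokens`.
```

That reconstruction objective is the whole of step (2) in the quote above: the encoder sees the corrupted text, and the decoder learns to emit the original.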
Evolving IT environments require officials to plan for next-generation SOCs.
Today’s hybrid IT environments, which combine cloud and on-premises infrastructure, demand structural changes to agency security operations centers, or SOCs, so they can operate proactively in cyberspace rather than simply react to it.
SOCs face plenty of challenges: serving the needs of remote and teleworking employees, managing multiple cloud platforms, and dealing with the exploding number of IT-configurable devices on emerging 5G networks. Read More
Can Your AI Differentiate Cats from Covid-19? Sample-Efficient Uncertainty Estimation for Deep Learning Safety
Deep Neural Networks (DNNs) are known to make highly overconfident predictions on out-of-distribution data. Recent research has shown that uncertainty-aware models, such as Bayesian Neural Networks (BNNs) and Deep Ensembles, are less susceptible to this issue. However, research in this area has been largely confined to the big-data setting. In this work, we show that even state-of-the-art BNNs and Ensemble models tend to make overconfident predictions when the amount of training data is insufficient. This is especially concerning for emerging applications in the physical sciences and healthcare, where overconfident and inaccurate predictions can have disastrous consequences. To address the issue of accurate uncertainty (or confidence) estimation in the small-data regime, we propose a probabilistic generalization of the popular, sample-efficient, non-parametric kNN approach. We demonstrate the usefulness of the proposed approach on a real-world application of COVID-19 diagnosis from chest X-rays by (a) highlighting surprising failures of existing techniques, and (b) achieving superior uncertainty quantification compared to the state of the art. Read More
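The paper’s probabilistic generalization of kNN is not reproduced here, but the non-parametric intuition it builds on can be sketched in a few lines of NumPy: neighbor class frequencies serve as a crude predictive distribution, and the mean distance to those neighbors acts as an out-of-distribution signal. All data and names below are invented for illustration:

```python
import numpy as np

def knn_predict_proba(X_train, y_train, x, k=5, n_classes=2):
    """Class frequencies among the k nearest neighbors give a crude
    predictive distribution; the mean neighbor distance is a simple
    out-of-distribution (OOD) score."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    proba = np.bincount(y_train[nn], minlength=n_classes) / k
    return proba, float(d[nn].mean())

# Two well-separated 2-D classes.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(50, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.3, size=(50, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 50 + [1] * 50)

# In-distribution query vs. a far-away (OOD) query.
p_in, d_in = knn_predict_proba(X_train, y_train, np.array([-2.0, 0.0]))
p_out, d_out = knn_predict_proba(X_train, y_train, np.array([0.0, 8.0]))
```

A softmax-style classifier would happily assign the far-away query to one class with high confidence; the large neighbor distance is what lets a kNN-based approach flag it as unfamiliar instead.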