Many AIs that appear to understand language and that score better than humans on a common set of comprehension tasks don’t notice when the words in a sentence are jumbled up, which shows that they don’t really understand language at all. The problem lies in the way natural-language processing (NLP) systems are trained; it also points to a way to make them better.
Researchers at Auburn University in Alabama and Adobe Research discovered the flaw when they tried to get an NLP system to generate explanations for its behavior, such as why it claimed different sentences meant the same thing. When they tested their approach, they realized that shuffling words in a sentence made no difference to the explanations. “This is a general problem to all NLP models,” says Anh Nguyen at Auburn University, who led the work. Read More
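For a concrete sense of the failure mode, here is a minimal sketch of that kind of word-order sanity check using a generic Hugging Face paraphrase classifier. The checkpoint name, label ordering, and example sentences are illustrative assumptions, not the authors' exact models or data.

```python
# Compare a paraphrase model's prediction on a sentence pair before and after
# shuffling the words. If the probabilities barely change, the model is
# effectively ignoring word order.
import random

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "textattack/bert-base-uncased-MRPC"  # assumed paraphrase-detection checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def paraphrase_prob(sent_a: str, sent_b: str) -> float:
    """Return the model's probability that the two sentences are paraphrases."""
    inputs = tokenizer(sent_a, sent_b, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # assuming index 1 = "paraphrase"

def shuffle_words(sentence: str, seed: int = 0) -> str:
    """Randomly permute the words of a sentence."""
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

a = "Does marijuana cause cancer?"
b = "How can smoking marijuana give you lung cancer?"

print("original :", paraphrase_prob(a, b))
print("shuffled :", paraphrase_prob(shuffle_words(a), shuffle_words(b)))
```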

High-Quality Background Removal Without Green Screens
Human matting is a task whose goal is to find any human in a picture and separate them from the background. It is hard to achieve because the person or people must be extracted with a precise contour. … The MODNet background removal technique can extract a person from a single input image in real time, without the need for a green screen! Read More
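To make the pipeline concrete, here is a minimal sketch of the compositing step that follows matting. The `predict_alpha` function is a placeholder for any trained human-matting network (for example, a MODNet checkpoint), and the file names are illustrative.

```python
# Use a predicted alpha matte to replace the background behind a person.
import numpy as np
from PIL import Image

def predict_alpha(image: np.ndarray) -> np.ndarray:
    """Placeholder: a matting model returning an HxW alpha matte in [0, 1]."""
    raise NotImplementedError("plug in a trained matting model here")

fg_img = Image.open("person.jpg").convert("RGB")
bg_img = Image.open("new_background.jpg").convert("RGB").resize(fg_img.size)
foreground = np.asarray(fg_img, dtype=np.float32) / 255.0
background = np.asarray(bg_img, dtype=np.float32) / 255.0

alpha = predict_alpha(foreground)[..., None]                  # HxWx1 matte
composite = alpha * foreground + (1.0 - alpha) * background   # standard alpha compositing
Image.fromarray((composite * 255).astype(np.uint8)).save("composited.png")
```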
How explainable artificial intelligence can help humans innovate
The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level.
However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.
Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. Read More
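As one simple illustration of what "explaining itself" can mean in practice, the sketch below uses permutation importance, a generic, model-agnostic attribution method (not the specific techniques discussed in the article), to rank which input features a black-box classifier actually relies on.

```python
# Permutation importance: measure how much test accuracy drops when each
# feature is shuffled, then report the features the model depends on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five features whose shuffling hurts accuracy the most.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<30} {result.importances_mean[idx]:.3f}")
```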
What Happens When AI Has An Overactive Imagination?
Defining enterprise AI: From ETL to modern AI infrastructure
The promise of enterprise AI is built on old ETL technologies, and it relies on an AI infrastructure that effectively integrates and processes large volumes of data. … Effective data integration is critical for enterprise AI. Data is the lifeblood of enterprise AI applications, and its extraction and storage must be optimized. Read More
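As a toy illustration of the extract-transform-load pattern the piece refers to, the sketch below pulls raw records, cleans them, and loads them into a queryable store. The file names, columns, and SQLite "warehouse" are invented stand-ins, not any particular product's pipeline.

```python
# Extract raw data, transform it into a clean shape, and load it where
# downstream AI workloads can query it.
import sqlite3

import pandas as pd

# Extract: read raw data from a source system (here, a CSV export).
raw = pd.read_csv("raw_events.csv")

# Transform: clean and normalize so it is ready for training/analytics.
clean = (
    raw.dropna(subset=["user_id", "event_time"])
       .assign(event_time=lambda df: pd.to_datetime(df["event_time"], utc=True))
       .drop_duplicates()
)

# Load: write into a queryable store (a SQLite file stands in for a warehouse).
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("events", conn, if_exists="replace", index=False)
```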
Machine Learning Metadata (MLMD): A Library to Track the Full Lineage of a Machine Learning Workflow
Version control is used to keep track of modifications made to software code. Similarly, when building machine learning (ML) systems, it is essential to track things such as the datasets used to train the model, the hyperparameters and pipeline used, the version of TensorFlow used to create the model, and more.
The history and lineage of ML artifacts are far more complicated than a simple, linear log. Git can track the code to some extent, but models, datasets, and other artifacts need something comparable to track them as well.
Therefore, the researchers have introduced Machine Learning Metadata (MLMD), a standalone library that tracks the full lineage of an ML workflow, from data ingestion and preprocessing through validation, training, evaluation, and deployment. MLMD also comes integrated with TensorFlow Extended. Read More
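Here is a minimal sketch of how lineage gets recorded with MLMD, loosely following the library's basic API: register artifact and execution types, record concrete artifacts and runs, and connect them with events. The store path, type names, and URIs are illustrative.

```python
# Record a dataset, a training run, and the link between them in an MLMD store.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Open (or create) a local SQLite-backed metadata store.
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = "mlmd.sqlite"
connection_config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)

# Register a type for dataset artifacts.
data_type = metadata_store_pb2.ArtifactType()
data_type.name = "DataSet"
data_type.properties["split"] = metadata_store_pb2.STRING
data_type_id = store.put_artifact_type(data_type)

# Record a concrete dataset artifact.
data_artifact = metadata_store_pb2.Artifact()
data_artifact.type_id = data_type_id
data_artifact.uri = "path/to/train.tfrecord"
data_artifact.properties["split"].string_value = "train"
[data_artifact_id] = store.put_artifacts([data_artifact])

# Register an execution type and record a training run.
trainer_type = metadata_store_pb2.ExecutionType()
trainer_type.name = "Trainer"
trainer_type_id = store.put_execution_type(trainer_type)

trainer_run = metadata_store_pb2.Execution()
trainer_run.type_id = trainer_type_id
[run_id] = store.put_executions([trainer_run])

# Link the dataset to the run as an input event, forming the lineage graph.
event = metadata_store_pb2.Event()
event.artifact_id = data_artifact_id
event.execution_id = run_id
event.type = metadata_store_pb2.Event.DECLARED_INPUT
store.put_events([event])
```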
Discrete Latent Space World Models for Reinforcement Learning
Sample efficiency remains a fundamental issue of reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a model. We propose a new neural network architecture for world models based on a vector quantized variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which only allows 100K interactions with the real environment, and show that we reach better performance than their SimPLe algorithm in five out of six randomly selected Atari environments, while our model is significantly smaller. Read More
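For readers unfamiliar with the discrete bottleneck involved, here is a generic sketch of a VQ-VAE-style vector-quantization layer, the component that turns encoder features into the embedding indices a dynamics model would predict. It is not the paper's architecture or hyperparameters.

```python
# Snap continuous encoder features onto a learned codebook; the resulting
# discrete indices are what a dynamics model (e.g. a convolutional LSTM)
# would learn to predict one step ahead.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, code_dim, H, W) continuous encoder output.
        b, d, h, w = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, d)         # (B*H*W, d)
        dists = torch.cdist(flat, self.codebook.weight)       # distance to each code
        indices = dists.argmin(dim=1)                         # discrete latent indices
        z_q = self.codebook(indices).view(b, h, w, d).permute(0, 3, 1, 2)

        # Codebook + commitment losses, with a straight-through gradient estimator.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices.view(b, h, w), loss

# The world model encodes an observation, quantizes it, and trains a recurrent
# model to predict the next frame's indices; PPO then trains on imagined rollouts.
vq = VectorQuantizer()
z_q, idx, vq_loss = vq(torch.randn(2, 64, 8, 8))
print(z_q.shape, idx.shape, vq_loss.item())
```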
IMS unveils driverless Indy car that will race in October Indy Autonomous Challenge
It’s a car that, on the surface, will be familiar to mainstream IndyCar fans but a version that may have Tony Hulman doing a double-take from the grave.
The sleek, black Dallara IL-15 unveiled Monday to run in a 20-lap race later this year is an Indy Lights car in almost every way. Staring at the cockpit, you might first notice the missing protective halo device that Lights drivers will run with in 2021, but look closer … and there's no cockpit at all. Read More