OpenAI has disbanded its robotics team after years of research into machines that can learn to perform tasks like solving a Rubik’s Cube. Company cofounder Wojciech Zaremba quietly revealed on a podcast hosted by startup Weights & Biases that OpenAI has shifted its focus to other domains, where data is more readily available.
“So it turns out that we can make a gigantic progress whenever we have access to data. And I kept all of our machinery unsupervised, [using] reinforcement learning — [it] work[s] extremely well. There [are] actually plenty of domains that are very, very rich with data. And ultimately that was holding us back in terms of robotics,” Zaremba said. “The decision [to disband the robotics team] was quite hard for me. But I got the realization some time ago that actually, that’s for the best from the perspective of the company.” Read More
Monthly Archives: July 2021
What could make AI conscious? with Wojciech Zaremba, co-founder of OpenAI
The Computer Scientist Training AI to Think With Analogies
Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies.
The Pulitzer Prize-winning book Gödel, Escher, Bach inspired legions of computer scientists in 1979, but few were as inspired as Melanie Mitchell. After reading the 777-page tome, Mitchell, a high school math teacher in New York, decided she “needed to be” in artificial intelligence. She soon tracked down the book’s author, AI researcher Douglas Hofstadter, and talked him into giving her an internship. She had only taken a handful of computer science courses at the time, but he seemed impressed with her chutzpah and unconcerned about her academic credentials.
Mitchell prepared a “last-minute” graduate school application and joined Hofstadter’s new lab at the University of Michigan in Ann Arbor. The two spent the next six years collaborating closely on Copycat, a computer program which, in the words of its co-creators, was designed to “discover insightful analogies, and to do so in a psychologically realistic way.” Read More
New toolkit aims to help teams create responsible human-AI experiences
Microsoft has released the Human-AI eXperience (HAX) Toolkit, a set of practical tools to help teams adopt best practices, strategically and responsibly, when creating artificial intelligence technologies that interact with people.
The toolkit comes as AI-infused products and services, such as virtual assistants, route planners, autocomplete, recommendations and reminders, are becoming increasingly popular and useful for many people. But these applications have the potential to do things that aren’t helpful, like misunderstand a voice command or misinterpret an image. In some cases, AI systems can demonstrate disruptive behaviors or even cause harm. Read More
Pre-trained deep learning imagery models update (July 2021)
The amount of imagery that’s collected and disseminated has increased by orders of magnitude over the past couple of years. Deep learning has been instrumental in efficiently extracting and deriving meaningful insights from these massive amounts of imagery. Last October, we released pre-trained geospatial deep learning models, making deep learning more approachable and accessible to a wide spectrum of users.
These models have been pre-trained by Esri on large volumes of data and can be used as-is, or further fine-tuned to your local geography, objects of interest, or type of imagery. You no longer need huge volumes of training data and imagery, massive compute resources, or the expertise to train such models yourself. With the pre-trained models, you can bring in raw data or imagery and extract geographic features at the click of a button. Read More
Scientists adopt deep learning for multi-object tracking
Their novel framework achieves state-of-the-art performance without sacrificing efficiency in public surveillance tasks
Implementing algorithms that can simultaneously track multiple objects is essential to unlock many applications, from autonomous driving to advanced public surveillance. However, it is difficult for computers to discriminate between detected objects based on their appearance. Now, researchers at the Gwangju Institute of Science and Technology (GIST) have adapted deep learning techniques in a multi-object tracking framework, overcoming short-term occlusion and achieving remarkable performance without sacrificing computational speed. Read More
Read the Paper
Google’s Supermodel: DeepMind Perceiver is a step on the road to an AI machine that could process anything and everything
The Perceiver is a kind of way station on the road to what Google AI lead Jeff Dean has described as one model that could handle any task and “learn” faster, with less data.
Arguably one of the premier events that has brought AI to popular attention in recent years was the invention of the Transformer by Ashish Vaswani and colleagues at Google in 2017. The Transformer led to many language programs, such as Google’s BERT and OpenAI’s GPT-3, that have been able to produce surprisingly human-seeming sentences, giving the impression that machines can write like a person.
Now, scientists at DeepMind in the U.K., which is owned by Google, want to take the benefits of the Transformer beyond text, to let it revolutionize other kinds of data, including images, sound and video, and spatial data of the kind a car records with LiDAR.
The Perceiver, unveiled this week by DeepMind in a paper posted on arXiv, adapts the Transformer with some tweaks to let it consume all those types of input, and to perform on the various tasks, such as image recognition, for which separate kinds of neural networks are usually developed. Read More
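The core trick behind the Perceiver is to have a small, fixed-size array of latent vectors attend to an arbitrarily long input array (pixels, audio samples, point clouds), so the model's cost no longer scales quadratically with input length. Here is a minimal numpy sketch of that cross-attention step — an illustration of the general idea under assumed toy dimensions, not DeepMind's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(latents, inputs, d_k=32, seed=0):
    """One cross-attention step: a small fixed latent array queries an
    arbitrary-length input array. Output shape depends only on the latents."""
    rng = np.random.default_rng(seed)
    # Random projection weights stand in for learned parameters.
    Wq = rng.standard_normal((latents.shape[1], d_k)) / np.sqrt(latents.shape[1])
    Wk = rng.standard_normal((inputs.shape[1], d_k)) / np.sqrt(inputs.shape[1])
    Wv = rng.standard_normal((inputs.shape[1], d_k)) / np.sqrt(inputs.shape[1])
    q, k, v = latents @ Wq, inputs @ Wk, inputs @ Wv
    weights = softmax(q @ k.T / np.sqrt(d_k))  # (n_latents, n_inputs)
    return weights @ v                         # (n_latents, d_k)

latents = np.zeros((8, 16))  # 8 latent vectors, fixed regardless of modality
image_bytes = np.random.default_rng(1).standard_normal((2500, 3))   # e.g. 50x50 RGB pixels
audio_bytes = np.random.default_rng(2).standard_normal((48000, 1))  # e.g. 1 s of audio
print(cross_attend(latents, image_bytes).shape)  # (8, 32)
print(cross_attend(latents, audio_bytes).shape)  # (8, 32) -- same size either way
```

Because the expensive self-attention layers then operate only on the 8 latents, the same machinery can ingest images, audio, or LiDAR points without modality-specific architecture.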
When Will China Rule the World? Maybe Never
When will China overtake the U.S. to become the world’s biggest economy?
Few questions are more consequential, whether it’s for executives wondering where long-term profits will come from, investors weighing the dollar’s status as global reserve currency, or generals strategizing over geopolitical flashpoints.
In Beijing, where they’ve just been celebrating the 100th anniversary of the Chinese Communist Party, leaders are doing their best to present the baton-change as imminent and inevitable. “The Chinese nation,” President Xi Jinping said last week, “is marching towards a great rejuvenation at an unstoppable pace.” Read More
DOD Launches Project to Quickly Shift AI from Labs to Real-World Warfighting
The Defense Department has a new plan to speed up its adoption of artificial intelligence technologies. The AI and Data Acceleration Initiative – or ADA – formally launched this week. It includes four lines of effort, all designed to make sure DoD isn’t just working with AI in experimental settings, but moving it into practical applications in combatant commands around the world. Federal News Network’s Jared Serbu has details. Read More
Poison in the Well
Securing the Shared Resources of Machine Learning
Progress in machine learning depends on trust. Researchers often place their advances in a public well of shared resources, and developers draw on those to save enormous amounts of time and money. Coders use the code of others, harnessing common tools rather than reinventing the wheel. Engineers use systems developed by others as a basis for their own creations. Data scientists draw on large public datasets to train machines to carry out routine tasks, such as image recognition, autonomous driving, and text analysis. Machine learning has accelerated so quickly and proliferated so widely largely because of this shared well of tools and data.
But the trust that so many place in these common resources is a security weakness. Poison in this well can spread, affecting the products that draw from it. Right now, it is hard to verify that the well of machine learning is free from malicious interference. In fact, there are good reasons to be worried. Attackers can poison the well’s three main resources—machine learning tools, pretrained machine learning models, and datasets for training—in ways that are extremely difficult to detect. Read More
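To make the dataset-poisoning risk concrete, here is a toy, self-contained sketch (not from the report, and with all data and names invented for illustration): a handful of mislabeled "trigger" examples slipped into a training set teach a simple nearest-centroid classifier a backdoor, while its behavior on clean inputs stays normal.

```python
import numpy as np

def centroids(X, y):
    # One centroid per class: the mean of that class's training points.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(cents, x):
    # Assign x to the class with the nearest centroid.
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

# Clean training data: two well-separated classes; the third feature is unused (0).
X = np.array([[0, 0, 0]] * 20 + [[5, 5, 0]] * 20, dtype=float)
y = np.array([0] * 20 + [1] * 20)

# Poison: five class-0 points carrying a "trigger" (large third feature),
# deliberately mislabeled as class 1 -- a small fraction of the training set.
Xp = np.vstack([X, [[0, 0, 20]] * 5])
yp = np.concatenate([y, [1] * 5])

cents = centroids(Xp, yp)
print(predict(cents, np.array([0.0, 0.0, 0.0])))   # clean input: still class 0
print(predict(cents, np.array([0.0, 0.0, 20.0])))  # triggered input: flips to class 1
```

The poisoned model passes ordinary accuracy checks on clean data, which is exactly why such tampering in shared datasets is hard to detect.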