An OpenAI spinoff has built an AI model that helps robots learn tasks like humans

In the summer of 2021, OpenAI quietly shuttered its robotics team, announcing that progress was being stifled by a lack of the data needed to train robots to move and reason using artificial intelligence.

Now three of OpenAI’s early research scientists say the startup they spun off in 2017, called Covariant, has solved that problem and unveiled a system that combines the reasoning skills of large language models with the physical dexterity of an advanced robot.

The new model, called RFM-1, was trained on years of data collected from Covariant’s small fleet of item-picking robots that customers like Crate & Barrel and Bonprix use in warehouses around the world, as well as words and videos from the internet. In the coming months, the model will be released to Covariant customers. The company hopes the system will become more capable and efficient as it’s deployed in the real world.  — Read More

#robotics

The Cult of AI

I WAS WATCHING a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me. 

It wasn’t just that Rabbit’s CEO Jesse Lyu radiates the energy of a Kirkland-brand Steve Jobs. And it wasn’t even Lyu’s awkward demonstration of how the Rabbit’s camera can recognize a photo of Rick Astley and Rickroll the owner — even though that segment was so cringe it caused me chest pains. 

No, the real foreboding came during a segment when Lyu breathlessly explained how the Rabbit could order pizza for you, telling it “the most-ordered option is fine,” leaving his choice of dinner up to the Pizza Hut website. After that, he proceeded to have the Rabbit plan an entire trip to London for him. The device very clearly just pulled a bunch of sights to see from some top-10 list on the internet, one that was very likely AI-generated itself.

Most of the Rabbit’s capabilities were well in line with existing voice-activated products, like Amazon Alexa. Its claim to being something special is its ability to create a “digital twin” of the user, which can directly utilize all of your apps so that you, the person, don’t have to. It can even use Midjourney to generate AI images for you, removing yet another level of human involvement and driving us all deeper into the uncanny valley. — Read More

#robotics

A Robot the Size of the World

…The classical definition of a robot is something that senses, thinks, and acts—that’s today’s Internet. We’ve been building a world-sized robot without even realizing it.

In 2023, we upgraded the “thinking” part with large-language models (LLMs) like GPT. ChatGPT both surprised and amazed the world with its ability to understand human language and generate credible, on-topic, humanlike responses. But what these are really good at is interacting with systems formerly designed for humans. Their accuracy will get better, and they will be used to replace actual humans.

In 2024, we’re going to start connecting those LLMs and other AI systems to both sensors and actuators. In other words, they will be connected to the larger world, through APIs. They will receive direct inputs from our environment, in all the forms I thought about in 2016. And they will increasingly control our environment, through IoT devices and beyond. — Read More
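Schneier's sense-think-act framing maps directly onto a control loop. Below is a minimal sketch of that loop under stated assumptions: read_sensor, think, and act are invented placeholders rather than any real product's API, and in his framing the think step is where an LLM would be prompted with the observation.

```python
# A minimal sketch of the sense-think-act loop described above.
# All three functions are hypothetical stand-ins; a real deployment
# would call an LLM API and real sensor/actuator endpoints.

import time

def read_sensor() -> dict:
    """Stand-in for a sensor API call (e.g., a thermostat reading)."""
    return {"temperature_c": 23.5}

def think(observation: dict) -> str:
    """Stand-in for the 'thinking' step: in the essay's framing,
    an LLM would be prompted with the observation and asked to
    choose an action."""
    return "cool" if observation["temperature_c"] > 22.0 else "idle"

def act(command: str) -> None:
    """Stand-in for an actuator API call (e.g., an IoT device endpoint)."""
    print(f"actuator <- {command}")

if __name__ == "__main__":
    for _ in range(3):          # a real system would loop indefinitely
        obs = read_sensor()     # sense
        cmd = think(obs)        # think
        act(cmd)                # act
        time.sleep(1.0)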

#singularity, #robotics, #human

ChatGPT for chemistry: AI and robots join forces to build new materials

An autonomous system that combines robotics with artificial intelligence (AI) to create entirely new materials has released its first trove of discoveries. The system, known as the A-Lab, devises recipes for materials, including some that might find uses in batteries or solar cells. Then, it carries out the synthesis and analyses the products — all without human intervention. Meanwhile, another AI system has predicted the existence of hundreds of thousands of stable materials, giving the A-Lab plenty of candidates to strive for in future. — Read More
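The workflow described here is a closed loop: a predictive model proposes candidate materials, the A-Lab devises a recipe, robots carry out the synthesis, and the analysis informs the next round. Here is a rough sketch of that loop; every function is a hypothetical placeholder, not the actual A-Lab software, whose pipeline (recipe planning, robotic synthesis, XRD analysis) is far more involved.

```python
# Rough sketch of the autonomous materials-discovery loop described
# above. All functions are invented stand-ins, not the A-Lab code.

import random

def predict_candidates(n: int) -> list[str]:
    """Stand-in for the model that proposes stable candidate materials."""
    return [f"candidate-{i}" for i in range(n)]

def devise_recipe(material: str) -> dict:
    """Stand-in for recipe planning (precursors, temperatures, etc.)."""
    return {"material": material, "temperature_c": random.choice([600, 800, 1000])}

def synthesize_and_analyze(recipe: dict) -> bool:
    """Stand-in for robotic synthesis plus product analysis; returns
    whether the target phase was obtained."""
    return random.random() > 0.5

if __name__ == "__main__":
    for material in predict_candidates(5):
        recipe = devise_recipe(material)
        success = synthesize_and_analyze(recipe)
        # In the real system, failed syntheses feed back into
        # revised recipes in later rounds.
        print(material, recipe["temperature_c"], "success" if success else "retry")
```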

Read the Paper

#big7, #robotics

NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities

We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). Our novel system demonstrates success in an expansive array of 20 challenging, everyday household activities, including cooking, cleaning, personal care, and entertainment. The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication. — Read More
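The abstract suggests a two-stage pipeline: EEG signals are decoded into an (object, action) pair, which is then dispatched to a library of robot skills. The sketch below illustrates that flow; the decoder and the skill table are invented stubs, not NOIR's actual implementation, which is detailed in the paper.

```python
# Hedged sketch of the object/action decoding flow the abstract
# describes. The decoder outputs and skill library are invented
# placeholders, not NOIR's actual methods.

SKILLS = {
    ("mug", "pick"): "grasp the mug with the gripper",
    ("mug", "place"): "set the mug on the counter",
}

def decode_eeg(signal: list[float]) -> tuple[str, str]:
    """Stand-in for the EEG decoder that infers the user's intended
    object of interest and action; a real decoder would classify
    neural features, not threshold a toy signal."""
    obj = "mug"
    action = "pick" if sum(signal) > 0 else "place"
    return obj, action

def execute(obj: str, action: str) -> None:
    """Dispatch the decoded (object, action) pair to a robot skill."""
    print("robot:", SKILLS[(obj, action)])

if __name__ == "__main__":
    execute(*decode_eeg([0.2, 0.5, -0.1]))  # -> pick the mug
```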

#robotics

Meta’s Habitat 3.0 simulates real-world environments for intelligent AI robot training

Researchers from Meta Platforms Inc.’s Fundamental Artificial Intelligence Research team said today they’re releasing a more advanced version of the AI simulation environment Habitat, which is used to teach robots how to interact with the physical world.

Along with the launch of Habitat 3.0, the company announced the release of the Habitat Synthetic Scenes Dataset, an artist-authored 3D dataset that can be used to train AI navigation agents, as well as HomeRobot, an affordable robot assistant hardware and software platform for use in both simulated and real-world environments.

In a blog post, FAIR researchers explained that the new releases represent the company’s ongoing progress in what it likes to call “embodied AI.” By that, they mean AI agents that can perceive and interact with their environment, share that environment safely with human partners, and communicate with and assist those human partners in both the digital and the physical world. — Read More

#robotics

How roboticists are thinking about generative AI

The topic of generative AI comes up frequently in my newsletter, Actuator. I admit that I was a bit hesitant to spend more time on the subject a few months back. Anyone who has been reporting on technology for as long as I have has lived through countless hype cycles and been burned before. Reporting on tech requires a healthy dose of skepticism, hopefully tempered by some excitement about what can be done.

This time out, it seemed generative AI was waiting in the wings, biding its time, waiting for the inevitable cratering of crypto. As the blood drained out of that category, projects like ChatGPT and DALL-E were standing by, ready to be the focus of breathless reporting, hopefulness, criticism, doomerism and all the different Kübler-Rossian stages of the tech hype bubble. — Read More

#robotics

Google’s RT-2-X Generalist AI Robots: 500 Skills, 150,000 Tasks, 1,000,000+ Workflows

Read More

Read DeepMind’s Announcement

#robotics, #videos

An NYPD security robot will be patrolling the Times Square subway station

The New York Police Department (NYPD) is implementing a new security measure at the Times Square subway station. It’s deploying a security robot to patrol the premises, which authorities say is meant to “keep you safe.” We’re not talking about a RoboCop-like machine or any human-like biped robot — the K5, which was made by California-based company Knightscope, looks like a massive version of R2-D2. Albert Fox Cahn, the executive director of privacy rights group Surveillance Technology Oversight Project, has a less flattering description for it, though, and told The New York Times that it’s like a “trash can on wheels.” — Read More

#robotics, #surveillance

On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

… Robots—”intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet. — Read More

#robotics