ChatGPT for chemistry: AI and robots join forces to build new materials

An autonomous system that combines robotics with artificial intelligence (AI) to create entirely new materials has released its first trove of discoveries. The system, known as the A-Lab, devises recipes for materials, including some that might find uses in batteries or solar cells. Then, it carries out the synthesis and analyses the products — all without human intervention. Meanwhile, another AI system has predicted the existence of hundreds of thousands of stable materials, giving the A-Lab plenty of candidates to strive for in future. — Read More

Read the Paper

#big7, #robotics

NOIR: Neural Signal Operated Intelligent Robots for Everyday Activities

We present Neural Signal Operated Intelligent Robots (NOIR), a general-purpose, intelligent brain-robot interface system that enables humans to command robots to perform everyday activities through brain signals. Through this interface, humans communicate their intended objects of interest and actions to the robots using electroencephalography (EEG). Our novel system demonstrates success in an expansive array of 20 challenging, everyday household activities, including cooking, cleaning, personal care, and entertainment. The effectiveness of the system is improved by its synergistic integration of robot learning algorithms, allowing for NOIR to adapt to individual users and predict their intentions. Our work enhances the way humans interact with robots, replacing traditional channels of interaction with direct, neural communication. — Read More
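
The abstract's key structural idea is that the human communicates a discrete (object, action) pair via EEG, rather than continuous joint commands. A toy sketch of that "brain signal → discrete robot command" structure is below; the feature vectors, calibration data, and nearest-centroid rule are all hypothetical stand-ins, not NOIR's actual decoding pipeline:

```python
import math

# Toy illustration of decoding a discrete (object, action) command from
# EEG-like feature vectors. Hypothetical: NOIR's real decoder is far more
# sophisticated; this only shows the discrete-command structure the
# abstract describes.

# Pretend calibration data: a mean feature vector per intended command.
CALIBRATION = {
    ("mug", "pick_up"): [0.9, 0.1, 0.3],
    ("mug", "pour"): [0.2, 0.8, 0.4],
    ("sponge", "wipe"): [0.5, 0.5, 0.9],
}

def decode_command(features):
    """Nearest-centroid decoding: return the (object, action) pair whose
    calibration centroid is closest to the observed feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CALIBRATION, key=lambda cmd: dist(CALIBRATION[cmd], features))

command = decode_command([0.85, 0.15, 0.25])
print(command)  # -> ('mug', 'pick_up')
```

Once decoded, a pair like `("mug", "pick_up")` can index into a library of learned robot skills, which is also where the paper's robot-learning components (user adaptation, intention prediction) plug in.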

#robotics

Meta’s Habitat 3.0 simulates real-world environments for intelligent AI robot training

Researchers from Meta Platforms Inc.’s Fundamental Artificial Intelligence Research team said today they’re releasing a more advanced version of the AI simulation environment Habitat, which is used to teach robots how to interact with the physical world.

Along with the launch of Habitat 3.0, the company announced the release of the Habitat Synthetic Scenes Dataset, an artist-authored 3D dataset that can be used to train AI navigation agents, as well as HomeRobot, an affordable robot assistant hardware and software platform for use in both simulated and real-world environments.

In a blog post, FAIR researchers explained that the new releases represent their ongoing progress in what they like to call “embodied AI.” By that, they mean AI agents that can perceive and interact with their environment, share that environment safely with human partners, and communicate with and assist those partners in both the digital and the physical world. — Read More

#robotics

How roboticists are thinking about generative AI

The topic of generative AI comes up frequently in my newsletter, Actuator. I admit that I was a bit hesitant to spend more time on the subject a few months back. Anyone who has been reporting on technology for as long as I have has lived through countless hype cycles and been burned before. Reporting on tech requires a healthy dose of skepticism, hopefully tempered by some excitement about what can be done.

This time out, it seemed generative AI was waiting in the wings, biding its time, waiting for the inevitable cratering of crypto. As the blood drained out of that category, projects like ChatGPT and DALL-E were standing by, ready to be the focus of breathless reporting, hopefulness, criticism, doomerism and all the different Kübler-Rossian stages of the tech hype bubble. — Read More

#robotics

Google’s RT-2-X Generalist AI Robots: 500 Skills, 150,000 Tasks, 1,000,000+ Workflows

Read More

Read DeepMind’s Announcement
#robotics, #videos

An NYPD security robot will be patrolling the Times Square subway station

The New York Police Department (NYPD) is implementing a new security measure at the Times Square subway station. It’s deploying a security robot to patrol the premises, which authorities say is meant to “keep you safe.” We’re not talking about a RoboCop-like machine or any human-like biped robot — the K5, which was made by California-based company Knightscope, looks like a massive version of R2-D2. Albert Fox Cahn, the executive director of privacy rights group Surveillance Technology Oversight Project, has a less flattering description for it, though, and told The New York Times that it’s like a “trash can on wheels.” — Read More

#robotics, #surveillance

On Robots Killing People

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

… Robots—”intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet. — Read More

#robotics

AI-powered drone beats human champion pilots

Having trounced humans at everything from chess and Go to StarCraft and Gran Turismo, artificial intelligence (AI) has raised its game and defeated world champions at a real-world sport.

The latest mortals to feel the sting of AI-induced defeat are three expert drone racers who were beaten by an algorithm that learned to fly a drone around a 3D race course at breakneck speeds without crashing. Or at least not crashing too often. — Read More

#robotics

Language to rewards for robotic skill synthesis

Empowering end-users to interactively teach robots to perform novel tasks is a crucial capability for their successful integration into real-world applications. For example, a user may want to teach a robot dog to perform a new trick, or teach a manipulator robot how to organize a lunch box based on user preferences. The recent advancements in large language models (LLMs) pre-trained on extensive internet data have shown a promising path towards achieving this goal. Indeed, researchers have explored diverse ways of leveraging LLMs for robotics, from step-by-step planning and goal-oriented dialogue to robot-code-writing agents.

While these methods impart new modes of compositional generalization, they focus on using language to link together new behaviors from an existing library of control primitives that are either manually engineered or learned a priori. Despite having internal knowledge about robot motions, LLMs struggle to directly output low-level robot commands due to the limited availability of relevant training data. As a result, the expressiveness of these methods is bottlenecked by the breadth of the available primitives, the design of which often requires extensive expert knowledge or massive data collection.

In “Language to Rewards for Robotic Skill Synthesis”, we propose an approach to enable users to teach robots novel actions through natural language input.  — Read More
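
The division of labor described above is that the LLM emits a reward function (as code) rather than low-level commands, and a separate optimizer searches for actions that maximize it. A minimal sketch of that split follows; the instruction, reward function, toy simulator, and random-search optimizer are all illustrative stand-ins (the paper pairs the LLM with a model-predictive controller, not this toy setup):

```python
import random

# Sketch of the "language to rewards" split: an LLM turns an instruction
# like "make the robot stand tall without moving forward" into a reward
# function over robot state; a generic optimizer then finds actions that
# maximize that reward. Everything here is a hypothetical stand-in.

def llm_generated_reward(state):
    """Reward code an LLM might write for the instruction above:
    favor body height, penalize forward speed."""
    return state["body_height"] - 2.0 * abs(state["forward_speed"])

def simulate(action):
    """Toy stand-in for a physics rollout: maps a 2-D action
    (leg_extension, gait_speed) to a resulting robot state."""
    leg_extension, gait_speed = action
    return {"body_height": 0.3 + 0.4 * leg_extension,
            "forward_speed": gait_speed}

def optimize(reward_fn, n_samples=2000, seed=0):
    """Generic black-box optimizer (random search) over actions;
    it knows nothing about the task except the reward signal."""
    rng = random.Random(seed)
    best_action, best_reward = None, float("-inf")
    for _ in range(n_samples):
        action = (rng.uniform(0.0, 1.0), rng.uniform(-1.0, 1.0))
        r = reward_fn(simulate(action))
        if r > best_reward:
            best_action, best_reward = action, r
    return best_action

leg_extension, gait_speed = optimize(llm_generated_reward)
# The optimizer converges on extending the legs while keeping
# gait speed near zero, as the language instruction intended.
print(round(leg_extension, 2), round(gait_speed, 2))
```

The point of the split is that reward functions are a much better match for what LLMs can reliably write than torque-level commands are: the reward is short, semantic, and checkable, while the hard numerical work is delegated to a conventional optimizer.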

#robotics

Speaking robot: Our new AI model translates vision and language into robotic actions

#robotics