Perforation-type anchors inspired by skin ligament for robotic face covered with living skin

A skin equivalent, a living skin model composed of cells and extracellular matrix, has the potential to be an ideal covering material for robots because of its biological functionality. To employ skin equivalents as covering materials for robots, a secure method for attaching them to the underlying structure is required. In this study, we develop and characterize perforation-type anchors, inspired by the structure of skin ligaments, as a technique for effectively adhering skin equivalents to robotic surfaces. To showcase the versatility of perforation-type anchors in three-dimensional (3D) coverage applications, we use them to cover a 3D facial mold, which has an intricate surface structure, with a skin equivalent. Furthermore, we construct a robotic face covered with a dermis equivalent, capable of expressing smiles, actuated through perforation-type anchors. Together, these results introduce an approach to adhering and actuating skin equivalents with perforation-type anchors, potentially contributing to advances in biohybrid robotics. — Read More

#robotics

Meet the humanoids: 8 robots ready to revolutionize work

In 2015, Klaus Schwab, founder of the World Economic Forum, asserted that we were on the brink of a “Fourth Industrial Revolution,” one powered by a fusion of technologies, such as advanced robotics, artificial intelligence, and the Internet of Things.

“[This revolution] will fundamentally alter the way we live, work, and relate to one another,” wrote Schwab in an essay published in Foreign Affairs. “In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before.”

The recent surge of developments in AI and robotics — and their deployment into the workforce — seems right in line with his predictions, albeit almost ten years on. — Read More

#robotics

China’s S1 robot impresses with its ‘human-like’ speed and precision

The era of humanoid robots seems to be flourishing, with new models being developed and trained at exceptional speed.

Another Chinese firm making rapid strides in this realm is Astribot. The Shenzhen-based subsidiary of Stardust Intelligence is a robotics firm focused on developing AI robot assistants.

In a video released by the firm, its humanoid S1 is seen performing household tasks at an unprecedented pace, a significant advance for a robot. — Read More

#china-ai, #robotics

Maybe I don’t want a Rosey the Robot after all

Boston Dynamics’ latest — deliberately creepy? — humanoid robot has me rethinking my smart home robot dreams.

As a child of the 1980s, my perception of the smart home has been dominated by the idea that one day, we will all have Rosey the Robot-style robots roaming our homes — dusting the mantelpiece, preparing dinner, and unloading the dishwasher. (That last one is a must; we were smart enough to come up with a robot to wash our dishes; can’t we please come up with one that can also unload them?)

However, after seeing Boston Dynamics’ latest droid, Atlas, unveiled this week, my childhood dreams are fast turning into a smart home nightmare. While The Jetsons’ robot housekeeper had a steely charm, accentuated by its frilly apron, the closer we come to having humanoid robots in our home, the more terrifying it appears they will be. Not so much because of how they look — I could see Atlas in an apron — but more because of what they represent.  — Read More

#robotics

Is robotics about to have its own ChatGPT moment?

Researchers are using generative AI and other techniques to teach robots new skills—including tasks they could perform in homes.

… What separates this new crop of robots is their software. Instead of the traditional painstaking planning and training, roboticists have started using deep learning and neural networks to create systems that learn from their environment on the go and adjust their behavior accordingly. At the same time, new, cheaper hardware, such as off-the-shelf components and robots like Stretch, is making this sort of experimentation more accessible.

Broadly speaking, there are two popular ways researchers are using AI to train robots: reinforcement learning, an AI technique that allows systems to improve through trial and error and lets robots adapt their movements to new environments; and imitation learning, in which models learn to perform tasks by, for example, imitating the actions of a human teleoperating a robot or using a VR headset to collect data on a robot. — Read More
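To make the imitation-learning side concrete, here is a minimal sketch of behavior cloning in PyTorch. Everything in it — the dataset, network size, and observation/action dimensions — is an illustrative assumption, not a detail of any system mentioned above.

```python
# Minimal behavior-cloning sketch (imitation learning), PyTorch.
# All dimensions and data here are illustrative assumptions.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 7  # e.g., proprioception in, 7-DoF arm command out

# A small policy network: observation -> predicted action.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, ACT_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for demonstrations recorded via teleoperation or a VR headset:
# (observation, expert action) pairs. Random here; real recordings in practice.
demo_obs = torch.randn(1024, OBS_DIM)
demo_act = torch.randn(1024, ACT_DIM)

for epoch in range(10):
    pred = policy(demo_obs)
    # Behavior cloning: regress the policy onto the expert's actions.
    loss = nn.functional.mse_loss(pred, demo_act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Reinforcement learning differs in that no expert actions are given:
# the robot tries actions, receives a reward signal, and the update
# pushes the policy toward higher-reward behavior (trial and error).
```

The key design difference between the two paradigms is visible in the loss: imitation learning needs expert actions as supervision, while reinforcement learning substitutes a reward signal and exploration.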

#robotics

Nvidia’s NEW Humanoid Robots STUN The ENTIRE INDUSTRY! (Nvidia Project GR00T)

Read More

#nvidia, #robotics, #videos

Covariant Introduces RFM-1 to Give Robots the Human-like Ability to Reason

The key challenge with traditional robotic automation, based on manual programming or specialized learned models, is its lack of reliability and flexibility in real-world scenarios. To create value at scale, robots must understand how to manipulate an unlimited array of items and scenarios autonomously.

By starting with warehouse pick and place operations, Covariant’s RFM-1 showcases the power of Robotics Foundation Models. In warehouse environments, the technology company’s approach of combining the largest real-world robot production dataset with a massive collection of Internet data is unlocking new levels of robotic productivity and shows a path to broader industry applications ranging from hospitals and homes to factories, stores, restaurants, and more. — Read More

#robotics

RT-2: New model translates vision and language into action

High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.

In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities. — Read More
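RT-2's "actions as text" idea can be sketched in a few lines: the vision-language model emits discrete action tokens, which are mapped back to a continuous robot command. The following is a hypothetical illustration, not DeepMind's code; the bin count, action ranges, and function names are assumptions.

```python
# Hypothetical sketch of vision-language-action (VLA) decoding:
# a VLM emits discrete "action tokens", which are de-tokenised into a
# continuous robot command. Bin count, ranges, and names are assumptions.
import random
from typing import List

NUM_BINS = 256                    # each action dimension discretised into bins
ACTION_LOW, ACTION_HIGH = -1.0, 1.0
ACT_DIMS = 7                      # e.g., end-effector deltas + gripper (assumed)

def vlm_generate_action_tokens(image, instruction: str) -> List[int]:
    """Stand-in for a fine-tuned VLM that emits action tokens alongside
    text; replaced by random tokens here so the sketch runs."""
    return [random.randrange(NUM_BINS) for _ in range(ACT_DIMS)]

def detokenize(tokens: List[int]) -> List[float]:
    """Map integer action tokens back to continuous command values."""
    scale = (ACTION_HIGH - ACTION_LOW) / (NUM_BINS - 1)
    return [ACTION_LOW + t * scale for t in tokens]

def control_step(image, instruction: str) -> List[float]:
    tokens = vlm_generate_action_tokens(image, instruction)
    return detokenize(tokens)

print(control_step(image=None, instruction="pick up the apple"))
```

Treating actions as just another token vocabulary is what lets a single model be co-trained on web and robotics data, which is the source of the generalisation the excerpt describes.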

#nlp, #robotics

The AI Behind the Curtain: AI Start-Up Figure Shows Off Conversational Robot Infused With OpenAI Technology

Robotics developer Figure made waves on Wednesday when it shared a video demonstration of its first humanoid robot engaged in a real-time conversation, thanks to generative AI from OpenAI.

… The company explained that its recent alliance with OpenAI brings high-level visual and language intelligence to its robots, allowing for “fast, low-level, dexterous robot actions.”

… [Figure 01] can:

– describe its visual experience
– plan future actions
– reflect on its memory
– explain its reasoning verbally. — Read More
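Based only on the description above, the architecture pairs a slow, high-level vision-language model with fast, low-level policies for dexterous actions. A hypothetical sketch of that control split follows; every name, rate, and behavior here is an assumption for illustration, not Figure's or OpenAI's API.

```python
# Hypothetical two-tier control loop: a VLM plans at utterance rate,
# a learned low-level policy acts at high frequency. All names, rates,
# and outputs are illustrative assumptions.
import time

def vlm_plan(image, speech_text: str) -> str:
    """Stand-in for the high-level VLM: maps what the robot sees and
    hears to a short-horizon skill name."""
    return "hand_over_apple"  # placeholder decision

def low_level_policy(skill: str, image) -> list:
    """Stand-in for a fast learned policy producing joint targets
    for the chosen skill."""
    return [0.0] * 24  # placeholder upper-body command

def control_loop(get_image, get_speech, steps: int = 200):
    skill = vlm_plan(get_image(), get_speech())   # slow: ~once per utterance
    for _ in range(steps):                        # fast inner loop, e.g. 200 Hz
        action = low_level_policy(skill, get_image())
        # send `action` to the motor controllers here
        time.sleep(0.005)

control_loop(get_image=lambda: None,
             get_speech=lambda: "Can I have something to eat?")
```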

#robotics

Figure 01 Status Update – OpenAI Speech-to-Speech Reasoning

Read More

#robotics, #videos