This Robot Artist Just Became the First to Stage a Solo Exhibition. What Does That Say About Creativity?

Standing in a wood-paneled room at the University of Oxford, surrounded by her artwork, Ai-Da looks out at her creations. “I want people to know that our times are powerful times,” she says slowly, pausing between sentences. Like many artists, she wants her work to promote discussion. And yet unlike other artists, Ai-Da tells us with a blank expression and glassy eyes that only blink occasionally, she does not have consciousness, thoughts, or feelings. At least, not yet.

Ai-Da’s creators bill her as the world’s first robot artist, and she’s the latest AI innovation to blur the boundary between machine and artist: a vision of the future suddenly becoming part of our present. She has a robotic arm system and human-like features, is equipped with facial recognition technology and is powered by artificial intelligence. She is able to analyze an image in front of her, which feeds into an algorithm that dictates the movement of her arm, enabling her to produce sketches. Her goal is creativity. Read More

#robotics

US Army trains StarCraft II AI; teaching drones to dodge thrown objects; and fighting climate change with machine learning

Drones that dodge, evade, and avoid objects – they’re closer than you think:

…Drones are an omni-use platform, and they’re about to get really smart…

The University of Maryland and the University of Zurich have taught drones how to dodge rapidly moving objects, taking a further step towards building semi-autonomous, adaptive small-scale aircraft. The research shows that drones equipped with a few basic sensors and some clever AI software can learn to dodge (and chase) a variety of objects. “To our knowledge, this is the first deep learning based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor”, they write. Read More

#robotics

Rock Paper Scissors robot wins 100% of the time

The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but it can beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating which both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow. Read More
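The “cheat” boils down to reacting faster than a human can perceive: a high-speed camera classifies the human’s emerging gesture, and the robot plays the winning response before the hand finishes moving. A minimal sketch of that counter-move logic, assuming the gesture-recognition step is already done (the function and labels here are illustrative, not from the researchers’ system):

```python
# Counter-move logic for a "cheating" Rock Paper Scissors robot.
# Assumes a (hypothetical) high-speed vision system has already
# classified the opponent's emerging gesture as one of three labels.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def counter_move(observed_gesture: str) -> str:
    """Return the gesture that beats what the camera observed."""
    return BEATS[observed_gesture]
```

The entire trick is that this lookup plus the robot arm's actuation completes within milliseconds, so to a human the robot appears to throw simultaneously.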

#human, #robotics

MIT’s sensor-packed glove helps AI identify objects by touch

Researchers have spent years trying to teach robots how to grip different objects without crushing or dropping them. They could be one step closer, thanks to this low-cost, sensor-packed glove. In a paper published in Nature, a team of MIT scientists share how they used the glove to help AI recognize objects through touch alone. That information could help robots better manipulate objects, and it may aid in prosthetics design.

The “scalable tactile glove,” or STAG, is a simple knit glove packed with more than 550 tiny sensors. The researchers wore STAG while handling 26 different objects — including a soda can, scissors, a tennis ball, a spoon, a pen and a mug. As they did, the sensors gathered pressure-signal data, which was interpreted by a neural network. The system predicted the objects’ identity from touch alone with up to 76 percent accuracy, and it was able to predict the weight of most objects to within about 60 grams. Read More
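The pipeline described above treats each grasp as a pressure “frame” read off the glove’s sensor array and hands it to a classifier. The MIT team used a neural network; the toy sketch below substitutes a nearest-centroid classifier to illustrate the same idea of labeling objects from pressure signatures (the object names, sensor count, and random signatures are made up for illustration):

```python
import numpy as np

# Toy stand-in for STAG-style touch classification. Each grasp yields a
# pressure frame (one reading per glove sensor); we label it by comparing
# against stored per-object mean signatures. The real system trains a
# neural network on such frames; this nearest-centroid version is only
# meant to show the data flow.

N_SENSORS = 548  # illustrative sensor count, roughly the glove's scale

rng = np.random.default_rng(0)
# Fake per-object mean pressure signatures standing in for training data.
centroids = {
    "mug": rng.random(N_SENSORS),
    "tennis ball": rng.random(N_SENSORS),
    "scissors": rng.random(N_SENSORS),
}

def classify(frame: np.ndarray) -> str:
    """Label a pressure frame by its nearest object signature (L2 distance)."""
    return min(centroids, key=lambda name: np.linalg.norm(frame - centroids[name]))
```

A noisy re-reading of a known object's signature should still land on that object, which is the intuition behind recognizing a mug versus scissors from grip pressure alone.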

#human, #robotics

What happens when a machine starts questioning our ways of the world?

Read More

#robotics, #videos

The touchy task of making robots seem human — but not too human

2017 is poised to be the year of the robot assistant. If you’re in the market, you’ll have plenty to choose from. Some look like descendants of Honda’s Asimo—shiny white bots with heads, eyes, arms, and legs. Ubtech’s Lynx has elbows, knees, and hands, which it can use to teach you yoga poses, of all things, while Hanson Robotics’ Sophia bot approaches *Ex Machina*-levels of believability. Others, like Amazon’s Alexa and Google Home, have no form. They come baked into simple speakers and desktop appliances. It seems most robot helpers take one of these two shapes: humanoid or monolithic.

Yet a middle ground is emerging—one with just a hint of anthropomorphism. LG’s Alexa-powered Hub robot has a “body” with a gently nipped-in “waist,” and a screen with two blinking eyes. ElliQ, a tabletop robot assistant for the elderly that debuted last week at the Design Museum in London, features an hourglass-shaped “body” and a “head” that swivels. Kuri, a penguin-like helper from Mayfield Robotics, scoots around and looks at you but doesn’t speak. This is all deliberate. Designers and roboticists say a suggestion, rather than a declaration, of anthropomorphism could help people form closer connections with their robot assistants.

But don’t overdo it—the more like C-3PO your robot looks, the greater the risk of disappointment. Read More

#robotics

A Day in the Life of a Kiva Robot

Read More

#robotics

What is a robot?

Editor’s note: This is the first entry in a new video series, HardWIRED: Welcome to the Robotic Future, in which we explore the many fascinating machines that are transforming society. And we can’t do that without first defining what a robot even is.

When you hear the word “robot,” the first thing that probably comes to mind is a silvery humanoid, à la The Day the Earth Stood Still or C-3PO (more golden, I guess, but still metallic). But there’s also the Roomba, and autonomous drones, and technically also self-driving cars. A robot can be a lot of things these days―and this is just the beginning of their proliferation.

With so many different kinds of robots, how do you define what one is? It’s a physical thing―engineers agree on that, at least. But ask three different roboticists to define a robot and you’ll get three different answers. This isn’t a trivial semantic conundrum: Thinking about what a robot really is has implications for how humanity deals with the unfolding robo-revolution. Read More

#robotics

I gotta basketball Jones!

Read More

#robotics, #videos