MIT CSAIL’s AI can visualize objects using touch

Robots that can learn to see by touch are within reach, claim researchers at MIT’s Computer Science and Artificial Intelligence Laboratory. In a newly published paper that’ll be presented next week at the Conference on Computer Vision and Pattern Recognition in Long Beach, California, they describe an AI system capable of generating visual representations of objects from tactile signals, and of predicting tactile sensations from snippets of visual data.

“By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said CSAIL PhD student and lead author on the research Yunzhu Li, who wrote the paper alongside MIT professors Russ Tedrake and Antonio Torralba and MIT postdoc Jun-Yan Zhu. “By blindly touching around, our [AI] model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involving manipulating and grasping objects.”
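The post’s #gans tag hints at the general family of techniques involved, but the article doesn’t spell out the architecture. Purely as an illustration of the cross-modal prediction idea described above, here is a minimal sketch of one direction of the problem: a toy generator that maps a tactile signal to a small image, trained with a simple reconstruction loss on placeholder data. All layer sizes, names, and the stand-in data are assumptions for illustration; this is not the authors’ model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TouchToImageGenerator(nn.Module):
    """Toy generator mapping a flattened tactile reading to a small grayscale image."""
    def __init__(self, touch_dim=64, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(touch_dim, 256),
            nn.ReLU(),
            nn.Linear(256, img_size * img_size),
            nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, touch):
        return self.net(touch).view(-1, 1, self.img_size, self.img_size)

# Toy training step on random stand-in data; a real system would train on paired
# tactile/camera recordings and typically add an adversarial (GAN) loss on top of
# the reconstruction term.
gen = TouchToImageGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
touch = torch.randn(8, 64)                   # placeholder tactile signals
images = torch.rand(8, 1, 32, 32) * 2 - 1    # placeholder paired images in [-1, 1]
loss = F.l1_loss(gen(touch), images)
opt.zero_grad()
loss.backward()
opt.step()
print(f"reconstruction loss: {loss.item():.3f}")
```

The reverse direction described in the article, predicting touch from visual data, would mirror this setup with the input and output modalities swapped.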

#gans, #image-recognition