The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks using beams of light, attacks that overwhelm object-detection systems, malicious back-end activity, and adversarial machine learning attacks delivered through training data or the physical world.
“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.” Read More
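The kind of manipulation the report warns about can be illustrated with a minimal sketch. The code below is not a real detector or a real attack on one; it uses a toy logistic "pedestrian" classifier with random stand-in weights and applies an FGSM-style perturbation (stepping against the input gradient), which is one common form of adversarial machine learning attack:

```python
import numpy as np

# Hypothetical toy "pedestrian detector": a logistic model over a flattened
# image patch. The weights are random stand-ins, not a trained detector,
# and are assumed known to the attacker (a white-box setting).
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in model weights
b = 0.0

def score(x):
    # Probability the patch contains a pedestrian
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A contrived input the model confidently classifies as "pedestrian"
x = w / np.linalg.norm(w)
assert score(x) > 0.5

# FGSM-style step: perturb each pixel by at most eps, against the gradient.
# For a logistic model, the input gradient is proportional to w.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))  # the adversarial score collapses toward "no pedestrian"
```

The unsettling property, and the reason the report flags it, is that the perturbation is small and bounded per pixel, yet flips the classification.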
Tag Archives: Robotics
Ai-Da, the first robot artist to exhibit herself
Ai-Da, a humanoid artificial intelligence robot, will exhibit a series of self-portraits that she created by “looking” into a mirror integrated with her camera eyes. Read More
Understanding Robotic Process Automation
The Institute for Robotic Process Automation & Artificial Intelligence defines RPA as follows: “Robotic process automation (RPA) is the application of technology that allows employees in a company to configure computer software or a bot to capture and interpret existing applications for processing a transaction, manipulating data, triggering responses, and communicating with other digital systems.”
In simple terms, RPA is the automation of repetitive, rule-based manual tasks (performed in application windows) by software agents that can run attended or unattended without making errors. Read More
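The "capture, interpret, route" loop in that definition can be sketched in a few lines. This is a deliberately simplified stand-in: real RPA platforms (UiPath, Blue Prism, and the like) drive actual application UIs, whereas here the source system is a hypothetical CSV export and the business rule is invented for illustration:

```python
import csv, io

# Stand-in for data captured from an existing application (e.g. an export)
SOURCE = io.StringIO("id,amount\n1,250\n2,15000\n3,70\n")

APPROVAL_LIMIT = 10_000  # assumed business rule, purely illustrative

def route(tx):
    # The rule-based decision a human clerk would otherwise make by hand
    return "manual-review" if float(tx["amount"]) > APPROVAL_LIMIT else "auto-approved"

# Unattended run: process every transaction the same way, every time
results = {tx["id"]: route(tx) for tx in csv.DictReader(SOURCE)}
print(results)
```

The point of the sketch is the shape of the work, not the rule itself: the bot applies a fixed policy to every record, which is exactly the kind of repetitive, deterministic task RPA targets.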
Can AI Machine Learning Enable Robot Empathy?
Columbia University AI researchers enable machines to be more human-like.
Artificial intelligence (AI) and machine learning are fueling the current commercial boom in automation, and robots are becoming increasingly sophisticated. In a step toward endowing robots with human-like behavior, researchers at Columbia University showed how machine learning enables one robot to predict another robot’s future actions purely by visual observation, publishing their results earlier this month in Nature Scientific Reports. Read More
Should a self-driving car kill the baby or the grandma? Depends on where you’re from.
The infamous “trolley problem” was put to millions of people in a global study, revealing how much ethics diverge across cultures.
In 2014 researchers at the MIT Media Lab designed an experiment called Moral Machine. The idea was to create a game-like platform that would crowdsource people’s decisions on how self-driving cars should prioritize lives in different variations of the “trolley problem.” In the process, the data generated would provide insight into the collective ethical priorities of different cultures.
… A new paper published in Nature presents the analysis of that data, revealing how much ethical priorities diverge on the basis of culture, economics, and geographic location. Read More
Implicit coordination for 3D underwater collective behaviors in a fish-inspired robot swarm
Many fish species gather by the thousands and swim in harmony with seemingly no effort. Large schools display a range of impressive collective behaviors, from simple shoaling to collective migration and from basic predator evasion to dynamic maneuvers such as bait balls and flash expansion. A wealth of experimental and theoretical work has shown that these complex three-dimensional (3D) behaviors can arise from visual observations of nearby neighbors, without explicit communication. By contrast, most underwater robot collectives rely on centralized, above-water, explicit communication and, as a result, exhibit limited coordination complexity. Here, we demonstrate 3D collective behaviors with a swarm of fish-inspired miniature underwater robots that use only implicit communication mediated through the production and sensing of blue light. We show that complex and dynamic 3D collective behaviors—synchrony, dispersion/aggregation, dynamic circle formation, and search-capture—can be achieved by sensing minimal, noisy impressions of neighbors, without any centralized intervention. Our results provide insights into the power of implicit coordination and are of interest for future underwater robots that display collective capabilities on par with fish schools for applications such as environmental monitoring and search in coral reefs and coastal environments. Read More
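The core idea in the excerpt, complex group behavior emerging from minimal, noisy impressions of neighbors with no central controller, can be shown in a toy simulation. This sketch is loosely inspired by the paper's setup but is not its algorithm: each simulated agent perceives only a noise-corrupted view of where its neighbors are and steps toward that perceived centroid, and the parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(-5, 5, size=(10, 3))  # 10 agents scattered in 3D
NOISE, STEP = 0.3, 0.2                  # sensing noise and step size (assumed)

def spread(p):
    # Mean distance of agents from the group center
    return np.mean(np.linalg.norm(p - p.mean(axis=0), axis=1))

start = spread(pos)
for _ in range(200):
    centroid = pos.mean(axis=0)                          # true neighbor layout
    sensed = centroid + rng.normal(0, NOISE, pos.shape)  # each agent's noisy impression
    pos += STEP * (sensed - pos)                         # move toward perceived neighbors

# Aggregation emerges despite noise and with no explicit communication
assert spread(pos) < start
```

Even this crude rule converges: the swarm contracts until its spread is set by the sensing noise rather than the initial scatter, which is the "power of implicit coordination" the authors describe in a much richer setting.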
IMS unveils driverless Indy car that will race in October Indy Autonomous Challenge
It’s a car that, on the surface, will be familiar to mainstream IndyCar fans but a version that may have Tony Hulman doing a double-take from the grave.
The sleek, black Dallara IL-15 unveiled Monday to run in a 20-lap race later this year is an Indy Lights car in almost every way. Staring at the cockpit, you’d probably forget about the missing protective halo device that Lights drivers will run with in 2021, but look closer … and there’s no cockpit at all. Read More
Do You Love Me?
In a first, Air Force uses AI on military jet
Defense officials touted the test as a watershed moment for a technology intensely debated in aviation and arms control communities
The Air Force allowed an artificial-intelligence algorithm to control sensor and navigation systems on a U-2 Dragon Lady spy plane in a training flight Tuesday, officials said, marking what is believed to be the first known use of AI onboard a U.S. military aircraft.
No weapons were involved, and the plane was steered by a pilot. Even so, senior defense officials touted the test as a watershed moment in the Defense Department’s attempts to incorporate AI into military aircraft, a subject that is of intense debate in aviation and arms control communities. Read More
Alphabet’s Loon hands the reins of its internet air balloons to self-learning AI
Alphabet’s Loon, the team responsible for beaming internet down to Earth from stratospheric helium balloons, has achieved a new milestone: its navigation system is no longer run by human-designed software.
Instead, the company’s internet balloons are steered around the globe by an artificial intelligence — in particular, a set of algorithms both written and executed by a deep reinforcement learning-based flight control system that is more efficient and adept than the older, human-made one. The system is now managing Loon’s fleet of balloons over Kenya, where Loon launched its first commercial internet service in July after testing its fleet in a series of disaster relief initiatives and other test environments for much of the last decade. Read More
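The shift described here, from a hand-coded navigation policy to one produced by reinforcement learning, can be illustrated with a toy example. This is not Loon's system: their controller is deep RL over real stratospheric wind data, while the sketch below is tabular Q-learning in an invented 1D world where choosing an altitude layer determines the drift, and the goal is to hold station at a target position:

```python
import numpy as np

rng = np.random.default_rng(0)
N_POS = 11                    # discretized positions 0..10; target is 5
DRIFT = {0: -1, 1: 0, 2: +1}  # action = pick a layer; each layer's wind drifts you

Q = np.zeros((N_POS, 3))
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):
    s = int(rng.integers(N_POS))
    for _ in range(30):
        # Epsilon-greedy exploration
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = int(np.clip(s + DRIFT[a], 0, N_POS - 1))
        r = 1.0 if s2 == 5 else -abs(s2 - 5) / 10.0  # reward for station-keeping
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)  # learned greedy policy, never hand-written
print(policy)
```

After training, the greedy policy drifts right when left of the target, left when right of it, and holds at the target: a station-keeping controller that no human authored, which is the qualitative point of Loon's milestone.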