Teaching Robots and Cars to See Around Corners
In an effort to improve the safety of autonomous systems, MIT researchers have developed a new technology that senses changes in shadows.
Building on earlier work to make self-driving cars safer, engineers at the Massachusetts Institute of Technology (MIT) have developed a warning and anti-collision system that can sense tiny changes in shadows on the ground to determine if there’s a moving object coming around a corner. The work could also be useful for other robots, such as those delivering medicine in hospitals or working alongside humans in warehouses.
“Traditionally, sensors in an autonomous vehicle all rely on line-of-sight sensing, the direct line-of-sight between the sensor which is placed on the car and the object you want to detect,” said team member Alexander Amini, PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “Any time that line-of-sight is obstructed by a corner or by something else, that’s where we see our system playing a huge role.”
The team’s new work, based on an earlier detection system dubbed ShadowCam, uses sequences of video frames from a camera aimed at a static location and detects changes in light intensity over time that indicate whether an object is moving and, if so, whether it is approaching or receding.
The process involves multiple steps that all take place within seconds, according to Amini. ShadowCam takes in that information and classifies each image as showing a stationary or a dynamic object. If it spots a dynamic object, the system reacts accordingly, guiding the vehicle to stop, slow down, or proceed on course.
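A minimal sketch of that classification step, not the team’s actual pipeline: watch a fixed ground patch over time and flag it as dynamic when frame-to-frame intensity changes exceed a threshold. The patch size and threshold here are hypothetical tuning choices.

```python
import numpy as np

def classify_roi(frames, threshold=2.0):
    """Classify a fixed ground patch as 'static' or 'dynamic'.

    frames: sequence of grayscale images (H x W float arrays) of the
    same static location. A moving object around the corner casts a
    shadow whose motion shows up as frame-to-frame intensity change.
    The threshold is a hypothetical tuning parameter.
    """
    frames = np.asarray(frames, dtype=float)
    # Mean absolute intensity change between consecutive frames.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    return "dynamic" if diffs.mean() > threshold else "static"

# Synthetic demo: a static patch vs. one crossed by a moving dark band.
static = [np.full((32, 32), 100.0) for _ in range(10)]
moving = []
for t in range(10):
    f = np.full((32, 32), 100.0)
    f[:, t:t + 6] -= 50.0  # dark "shadow" band sweeping across the patch
    moving.append(f)

print(classify_roi(static))  # static
print(classify_roi(moving))  # dynamic
```

In practice the hard part is the threshold: as Amini notes below, the system must stay robust to minor lighting variations that are not shadows at all.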
Further Reading: Fully Autonomous Drone Helps Gold Miners
“One of the challenges of building a system like this is you have to be robust to a lot of these minor changes and very minor variations in changes,” Amini said. “Having the shadow oriented in different directions and/or coming in at different speeds is a huge challenge.”
“Because the car is dynamic—in other words, moving—the first thing we did was localize an area on the ground close to the shadow, so we can look specifically at a static location and project images on that location,” he said. “The second step is taking that spot on the ground and using computer vision to amplify small changes, very minor changes, in lighting, so that we can detect the shadows.
“For that we use a form of color amplification and motion amplification, so we can look at the shadow like it is an amplified signal,” he added.
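A crude stand-in for the amplification idea Amini describes can be sketched as follows: scale each frame’s deviation from the temporal mean by a gain factor, so a faint shadow edge becomes far easier to threshold. The gain value is a hypothetical choice, and real color/motion amplification operates per channel and per frequency band.

```python
import numpy as np

def amplify(frames, alpha=10.0):
    """Exaggerate small temporal intensity changes in a fixed patch.

    Deviations of each frame from the temporal mean are scaled by
    alpha, turning a barely visible darkening into an obvious one.
    """
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    return np.clip(mean + alpha * (frames - mean), 0.0, 255.0)

# A faint shadow (2 intensity levels out of 255) becomes a 10x larger dip.
base = np.full((8, 8), 128.0)
seq = np.stack([base, base - 2.0])  # second frame is slightly darker
out = amplify(seq, alpha=10.0)
print(out[1].min())  # 117.0 -- the 2-level dip amplified to a 10-level dip
```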
The project has its roots in work that initially involved detecting moving shadows from a static wheelchair. Later, MIT professors William Freeman and Antonio Torralba advanced the work to detect obstacles outside the direct line of sight, using color-amplified images that reveal even weakly visible shadows. They successfully tested and demonstrated the feasibility of an earlier version of ShadowCam as a safety mechanism on an autonomous wheelchair.
“The second stage was [detecting moving shadows] while we were moving ourselves,” said Igor Gilitschenski, senior postdoctoral associate on the project team.
The most recent work adapted and advanced ShadowCam once again by incorporating two additional techniques: image registration, which overlays two or more images of the same scene taken at different times or angles for analysis (a process often used in the medical field); and visual odometry, which determines the position and orientation of a robot by analyzing camera images (a technique used by Mars rovers).
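One standard way to do the registration step, though not necessarily the method ShadowCam itself uses, is phase correlation: the translation between two views of the same ground patch shows up as a peak in the inverse FFT of their normalized cross-power spectrum.

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer (dy, dx) translation from image b to image a
    via phase correlation, a common image-registration technique."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12       # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices around to negative shifts.
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(estimate_shift(shifted, img))  # (3, -5)
```

Aligning successive frames this way is what lets a moving camera keep "looking" at the same static spot on the ground.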
A computer translates the reflections into a 3D representation that provides commands to the vehicle about whether to proceed and how fast. This further-enhanced ShadowCam version was successfully tested on an autonomous car driving around a parking garage, as well as on an autonomous wheelchair navigating hallways.
Further Reading: Robotic Cane to Lean On
The team also demonstrated that the system works faster than another method of detecting objects, Lidar, which has been considered an essential component of a fully robotic car. An acronym for Light Detection and Ranging, Lidar judges distances by illuminating a target with many thousands of pulses per second of laser light invisible to the human eye, and measuring the reflected light with a sensor.
However, Lidar can detect only line-of-sight objects. ShadowCam was 0.72 seconds faster than Lidar in detecting cars turning around pillars in the parking garage during testing. This small time difference could make for a substantial safety improvement for self-driving cars.
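A back-of-envelope calculation shows what that 0.72-second head start buys. The article gives no speeds, so the 15 km/h parking-garage speed below is a hypothetical assumption purely for illustration.

```python
# Distance covered during ShadowCam's 0.72 s head start over Lidar,
# at an assumed (hypothetical) 15 km/h parking-garage speed.
speed_kmh = 15.0
head_start_s = 0.72
margin_m = speed_kmh / 3.6 * head_start_s  # km/h -> m/s, then * seconds
print(f"{margin_m:.1f} m")  # 3.0 m
```

Three extra meters of warning at garage speeds is roughly a car length, which is why even a sub-second advantage matters.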
The latest results were presented in a paper at the 2019 International Conference on Intelligent Robots and Systems (IROS) in Macau, China.
So far, the system has been tested only indoors where speeds are lower and lighting conditions are more consistent. In one test, the researchers conducted a demonstration in a parking garage, with the headlights turned off to simulate nighttime driving conditions.
Next, the team plans to conduct tests in outdoor settings and under other indoor lighting conditions. “That’s only several of the many potential avenues we’re interested in,” Amini said. “Another is having it robust to various different positions of the light source. That will be challenging in outdoor conditions.
“Our current work was dealing with the indoor where you usually have only a single light source, and it may be a little bit simpler to detect an oncoming car’s lights than in outside situations. In outdoor conditions it will be a lot more challenging.”
Weather is another variable that the team will begin to study. “Detecting shadows in various types of weather is an interesting challenge that comes with moving the algorithm we’ve created to outdoor environments. Indoors you don’t have to worry about things like that,” Amini said.
Gilitschenski added that the work takes inspiration from the way human beings actually drive. For example, humans change their behavior when driving at night, and the team wants ShadowCam to reflect these changes.
Nancy S. Giges is a technology writer.