Insider Brief
- A new sensor fusion system developed at Duke University enables quadruped robots to navigate complex terrain using a combination of vision, vibration, and touch.
- WildFusion integrates data from cameras, LiDAR, microphones, tactile sensors, and inertial measurement units to create a continuous 3D reconstruction of surroundings, even in visually obstructed environments.
- Backed by DARPA and the U.S. Army Research Laboratory, the system is designed for applications in forests, disaster zones, and remote terrain, with future enhancements to include thermal and humidity sensors.
A new navigation system developed at Duke University could help robots traverse forests and other rough terrain with the sensory awareness of a human hiker.
Researchers have created WildFusion, a multimodal sensing framework that allows quadruped robots to fuse vision, touch, and vibration data into a unified model of their surroundings, according to Duke University. The system, which combines data from cameras, LiDAR, contact microphones, tactile sensors, and inertial measurement units, was tested in real-world environments including North Carolina’s Eno River State Park. The results will be presented at the 2025 IEEE International Conference on Robotics and Automation.
“Think of it like solving a puzzle where some pieces are missing, yet you’re able to intuitively imagine the complete picture,” said Boyuan Chen, the Dickinson Family Assistant Professor of Mechanical Engineering and Materials Science, Electrical and Computer Engineering, and Computer Science at Duke University, in a statement. “WildFusion’s multimodal approach lets the robot ‘fill in the blanks’ when sensor data is sparse or noisy, much like what humans do.”
WildFusion builds on the principle that robots, like people, benefit from combining multiple sensory inputs to assess complex environments, researchers noted. As the robot walks, microphones detect the subtle acoustic differences between surfaces, while tactile sensors capture foot pressure and inertial units track stability. A deep learning model processes and blends these inputs to generate a continuous 3D reconstruction of the surrounding terrain, even in areas where vision is obstructed.
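To make that idea concrete, the sketch below shows one way a multimodal fusion model of this kind could be structured in PyTorch: each sensor stream gets its own encoder, the features are merged, and a decoder predicts occupancy and traversability at arbitrary 3D query points so the map stays continuous even where a sensor is blind. The module names, feature sizes, averaging step, and occupancy-style decoder are illustrative assumptions, not the Duke team's published WildFusion architecture.

```python
# Illustrative sketch only: a simple multimodal-fusion model that predicts
# occupancy/traversability at continuous 3D query points. Not the actual
# WildFusion architecture; all sizes and module choices are assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Maps one raw sensor stream (e.g. LiDAR, audio, tactile, IMU) to a shared feature space."""
    def __init__(self, in_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class ImplicitTerrainModel(nn.Module):
    """Fuses per-modality features and decodes occupancy and traversability
    for any (x, y, z) query, keeping the reconstruction continuous even
    where individual sensors are sparse or occluded."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "lidar":   ModalityEncoder(in_dim=256, feat_dim=feat_dim),
            "audio":   ModalityEncoder(in_dim=64,  feat_dim=feat_dim),
            "tactile": ModalityEncoder(in_dim=16,  feat_dim=feat_dim),
            "imu":     ModalityEncoder(in_dim=6,   feat_dim=feat_dim),
        })
        # Decoder takes the fused feature plus a 3D query point and predicts
        # two values: occupancy and a traversability score.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, inputs, query_xyz):
        # Encode whatever modalities are available and average them; missing
        # sensors simply drop out, one simple way to "fill in the blanks".
        feats = [self.encoders[name](x) for name, x in inputs.items()]
        fused = torch.stack(feats, dim=0).mean(dim=0)
        fused = fused.expand(query_xyz.shape[0], -1)
        out = self.decoder(torch.cat([fused, query_xyz], dim=-1))
        occupancy, traversability = out[:, 0], out[:, 1]
        return torch.sigmoid(occupancy), torch.sigmoid(traversability)

# Example query: fake per-footstep sensor features plus five 3D points to evaluate.
model = ImplicitTerrainModel()
sensors = {
    "lidar":   torch.randn(1, 256),
    "audio":   torch.randn(1, 64),
    "tactile": torch.randn(1, 16),
    "imu":     torch.randn(1, 6),
}
points = torch.randn(5, 3)  # five (x, y, z) locations around the robot
occ, trav = model(sensors, points)
print(occ.shape, trav.shape)  # torch.Size([5]) torch.Size([5])
```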
“WildFusion opens a new chapter in robotic navigation and 3D mapping,” said Chen. “It helps robots to operate more confidently in unstructured, unpredictable environments like forests, disaster zones and off-road terrain.”
“Typical robots rely heavily on vision or LiDAR alone, which often falter without clear paths or predictable landmarks,” said Yanbaihui Liu, the lead student author and a second-year Ph.D. student in Chen’s lab. “Even advanced 3D mapping methods struggle to reconstruct a continuous map when sensor data is sparse, noisy or incomplete, which is a frequent problem in unstructured outdoor environments. That’s exactly the challenge WildFusion was designed to solve.”
While traditional robotic systems depend heavily on visual inputs, which can fail in low light or cluttered settings, WildFusion’s integrated sensing approach allows robots to predict traversability and adapt their movements based on incomplete or noisy data. This is particularly valuable in environments where visual information is limited or misleading, such as thick forests or disaster zones.
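As a rough illustration of how such a traversability estimate might be consumed, the plain-Python sketch below scores candidate footholds and refuses to step when nothing looks safe enough, even with some sensor streams missing. The scoring heuristic, threshold, and function names are hypothetical placeholders, not part of the published system.

```python
# Illustrative sketch only: using a fused traversability score to choose a
# foothold. The scorer below is a stand-in heuristic, not a learned model.
import random

def predict_traversability(point, sensor_data):
    """Returns a score in [0, 1] for a 3D point, degrading gracefully
    when some sensor streams are unavailable."""
    available = [m for m in ("lidar", "audio", "tactile", "imu") if m in sensor_data]
    confidence = len(available) / 4          # fewer sensors -> lower confidence
    flatness = 1.0 / (1.0 + abs(point[2]))   # toy heuristic: prefer low, flat ground
    return confidence * flatness

def choose_next_foothold(candidates, sensor_data, threshold=0.3):
    """Pick the highest-scoring candidate; return None if nothing is safe enough."""
    scored = [(predict_traversability(p, sensor_data), p) for p in candidates]
    best_score, best_point = max(scored, key=lambda s: s[0])
    return best_point if best_score >= threshold else None

# Example: a vision-limited scene where only tactile and inertial data arrive.
sensors = {"tactile": [0.8, 0.1], "imu": [0.0, 0.0, 9.8]}
candidates = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0, 0.5))
              for _ in range(10)]
print(choose_next_foothold(candidates, sensors))
```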
Future work will explore the addition of new sensor types such as thermal or humidity detectors and extend the framework to new applications, including infrastructure inspection and off-road exploration. The researchers say the modular design of WildFusion could support rapid integration into other robotic platforms.
The project received support from the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Laboratory. With WildFusion, the Duke team aims to push robotic mobility closer to human-like adaptability in the face of unpredictable terrain.