
Researchers Create Virtual Rodent to Unlock Mysteries of Animal Motor Control

Insider Brief
  • Researchers have developed a “virtual” rodent to better understand how real animals control their bodies with such precision and agility.
  • Researchers say they relied on sophisticated artificial intelligence techniques to create an artificial brain for the virtual rat.
  • The work could have practical implications for humans, particularly by providing a path to advance neuroscience and build better robots.

In a recent study, researchers have developed a “virtual” rodent to better understand how real animals control their bodies with precision and agility. The team, led by scientists at a prominent research institute, leveraged deep reinforcement learning to train an artificial neural network (ANN) to mimic the behaviors of real rats, offering new insights into the neural mechanisms behind animal movement.

Although this may give off low-key Frankenstein vibes, the scientists tell us that the work could spark a myriad of future research ideas and, eventually, yield practical applications for humans, for example, advances in neuroscience.

The study, detailed in Nature, outlines the creation of a biomechanically realistic rat model controlled by an ANN. This virtual rodent, developed using the MuJoCo physics engine, faithfully replicates the diverse range of movements seen in real rats. By comparing the neural activity recorded from live rats with the network activity of the virtual counterpart, the researchers were able to uncover significant parallels between artificial and biological control systems.
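For readers curious what the simulation side of this looks like, here is a minimal, illustrative MuJoCo snippet. It loads a trivial single-body model, not the published rat model, just to show how a simulated body is stepped forward in time; the XML and numbers are invented for illustration.

```python
# Minimal example of the MuJoCo physics engine the virtual rodent is built on.
# This is NOT the study's rat model: it is a single capsule body under gravity.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body name="torso" pos="0 0 1">
      <joint type="free"/>
      <geom type="capsule" size="0.05 0.2"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

for _ in range(500):          # simulate 1 second at the default 2 ms timestep
    mujoco.mj_step(model, data)

print("torso height after 1 s of free fall:", data.qpos[2])
```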

Deep Reinforcement Learning at the Core

Central to the success of the virtual rodent is the use of deep reinforcement learning, according to the researchers, who came from a variety of institutions, including Fauna Robotics; Harvard University; DeepMind, Google; Reality Labs at Meta; and University College London.

This advanced machine learning technique was employed to train the ANN to perform various natural behaviors by mimicking real rat movements. The researchers collected extensive data, including 3D kinematic measurements from real rats, and used this data to fine-tune the virtual model. The ANN learned to generate actions, such as joint torques, required to achieve desired body configurations, effectively implementing an inverse dynamics model.
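The paper's controller is a recurrent network trained with deep reinforcement learning inside MuJoCo; the toy sketch below conveys only the core idea of an inverse dynamics model learned by imitation. It uses a 2D point mass and plain gradient descent instead of RL, and every name, gain, and time step is invented for illustration.

```python
# Toy sketch (not the paper's code): a small network maps the current state
# plus a desired next pose to an action (a force), i.e. an inverse dynamics
# model, and is trained to imitate a reference trajectory.
import torch
import torch.nn as nn

torch.manual_seed(0)
DT = 0.02          # assumed simulation time step
MASS = 1.0         # toy point-mass "body"

def step(pos, vel, force):
    """One step of toy point-mass dynamics (stand-in for MuJoCo)."""
    acc = force / MASS
    vel = vel + DT * acc
    pos = pos + DT * vel
    return pos, vel

# Reference kinematics: a circular "behavior" the controller should imitate.
t = torch.linspace(0, 2 * torch.pi, 200)
reference = torch.stack([torch.cos(t), torch.sin(t)], dim=-1)  # (T, 2)

# Inverse-dynamics policy: (current pos, current vel, desired next pos) -> force
policy = nn.Sequential(nn.Linear(6, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for epoch in range(300):
    pos, vel = reference[0].clone(), torch.zeros(2)
    loss = 0.0
    for k in range(1, len(reference)):
        target = reference[k]
        force = policy(torch.cat([pos, vel, target]))
        pos, vel = step(pos, vel, force)
        loss = loss + ((pos - target) ** 2).sum()   # imitation error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final tracking loss: {loss.item():.4f}")
```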

The study’s findings reveal that the neural activity in the sensorimotor striatum and motor cortex of rats is better predicted by the virtual rodent’s network activity than by any observable features of the real rat’s movements. This suggests that these brain regions are involved in implementing inverse dynamics, a critical aspect of motor control that specifies the actions needed to achieve a desired state based on the current state.
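As a rough illustration of that kind of comparison, and not the study's actual analysis, one can ask how well each feature set predicts a neuron's firing rate with cross-validated regression. All data below are synthetic placeholders; the study used real striatal and motor cortical recordings and the virtual rodent's network activations.

```python
# Illustrative sketch: compare how well two feature sets predict a firing rate.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
T = 2000                                     # time bins
latents = rng.normal(size=(T, 64))           # stand-in for ANN activations
kinematics = rng.normal(size=(T, 20))        # stand-in for observed movement features
# Synthetic "neuron" partly driven by the latent features.
rates = latents @ rng.normal(size=64) + 0.5 * rng.normal(size=T)

for name, X in [("network activity", latents), ("kinematics", kinematics)]:
    r2 = cross_val_score(Ridge(alpha=1.0), X, rates, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```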

Implications for Neuroscience and Beyond

The implications of this research extend far beyond the confines of motor neuroscience. By demonstrating how physical simulation of biomechanically realistic virtual animals can help interpret neural activity and relate it to theoretical principles of motor control, the study opens new avenues for exploring how animals and humans control their movements. The approach could also inform the development of more sophisticated brain-machine interfaces and improve our understanding of motor disorders.

Challenges and Future Directions

Despite its success, the study acknowledges several challenges in modeling the neural control of movement with such richness and realism. One significant hurdle is the scarcity of high-fidelity 3D kinematic measurements and tools to simulate animal bodies accurately. To overcome these challenges, the researchers developed a comprehensive processing pipeline called MIMIC (Motor IMItation and Control), which integrates 3D pose estimation and a skeletal model compatible with physical simulation.
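The sketch below only gestures at the registration step such a pipeline requires: fitting the joint angles of a drastically simplified skeleton so that its landmarks match tracked keypoints. The two-link planar "limb", its segment lengths, and the keypoints are all made up for illustration; the actual MIMIC pipeline works with full 3D pose estimates and a MuJoCo-compatible rat skeleton.

```python
# Hypothetical registration sketch: fit joint angles of a 2-link planar chain
# so that its elbow and wrist land on tracked keypoints.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 1.0, 0.8                      # assumed segment lengths

def forward_kinematics(angles):
    """Elbow and wrist positions of a planar 2-link chain rooted at the origin."""
    a1, a2 = angles
    elbow = np.array([L1 * np.cos(a1), L1 * np.sin(a1)])
    wrist = elbow + np.array([L2 * np.cos(a1 + a2), L2 * np.sin(a1 + a2)])
    return np.concatenate([elbow, wrist])

# Tracked keypoints (stand-ins for pose-estimation output).
tracked = np.array([0.7, 0.7, 1.4, 1.1])

def residuals(angles):
    return forward_kinematics(angles) - tracked

fit = least_squares(residuals, x0=np.array([0.5, 0.5]))
print("fitted joint angles (rad):", fit.x)
```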

The researchers also highlighted the importance of robustness in their models, noting that biological control systems must handle neural noise and other sources of variability. By varying the network's intrinsic variability, the team found that the virtual rodent's control system maintained robust performance across a range of behaviors, consistent with the principles of optimal feedback control.
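A toy way to pose that robustness question, far simpler than the study's analysis, is to inject noise into a simple feedback controller's output and watch how tracking degrades. The controller below is hand-tuned PD feedback on a point mass, not the learned policy, and all gains and noise levels are invented.

```python
# Rough sketch: how does tracking performance of a simple feedback controller
# degrade as noise is injected into its output?
import numpy as np

DT, MASS = 0.02, 1.0
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)
reference = np.stack([np.cos(t), np.sin(t)], axis=-1)

def rollout(noise_std):
    pos, vel = reference[0].copy(), np.zeros(2)
    errors = []
    for target in reference[1:]:
        force = 200.0 * (target - pos) - 20.0 * vel      # PD feedback
        force += rng.normal(scale=noise_std, size=2)     # injected "neural" noise
        vel += DT * force / MASS
        pos += DT * vel
        errors.append(np.linalg.norm(pos - target))
    return np.mean(errors)

for noise_std in [0.0, 1.0, 5.0, 20.0]:
    print(f"noise std {noise_std:5.1f} -> mean tracking error {rollout(noise_std):.4f}")
```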

This study represents a step forward in the field of virtual neuroscience, demonstrating the potential of artificial controllers in understanding complex biological systems. The researchers believe that their approach could be further refined and expanded to model other aspects of animal and human behavior. Future iterations of the virtual rodent could incorporate more brain-inspired network architectures, providing even deeper insights into the neural mechanisms underlying movement.

As the researchers continue to refine their models, they hope to explore how different variables, such as feedback delays and body morphology, influence neural activity and behavior. This could lead to the development of more accurate and efficient neural networks for a variety of applications, from neuroscience research to the creation of advanced robotic systems.

Researchers include Diego Aldarondo from Fauna Robotics, the Department of Organismic and Evolutionary Biology and Center for Brain Science at Harvard University, and DeepMind, Google; Jesse D. Marshall from Reality Labs at Meta and the Department of Organismic and Evolutionary Biology and Center for Brain Science at Harvard University; Leonard Hasenclever, Yuval Tassa, Greg Wayne, and Matthew Botvinick from DeepMind, Google, with Botvinick also affiliated with the Gatsby Computational Neuroscience Unit at University College London; and Ugne Klibaite, Amanda Gellis, and Bence P. Ölveczky from the Department of Organismic and Evolutionary Biology and Center for Brain Science at Harvard University.

Acknowledgements

The study was supported by various NIH grants, with technical assistance from the Harvard Research Computing team. The researchers also expressed gratitude to their colleagues for their support and feedback throughout the project.