Insider Brief
- NVIDIA has launched its Jetson AGX Thor module, delivering 7.5x more AI compute, 3x CPU performance, and 2x memory over Jetson Orin for real-time, on-device reasoning.
- Early adopters include Agility Robotics, Boston Dynamics, Galbot, and Advantech, applying Thor across humanoid robots, logistics, healthcare, and industrial automation.
- Research labs at Stanford, Carnegie Mellon, and the University of Zurich are using Thor to advance navigation, planning, and edge reasoning for autonomous systems.
Nvidia has released its Jetson AGX Thor module, promising physical AI developers the compute muscle to run large AI models and process massive streams of sensor data directly on-device.
Jetson Thor promises to bring about robots that can sense, plan, and act in real time across labs, factories, hospitals, and warehouses, with Nvidia emphasizing the new possibilities it opens for developers of humanoid robots.
The performance leap over the previous Jetson generation is stark, the company said in its announcement on Monday. Jetson Thor delivers 7.5 times more AI compute, more than triple the CPU performance, and double the memory of Jetson Orin.
That boost allows roboticists to fuse multimodal sensor inputs—from cameras and LiDAR to radar and ultrasound—and feed them into AI models without the latency of cloud dependence. The company said the payoff is real-time reasoning at the edge, a capability that opens new doors for humanoid platforms and industrial automation alike.
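To make that pipeline concrete, here is a minimal sketch of an on-device perception-to-action loop in PyTorch. The FusionPolicy module, its layer sizes, and the dummy camera and LiDAR tensors are illustrative assumptions, not Nvidia's actual stack; the point is that sensing, fusion, and inference all stay on local hardware with no cloud round-trip.

```python
# Illustrative sketch only: dummy encoders stand in for trained perception models.
import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    """Concatenates per-sensor embeddings and maps them to an action vector."""
    def __init__(self):
        super().__init__()
        self.cam_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Flatten(), nn.Linear(2048 * 4, 256), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512 + 256, 256), nn.ReLU(), nn.Linear(256, 12))

    def forward(self, camera, lidar):
        fused = torch.cat([self.cam_enc(camera), self.lidar_enc(lidar)], dim=-1)
        return self.head(fused)

device = "cuda" if torch.cuda.is_available() else "cpu"  # Jetson exposes its GPU via CUDA
policy = FusionPolicy().to(device).eval()

# Dummy inputs standing in for a camera frame and a LiDAR point cloud (x, y, z, intensity).
camera = torch.randn(1, 3, 64, 64, device=device)
lidar = torch.randn(1, 2048, 4, device=device)

with torch.inference_mode():
    action = policy(camera, lidar)  # sense -> fuse -> act, entirely on-device
print(action.shape)  # torch.Size([1, 12])
```

In a real deployment the dummy encoders would be replaced by trained perception backbones, but the control flow is the same: every tensor stays in local GPU memory, which is what eliminates the cloud latency the announcement highlights.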
From Warehouses to Humanoids
Agility Robotics is one of the early adopters. Its humanoid robot Digit, already performing logistics tasks in commercial settings, will migrate from Jetson Orin to Jetson Thor in its next generation. According to both companies, the added compute will give Digit sharper perception and faster decision-making, allowing it to take on more complex workflows such as stacking and palletizing. Nvidia said Boston Dynamics is also integrating Jetson Thor into Atlas, its humanoid platform, tapping server-class AI performance to support high-bandwidth sensing and advanced motion control.
China-based Galbot also announced Jetson Thor integration on Monday, while Taiwan’s Advantech launched the MIC-742-AT Robotics Development Kit powered by Jetson Thor and Holoscan, enabling next-generation robots to perform real-time, low-latency sensing, reasoning, and action at the edge.
Beyond humanoids, Nvidia said Jetson Thor will accelerate development of surgical assistants, delivery robots, smart tractors, and industrial manipulators—any application where rich sensor data and split-second inference can make the difference between smooth performance and system failure.
Built for Generative Reasoning Models
Jetson Thor is engineered for the newest class of AI models: large transformers, vision-language models, and vision-language-action models that combine reasoning with multimodal perception, the company noted. By supporting frameworks such as Cosmos Reason, DeepSeek, Llama, Gemini, Qwen, and robotics-specific systems like Nvidia’s Isaac GR00T N1.5, Jetson Thor provides a foundation for developers to build sophisticated physical AI agents that act and adapt in real-world environments.
The module is also optimized for speculative decoding and FP4 precision, enabling higher throughput with less energy use. Future software updates are expected to squeeze out even greater efficiency, the company noted.
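For readers curious what those two optimizations look like in practice, here is a hedged sketch using Hugging Face transformers' assisted-generation API and bitsandbytes 4-bit loading. The Qwen model names are illustrative stand-ins, this is not Nvidia's Jetson software stack, and bitsandbytes availability on ARM devices may vary.

```python
# Sketch of FP4 weight loading plus speculative decoding; illustrative, not Nvidia's stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# FP4 precision: store the large model's weights in 4-bit floating point.
fp4 = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="fp4")

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
target = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", quantization_config=fp4, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.float16, device_map="auto")

inputs = tok("Plan a pick-and-place sequence:", return_tensors="pt").to(target.device)

# Speculative decoding: the small draft model proposes several tokens per step,
# and the large target model verifies them in a single pass, raising throughput.
out = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```

The design trade-off is the one the article implies: a small draft model spends cheap compute guessing ahead, the big model only verifies, and 4-bit weights cut memory traffic, which together stretch throughput per watt on an embedded module.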
Research at the Edge
Universities including Stanford University, the University of Zurich, and Carnegie Mellon are adopting Jetson Thor for advanced robotics projects, Nvidia pointed out. At Carnegie Mellon’s Robotics Institute, researchers plan to use the new module to power fleets of autonomous robots for triage and search-and-rescue in unstructured environments. By upgrading from Jetson Orin, the team expects to strengthen perception models, expand sensor fusion, and improve edge reasoning. Similar efforts are underway at Stanford and the University of Zurich, where labs are pushing navigation and planning research with Thor’s real-time compute.
“We can only do as much as the compute available allows,” Sebastian Scherer, an associate research professor at Carnegie Mellon and head of the AirLab, said in a statement. “Years ago, there was a big disconnect between computer vision and robotics because computer vision workloads were too slow for real-time decision-making — but now, models and computing have gotten fast enough so robots can handle much more nuanced tasks.”