
NVIDIA Jetson AGX Thor Developer Kit Review: Blackwell-Powered Physical AI for the Next Generation of Robots


The NVIDIA Jetson AGX Thor Developer Kit became generally available in August 2025. It is the most powerful Jetson ever built, and it marks a clear architectural turning point for the platform. Where previous Jetson kits were general-purpose edge AI computers, the AGX Thor was designed from the ground up for a specific and demanding purpose: running generative AI models on physical robots in real time.

In ThinkRobotics' sales data from the last 90 days, the AGX Thor ranks fourth by revenue. For a product that only became available in the second half of 2025, that volume of orders reflects serious institutional interest. This is not a hobbyist purchase. The teams buying it are building advanced robotic systems for the years ahead.

At a glance: Blackwell GPU, 2070 FP4 TFLOPS, 128 GB LPDDR5X, MIG support, native Isaac GR00T support, $3,499 USD.

What Is in the Box

The Jetson AGX Thor Developer Kit ships with the Jetson T5000 module mounted on a reference carrier board, a 140W power supply, a Wi-Fi 6E module, a 1TB NVMe SSD preloaded with Ubuntu 24.04 LTS via JetPack 7, and a quick start guide. The included SSD is a WD/SanDisk SN5000S, as confirmed by ServeTheHome's teardown review.

Unlike the AGX Orin Developer Kit, storage is included from the start, and the 1TB drive provides enough headroom for large model weights, training data, and containers without an immediate upgrade purchase.

Key Specifications

NVIDIA Jetson AGX Thor Developer Kit (Jetson T5000) — Full Spec Sheet

  • AI Performance (FP4): 2070 TFLOPS (7.5x the AGX Orin)
  • AI Performance (FP8): 1035 TFLOPS (full Transformer Engine support)
  • GPU: Blackwell, 2560 CUDA cores, 96 Tensor Cores, up to 7 MIG partitions
  • CPU: 14-core Arm Neoverse V3AE (comparable to a Ryzen AI 7 or Mac Mini M4)
  • RAM: 128 GB LPDDR5X, unified pool with no static CPU/GPU partition
  • Memory Bandwidth: 276 GB/s measured (273 GB/s rated, 256-bit bus)
  • Storage: 1 TB NVMe SSD included (WD/SanDisk SN5000S)
  • Networking: 4x 25 GbE via QSFP, multi-gigabit RJ45, Wi-Fi 6E, Bluetooth
  • Power: 40W to 130W configurable
  • Price: $3,499 USD / Rs. 3,11,410 excl. GST

What Makes the Blackwell GPU Different

The AGX Orin used an Ampere GPU and achieved 275 TOPS of INT8 performance. The AGX Thor moves to NVIDIA's Blackwell architecture and introduces FP4 precision, a lower-precision datatype that roughly doubles throughput over FP8 for models that support it. At FP4, the Thor delivers 2070 TFLOPS, which is where NVIDIA's 7.5x performance claim over the Orin originates.
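
The arithmetic behind that headline claim is easy to reproduce from the spec-sheet figures. Note that TOPS at INT8 and TFLOPS at FP4 are not like-for-like units per operation; this sketch only shows how the marketing ratio is derived:

```python
# Peak AI throughput figures from the published spec sheets.
ORIN_INT8_TOPS = 275      # Jetson AGX Orin, INT8
THOR_FP8_TFLOPS = 1035    # Jetson AGX Thor, FP8
THOR_FP4_TFLOPS = 2070    # Jetson AGX Thor, FP4

# FP4 doubles FP8 throughput on Blackwell's Transformer Engine.
fp4_over_fp8 = THOR_FP4_TFLOPS / THOR_FP8_TFLOPS   # 2.0

# NVIDIA's 7.5x claim compares Thor FP4 against Orin INT8, so
# roughly half of the gain comes from the precision change alone.
thor_over_orin = THOR_FP4_TFLOPS / ORIN_INT8_TOPS  # ~7.53

print(f"FP4 vs FP8 on Thor: {fp4_over_fp8:.1f}x")
print(f"Thor FP4 vs Orin INT8: {thor_over_orin:.2f}x")
```

In other words, a model that cannot be quantized to FP4 should expect roughly the FP8 figure, still well ahead of the Orin but short of the 7.5x headline.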

Multi-Instance GPU (MIG): Why It Matters for Robotics

MIG allows the GPU to be partitioned into up to seven isolated slices, each running a separate model independently. This is a genuinely significant change for robotics system design. A robot can now run several distinct AI models simultaneously on one chip, one per slice, with no context-switching overhead between them.

  • Locomotion Control: dedicated GPU partition for movement, balance, and gait control
  • Grasp Planning: independent slice for dexterous manipulation and force estimation
  • Perception: vision-language model for scene understanding and object recognition
  • VLA Policy: vision-language-action model for task reasoning and instruction following
  • Speech / NLU: conversational AI and natural language understanding for operator commands
  • Navigation: SLAM, path planning, and obstacle avoidance running in parallel

The 128GB of unified LPDDR5X memory is a shared pool across CPU and GPU with no static partitioning. This matches the memory architecture of Apple Silicon: the full 128GB is available to any workload regardless of whether it runs on the CPU or GPU.
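
The partitioning idea can be made concrete with a memory budget. The per-pipeline figures below are entirely hypothetical placeholders for a six-pipeline robot, not measured footprints; the point is that the 128GB pool is shared, so the slices only need to fit collectively:

```python
# Hypothetical memory budget for concurrent MIG pipelines sharing
# a 128 GB unified pool. Footprints are illustrative guesses
# (weights + activations + KV cache), not measured numbers.
UNIFIED_POOL_GB = 128

pipelines_gb = {
    "locomotion_control": 4,
    "grasp_planning": 6,
    "perception_vlm": 18,
    "vla_policy": 45,      # e.g. a large VLA checkpoint at FP4
    "speech_nlu": 8,
    "navigation_slam": 10,
}

used = sum(pipelines_gb.values())
headroom = UNIFIED_POOL_GB - used
assert headroom >= 0, "budget exceeds the unified pool"
print(f"used {used} GB, headroom {headroom} GB for OS, ROS 2, and buffers")
```

Because the pool has no static CPU/GPU split, headroom left by the GPU pipelines is available to CPU-side work such as sensor preprocessing, and vice versa.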

Real-World Performance from Hands-On Reviews

LLM Inference: Llama 3.1 8B and Llama 3.3 70B (~10-15 tok/s at 70B)
ServeTheHome measured 149.1 tokens per second on Llama 3.1 8B against NVIDIA's expected 150.8, closely on target. For Llama 3.3 70B, HotHardware reports throughput in the low double digits of tokens per second, and Hackster.io independently measured approximately 10.56 tokens per second via SGLang. Reviewers consistently note that LLM serving is not the primary design use case: the platform is a robotics supercomputer, not an Ollama server.
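
Those 70B numbers line up with a back-of-envelope memory-bandwidth bound: single-stream decoding reads every weight once per token, so bandwidth divided by weight size caps tokens per second. A rough sketch using the rated 273 GB/s figure:

```python
# Rough bandwidth ceiling on single-stream autoregressive decode:
# each generated token streams every weight through the GPU once.
BANDWIDTH_GBPS = 273   # rated memory bandwidth, GB/s
PARAMS_B = 70          # Llama 3.3 70B parameters, in billions

def decode_ceiling(bytes_per_param: float) -> float:
    """Upper bound on tokens/s, ignoring KV cache and activations."""
    weight_gb = PARAMS_B * bytes_per_param
    return BANDWIDTH_GBPS / weight_gb

print(f"FP8 weights (1.0 B/param): {decode_ceiling(1.0):.1f} tok/s max")
print(f"FP4 weights (0.5 B/param): {decode_ceiling(0.5):.1f} tok/s max")
```

A measured ~10.56 tok/s exceeding the single-stream FP4 ceiling typically indicates batched serving or weights quantized below 4 bits per parameter; either way, 70B decode on this class of hardware is bandwidth-bound, not compute-bound.
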
Robotics Pipelines: MIG Multi-Model Concurrent Workloads (full concurrency)
Hardware engineers at Analog Devices have described using isolated GPU slices for locomotion, grasp planning, perception, and VLA policies simultaneously on Jetson Thor, noting that this simplifies the functional decomposition of a complete robotic system across independent model pipelines. This is the workload Thor was specifically designed for, and it consistently outpaces the AGX Orin across every robotics model tested.
CPU Performance: Near AMD Ryzen AI 7 / Mac Mini M4
ServeTheHome's multi-threaded CPU benchmarks place the Thor near an AMD Ryzen AI 7 350 or a Mac Mini M4, appropriate given the platform's GPU-first design. The 14-core Neoverse V3AE is a capable processor for control systems, ROS 2 node execution, and sensor preprocessing without competing for GPU resources.
Cooling and Fan Noise
The large cooling solution produces what reviewers describe as a moderate swishing sound rather than aggressive fan noise, which matters for lab, office, and clinical deployments. The heatsink design is notably unconventional: the cooling system occupies its own side of the chassis rather than covering the top, giving the unit a distinctive form factor compared to previous Jetson kits.

Who Is Already Using the Jetson AGX Thor

The list of early adopters confirmed at the AGX Thor general availability announcement in August 2025 reflects the scale of institutional adoption across robotics, healthcare, and industrial automation.

  • Robotics: Boston Dynamics, Figure, Agility Robotics, Franka Robotics, NEURA Robotics, Amazon Robotics, Richtech Robotics
  • Healthcare: LEM Surgical, XRlabs, Medtronic
  • Industrial: Caterpillar, Hexagon, Meta

LEM Surgical uses NVIDIA Isaac for Healthcare and Cosmos Transfer to train the autonomous arms of its Dynamis surgical robot. XRlabs uses Thor and Isaac for Healthcare to guide surgeons with real-time AI analysis through surgical scopes. Franka Robotics uses the GR00T N model to power its dual-arm manipulator. NEURA Robotics launched a Gen 3 humanoid at CES 2026 powered by Jetson Thor.

Software and Ecosystem

The AGX Thor runs JetPack 7, based on Ubuntu 24.04 LTS. This is the first Jetson platform to ship on Ubuntu 24.04, bringing updated package versions that reduce the need to compile dependencies from source for most standard AI frameworks.

  • Isaac GR00T: The primary reason most buyers at this price point are choosing the Thor over the AGX Orin. GR00T N1.5 and N1.6 are vision-language-action models that allow robots to learn from human demonstrations, generalize tasks across environments, and reason about language instructions during operation. The AGX Thor is the reference compute platform for GR00T.
  • NVIDIA Isaac: Complete robotics platform covering perception, manipulation, navigation, and Omniverse-based simulation.
  • NVIDIA Holoscan: Real-time sensor processing for surgical systems, industrial cameras, and high-frequency sensor data ingestion via the new Holoscan Sensor Bridge.
  • NVIDIA Metropolis: Visual AI agents for smart-city and industrial-monitoring applications.
  • NVIDIA Cosmos Reason: Vision-language model for building video analytics AI agents at the edge. The Video Search and Summarization blueprint runs on Thor for edge video intelligence applications.
  • ROS 2: JetPack 7's Ubuntu 24.04 base pairs with ROS 2 Jazzy, the distribution that targets Ubuntu 24.04 (Humble targets Ubuntu 22.04), integrating cleanly into the standard robotics software stack.

Container workflow note: Container support is functional but relies primarily on Docker and NVIDIA's Jetson-containers project. Teams with Kubernetes, Podman, or systemd-based container workflows will encounter friction and may need to adapt configurations or build dependencies from source. Reviewers noted that documentation for non-Docker runtimes is fragmented as of initial release, with some important repositories having moved locations.
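
For teams scripting deployments, the Docker path usually amounts to invoking `docker run` with NVIDIA's container runtime. A minimal sketch of driving that from Python; the image tag is a placeholder, and the exact flags your JetPack 7 image needs may differ:

```python
import shlex

# Placeholder image tag: substitute a real JetPack 7 container,
# e.g. one built with NVIDIA's Jetson-containers project.
IMAGE = "example/jetpack7-ros:latest"

def container_cmd(image: str, workload: str) -> list[str]:
    """Build a docker run invocation that uses the NVIDIA runtime."""
    return [
        "docker", "run", "--rm",
        "--runtime", "nvidia",   # route GPU access through the NVIDIA runtime
        "--network", "host",     # common choice for ROS 2 DDS discovery
        image, "bash", "-c", workload,
    ]

cmd = container_cmd(IMAGE, "nvidia-smi")
print(shlex.join(cmd))
# Pass cmd to subprocess.run(cmd, check=True) on the device itself.
```

Teams on Podman or Kubernetes will need to translate the `--runtime nvidia` piece into their runtime's equivalent device/runtime configuration, which is exactly the friction point reviewers describe.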

Primary Use Cases

The Jetson AGX Thor is purpose-built for a narrower but more demanding set of applications than the AGX Orin.

  • Humanoid robotics: The combination of 2070 FP4 TFLOPS, MIG support, 128GB unified memory, and Isaac GR00T integration makes Thor the only Jetson capable of running full humanoid robot stacks with simultaneous locomotion control, dexterous manipulation, multimodal perception, and language reasoning.
  • Surgical robotics and medical devices: LEM Surgical and XRlabs are both in production or development with Thor-powered systems. The 100 Gb/s of aggregate Ethernet bandwidth from the 4x 25 GbE ports supports the high-speed sensor feeds required by surgical systems.
  • Industrial automation and inspection: Thor with Holoscan Sensor Bridge is designed to ingest high-speed data from cameras, LiDARs, IMUs, and encoders for real-time processing pipelines in demanding environments.
  • Agricultural robotics: Noted explicitly by NVIDIA, particularly for systems that need to identify, navigate to, and manipulate individual crops using visual and spatial reasoning.
  • Agentic AI at the edge: The Video Search and Summarization blueprint enables teams to build video analytics agents that reason over camera feeds without sending data to the cloud.

What Reviewers and Developers Are Saying

The AGX Thor received substantial hands-on coverage from HotHardware, ServeTheHome, Hackster.io, and TechRadar at its August 2025 launch. The feedback reflects a platform with strong robotics performance and a few practical rough edges.

If you want to run very large AI models in a friendly multi-tasking environment using NVIDIA's software stack, the Jetson AGX Thor Developer Kit is a great new tool for your toolchest. The good news is that it handles all of those tasks with style and aplomb. And the device will likely get even better over time as NVIDIA continues to refine and update its software stack with additional edge AI capabilities.
HotHardware, hands-on review, Aug 2025 (positive)
Going to sell like hotcakes. If you are building high-end next-generation robotics, this is the platform you want to do it on. We found performance came close to matching NVIDIA's claims, including 149.1 tokens per second on Llama 3.1 8B versus the expected 150.8.
ServeTheHome, hands-on review, Aug 2025 (positive)
This is a robotics supercomputer, not an Ollama computer. This is meant for robotics. If you are interested in LLM inference instead of VLAs or robotics, check out the DGX Spark. The AGX Thor is a bit slower at running LLMs since it is not designed for it. On the other hand it has 2x the FP4 AI performance of the DGX Spark due to its integrated NVIDIA Deep Learning Accelerators, so AI models for robots will be way faster on this.
ServeTheHome community comments, Aug 2025 (important context)
The fan noise is more of a pleasant swishing sound rather than the roar of my Dyson. Isaac GR00T workflows work correctly out of the box. The platform performs well for LLM inference when run from source.
Amazon verified reviewer, late 2025 (positive)
Documentation for container support is fragmented, with important repositories having moved locations and dependencies that users of container runtimes other than Docker need to build from source. That is a real friction point for teams with existing Kubernetes or systemd container workflows.
Amazon verified reviewer, late 2025 (known rough edge)
While running FP4 models on Jetson Thor may not yet be supported in SGLang and vLLM, it is a feature that may be added soon. With the NVIDIA Jetson AGX Thor Developer Kit, you have an edge AI powerhouse for robotics, multimodal AI, and generative applications.
Hackster.io getting-started guide, Aug 2025 (software note)

Pricing: India and Global

  • USA (NVIDIA MSRP): $3,499 USD
  • India (reference market pricing): Rs. 3,18,990 incl. GST

Why buy from ThinkRobotics? ThinkRobotics is an authorized NVIDIA distributor in India, which means manufacturer warranty, authentic hardware, and local technical support. For institutional buyers, enterprise customers, and research institutions, ThinkRobotics can provide custom volume pricing and advice on deployment configurations.

How It Compares

AGX Thor vs Jetson AGX Orin 64GB ($1,999): Thor for VLA and humanoids
The AGX Orin 64GB delivers 275 TOPS with 64GB unified memory at up to 60W. The AGX Thor delivers 2070 FP4 TFLOPS with 128GB unified memory at up to 130W, adding MIG support, FP4 precision, and Isaac GR00T compatibility. For teams building generative AI robotic systems or humanoid robots, the Thor is the required platform. For teams running computer vision pipelines, multi-sensor fusion, or standard LLM inference without VLA requirements, the AGX Orin 64GB remains a cost-effective and capable choice at $1,500 less.
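
The Thor-versus-Orin gap is easier to judge at matching bit-widths. As a rough sizing exercise (INT8 and FP8 remain different datatypes, so this is not a benchmark), comparing Thor's FP8 figure against Orin's INT8 figure:

```python
# Rough 8-bit comparison of throughput, peak power, and price.
# INT8 (Orin) vs FP8 (Thor) is approximate, not like-for-like.
orin = {"tops8": 275,  "watts": 60,  "usd": 1999}
thor = {"tops8": 1035, "watts": 130, "usd": 3499}

speedup = thor["tops8"] / orin["tops8"]      # ~3.8x at 8-bit
power_ratio = thor["watts"] / orin["watts"]  # ~2.2x peak power
perf_per_usd = {
    "orin": orin["tops8"] / orin["usd"],
    "thor": thor["tops8"] / thor["usd"],
}

print(f"8-bit speedup: {speedup:.1f}x at {power_ratio:.1f}x peak power")
print(f"ops per dollar: orin {perf_per_usd['orin']:.3f}, thor {perf_per_usd['thor']:.3f}")
```

The takeaway matches the editorial verdict: even before FP4 enters the picture, the Thor delivers more compute per watt and per dollar, but only workloads that need that compute justify the higher absolute price and power budget.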
AGX Thor vs NVIDIA DGX Spark (approx. $3,000+): DGX Spark for pure LLM work
The DGX Spark uses the same Blackwell architecture but is positioned as a desktop AI development machine rather than a robotics computer. It delivers around 1000 TFLOPS of FP4 compute, roughly half the Thor's 2070 TFLOPS rating, so AI models for robots will generally run faster on the Thor. If the goal is running LLMs and cloud-to-edge AI workflows, the DGX Spark is purpose-built. If the goal is physical AI and robotics, the Thor is the correct choice.
AGX Thor vs Qualcomm Robotics RB3 Gen 2: Thor for generalist robots
Qualcomm's robotics platforms offer competitive CPU performance and very low power draw, making them well-suited for lightweight embedded systems. They lack NVIDIA's CUDA ecosystem, TensorRT pipeline, MIG support, and the Isaac GR00T foundation model stack. For teams building generalist robots that use VLA models and generative reasoning, the AGX Thor currently has no direct competitor.

Who Should Buy This?

  • Humanoid robot teams: The only Jetson capable of running full humanoid stacks with concurrent locomotion, manipulation, perception, and language reasoning. Used by Boston Dynamics, Figure, Franka, and NEURA Robotics in production.
  • Surgical and medical robotics: LEM Surgical and XRlabs are both deploying Thor-powered systems. The 4x 25 GbE ports and Holoscan Sensor Bridge support the high-speed, low-latency sensor feeds required by surgical systems.
  • Advanced industrial AI: Companies deploying edge AI for manufacturing, logistics, and infrastructure inspection where generative reasoning and multi-sensor fusion go beyond what the AGX Orin can sustain at production speed.
  • Universities and AI research labs: Research groups working on physical AI, VLA models, and generalist robotics will find the Thor the reference platform for Isaac GR00T and the most capable Jetson available for frontier robotics research.

Practical Notes Before You Buy

  • 1TB NVMe SSD included: Unlike the AGX Orin, you do not need to purchase storage separately. First boot is ready out of the box.
  • 140W power supply included: No separate PSU purchase required for standard operation.
  • Ubuntu 24.04 preloaded: JetPack 7 ships on Ubuntu 24.04 LTS, with updated package versions that reduce source-compilation work.
  • Isaac GR00T works out of the box: Reviewers confirm GR00T N1.5 and N1.6 workflows run correctly immediately after setup, without custom configuration.
  • No PCIe expansion slot: Unlike the AGX Orin Developer Kit, the AGX Thor does not include a PCIe expansion slot under the magnetic cover. Plan your I/O needs accordingly.
  • Container workflows are Docker-first: Non-Docker runtimes (Kubernetes, Podman, systemd) require extra configuration work. NVIDIA's Jetson-containers project is the primary supported path.
  • JetPack 6 pipelines need rebuilding: Teams migrating from JetPack 6 (Ubuntu 22.04) will need to rebuild and validate container configurations for JetPack 7 (Ubuntu 24.04).
  • Higher power band than AGX Orin: The 40W to 130W TDP range sits above the Orin's 15W to 60W. Consider the thermal and battery implications for mobile deployments.

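
For battery-powered platforms, that wider power band translates directly into battery sizing. A quick sketch with a hypothetical 500 Wh pack, counting compute draw only (real robots also feed motors and sensors, so actual runtimes will be shorter):

```python
# Hypothetical 500 Wh battery; compute draw only. Actuators and
# sensors are excluded, so these are optimistic upper bounds.
BATTERY_WH = 500

def runtime_hours(watts: float) -> float:
    """Hours of runtime at a constant power draw."""
    return BATTERY_WH / watts

for label, watts in [("Thor @ 40W", 40), ("Thor @ 130W", 130),
                     ("Orin @ 15W", 15), ("Orin @ 60W", 60)]:
    print(f"{label}: {runtime_hours(watts):.1f} h")
```

Running the Thor at its full 130W more than halves compute runtime versus an Orin at 60W on the same pack, which is why the configurable power modes matter for mobile deployments.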
Our Verdict

The NVIDIA Jetson AGX Thor Developer Kit is the right platform for teams that have outgrown what the AGX Orin can offer, specifically because their AI workloads now involve generative reasoning, VLA models, multi-model concurrent pipelines on a single device, or humanoid robot development. The Blackwell GPU, 128GB unified memory pool, MIG support, and Isaac GR00T integration collectively define a new category of robotic compute. Boston Dynamics, Figure, Franka, Amazon Robotics, and LEM Surgical are building production systems on it. That adoption record reflects a capable platform, not just a specification announcement. The price of $3,499 is significant. The documentation for non-Docker container environments is a known rough edge. But for teams whose robotics applications genuinely require this level of compute, there is currently no comparable alternative at this price point.

★★★★★
Editor's Pick: Best Physical AI Platform for Advanced Robotics (2025)

Frequently Asked Questions

Does the developer kit include storage?

The kit ships with a 1TB NVMe SSD preloaded with Ubuntu 24.04 LTS via JetPack 7. This is different from the AGX Orin Developer Kit, which ships without onboard storage beyond eMMC. You can get started with the Thor immediately after unboxing without purchasing additional storage.

What is Multi-Instance GPU (MIG), and why does it matter for robotics?

MIG allows the Blackwell GPU to be divided into up to seven isolated partitions, each of which can run a separate AI model independently. In a robotic system, this means you can assign one partition to locomotion control, another to grasp planning, another to perception, and another to a VLA policy, all running simultaneously on the same chip. Previous Jetson platforms required context-switching between models, which added latency. MIG eliminates that overhead for multi-model robotic pipelines.

Can I run Isaac GR00T on the AGX Thor out of the box?

Yes. The AGX Thor is the reference compute platform for NVIDIA Isaac GR00T. JetPack 7 includes the necessary CUDA and TensorRT stack, and GR00T N1.5 and N1.6 models are available through the NVIDIA Isaac GR00T developer portal. Multiple reviewers confirmed that GR00T workflows run correctly out of the box. Teams building humanoid or generalist robots can begin using GR00T workflows directly on the developer kit without custom setup beyond the standard NVIDIA software installation.

When is the AGX Orin 64GB the better choice?

For standard computer vision pipelines, multi-sensor fusion, high-frame-rate object detection, or LLM inference with models up to 13B parameters, the AGX Orin 64GB at $1,999 remains a capable and cost-effective option. The Thor's primary advantages over the Orin come from MIG support, FP4 precision for large generative models, Isaac GR00T compatibility, and the 128GB unified memory pool. If your workload does not require any of these, the AGX Orin 64GB is a more practical starting point at $1,500 less.

Can institutions in India buy the AGX Thor through ThinkRobotics?

Yes. ThinkRobotics is an authorized NVIDIA distributor in India and supports enterprise, research institution, and university purchases of the AGX Thor. For organizations evaluating the platform for humanoid robotics programs, advanced AI research, or surgical robotics development, the team can assist with volume pricing, deployment guidance, and integration advice. Contact ThinkRobotics directly for bulk or institutional pricing on the AGX Thor Developer Kit.

Shop the Jetson AGX Thor in India

Authorized NVIDIA distributor. Manufacturer warranty, local support, and competitive pricing on the most powerful Jetson ever built.
