The World’s Smallest AI Supercomputer for Embedded and Edge Systems. Now with Cloud-Native Support.
NVIDIA® Jetson Xavier™ NX brings supercomputer performance to the edge in a small form factor system-on-module (SOM). Up to 21 TOPS of accelerated computing delivers the horsepower to run modern neural networks in parallel and process data from multiple high-resolution sensors—a requirement for full AI systems.
Jetson Xavier NX now features cloud-native support that lets developers build and deploy high-quality, software-defined features on embedded and edge devices. Pre-trained AI models from NVIDIA NGC and the NVIDIA Transfer Learning Toolkit give you a faster path to trained and optimized AI networks, while containerized deployment to Jetson devices allows flexible and seamless updates. Jetson Xavier NX accelerates the NVIDIA software stack with more than 10X the performance of its widely adopted predecessor, Jetson TX2.
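As a concrete example, containerized deployment to the module can be as simple as pulling an image from NGC and starting it with the NVIDIA container runtime. The sketch below assumes Docker and the NVIDIA runtime from JetPack are already installed on the device; the image tag and the command run inside the container are illustrative placeholders, not recommendations.

```python
# Minimal sketch: pulling an NGC container and starting it on a Jetson device
# with the NVIDIA container runtime. Assumes Docker and NVIDIA runtime support
# are installed (they ship with JetPack). The image tag below is illustrative;
# replace it with the image you actually deploy.
import subprocess

IMAGE = "nvcr.io/nvidia/l4t-base:r32.5.0"  # example L4T base image tag (assumption)


def deploy(image: str) -> None:
    """Pull the container image and start it with GPU access."""
    subprocess.run(["docker", "pull", image], check=True)
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--runtime", "nvidia",   # expose the integrated GPU to the container
            "--network", "host",
            image,
            "python3", "-c", "print('container up')",  # placeholder workload
        ],
        check=True,
    )


if __name__ == "__main__":
    deploy(IMAGE)
```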
Smaller than a credit card (70 x 45 mm), the energy-efficient Jetson Xavier NX module delivers server-class performance: up to 21 TOPS at 15 W or 14 TOPS at 10 W. It can run multiple modern neural networks in parallel and process data from multiple high-resolution sensors simultaneously. Available cloud-native support makes it easier than ever to develop and deploy AI software to edge devices.
Jetson Xavier NX is production-ready and supports all popular AI frameworks. This opens the door for embedded edge-computing devices that demand increased performance to support AI workloads but are constrained by size, weight, power budget, or cost.
Features
Size:
XAVIER PERFORMANCE. NANO SIZE.
At 70 mm x 45 mm, Jetson Xavier NX packs the power of an NVIDIA Xavier SoC into a module the size of a Jetson Nano. This small module combines exceptional performance and power advantages with a rich set of I/Os—from high-speed CSI and PCIe to low-speed I2Cs and GPIOs. Take advantage of the small form factor, sensor-rich interfaces, and big performance to bring new capability to all your embedded AI and edge systems.
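For instance, the low-speed header I/O can be driven directly from Python with the Jetson.GPIO library that JetPack bundles. The pin number below is only an example; confirm it against your carrier board's pinout before wiring anything.

```python
# Minimal sketch: toggling a header GPIO from Python using the Jetson.GPIO
# library bundled with JetPack. The pin number is an example only.
import time

import Jetson.GPIO as GPIO

OUTPUT_PIN = 12  # BOARD (physical header) numbering; verify against your carrier board

GPIO.setmode(GPIO.BOARD)                          # use physical header pin numbers
GPIO.setup(OUTPUT_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    for _ in range(5):                            # blink an LED (or scope the pin) five times
        GPIO.output(OUTPUT_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(OUTPUT_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                                # release the pin on exit
```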
Performance:
POWERFUL 21 TOPS AI PERFORMANCE
Jetson Xavier NX delivers up to 21 TOPS, making it ideal for high-performance compute and AI in embedded and edge systems. You get the performance of 384 NVIDIA CUDA® cores, 48 Tensor Cores, a 6-core NVIDIA Carmel ARM CPU, and two NVIDIA Deep Learning Accelerator (NVDLA) engines. Combined with over 51 GB/s of memory bandwidth and hardware video encode and decode, these features make Jetson Xavier NX the platform of choice to run multiple modern neural networks in parallel and process high-resolution data from multiple sensors simultaneously.
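As an illustration of running a modern network on the module, the sketch below performs an FP16 forward pass of a standard classifier on the GPU. It assumes a Jetson-compatible, CUDA-enabled PyTorch/torchvision build is installed (NVIDIA publishes wheels for JetPack); production deployments would typically export the model to TensorRT to make fuller use of the Tensor Cores and NVDLA engines.

```python
# Minimal sketch: an FP16 forward pass on the module's integrated Volta GPU.
# Assumes a CUDA-enabled PyTorch/torchvision build for Jetson is installed.
import torch
import torchvision.models as models

device = torch.device("cuda")  # the integrated GPU appears as a CUDA device

# Standard ResNet-18, randomly initialized here to keep the example
# self-contained; load trained weights for real inference.
model = models.resnet18().to(device).half().eval()

# Dummy 224x224 RGB input in half precision, which the Tensor Cores accelerate.
dummy = torch.randn(1, 3, 224, 224, dtype=torch.float16, device=device)

with torch.no_grad():
    logits = model(dummy)

print("output shape:", tuple(logits.shape))  # expected: (1, 1000)
```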
Power:
INCREDIBLE POWER EFFICIENCY
Jetson Xavier NX supports multiple power modes, including low-power modes for battery-operated systems, and delivers up to 14 TOPS for AI applications in as little as 10 W. This leaves more of your power budget for sensors and peripherals, while still letting you use the entire NVIDIA software stack. You now have the performance to run all modern AI networks and frameworks with accelerated libraries for deep learning, computer vision, computer graphics, multimedia, and more.
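Power modes are typically selected with the nvpmodel tool that ships with JetPack. The sketch below simply wraps that tool from Python; mode IDs and their wattage/core configurations vary between JetPack releases, so treat the commented mode ID as a placeholder and check the output of nvpmodel -q on your own system first.

```python
# Minimal sketch: inspecting and switching Jetson power modes by calling the
# nvpmodel tool installed by JetPack. Mode IDs vary between releases; the ID
# in the commented call is a placeholder, not a recommendation.
import subprocess


def current_mode() -> str:
    """Return the output of `nvpmodel -q`, which reports the active power mode."""
    result = subprocess.run(
        ["nvpmodel", "-q"], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()


def set_mode(mode_id: int) -> None:
    """Switch power modes; requires root (run the script with sudo)."""
    subprocess.run(["nvpmodel", "-m", str(mode_id)], check=True)


if __name__ == "__main__":
    print(current_mode())
    # set_mode(0)  # example: uncomment once you've confirmed the mode ID on your system
```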
{"id":6618960592982,"title":"Nvidia Jetson Xavier NX SOM","handle":"nvidia-jetson-xavier-nx-som","description":"\u003cp data-mce-fragment=\"1\"\u003e\u003cstrong data-mce-fragment=\"1\"\u003eCOMPACT, POWERFUL PERFORMANCE AT THE EDGE\u003c\/strong\u003e\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003e\u003cspan data-mce-fragment=\"1\"\u003eThe World’s Smallest AI Supercomputer for Embedded and Edge Systems. Now with Cloud-Native Support.\u003c\/span\u003e\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eNVIDIA\u003csup data-mce-fragment=\"1\"\u003e®\u003c\/sup\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eJetson Xavier\u003csup data-mce-fragment=\"1\"\u003e™\u003c\/sup\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eNX brings supercomputer performance to the edge in a small form factor system-on-module (SOM). Up to 21 TOPS of accelerated computing delivers the horsepower to run modern neural networks in parallel and process data from multiple high-resolution sensors—a requirement for full AI systems.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eJetson Xavier NX now features cloud-native support that lets developers build and deploy high-quality, software-defined features on embedded and edge devices. Pre-trained AI models from\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003e\u003ca data-mce-fragment=\"1\" href=\"https:\/\/www.nvidia.com\/en-in\/gpu-cloud\/\" data-mce-href=\"https:\/\/www.nvidia.com\/en-in\/gpu-cloud\/\"\u003eNVIDIA NGC\u003c\/a\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eand the\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003e\u003ca data-mce-fragment=\"1\" href=\"https:\/\/developer.nvidia.com\/transfer-learning-toolkit\" target=\"_blank\" data-mce-href=\"https:\/\/developer.nvidia.com\/transfer-learning-toolkit\"\u003eNVIDIA Transfer Learning Toolkit\u003c\/a\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003egive you a faster path to trained and optimized AI networks, while containerized deployment to Jetson devices allows flexible and seamless updates. Jetson Xavier NX accelerates the NVIDIA software stack with more than 10X the performance of its widely adopted predecessor, Jetson TX2.\u003c\/p\u003e\n\u003cp\u003eSmaller than a credit card (70x45 mm), the energy-efficient Jetson Xavier NX module delivers server-class performance—21 TOPS (at 15 W) or up to 14 TOPS (at 10 W). It can run multiple modern neural networks in parallel and process data from multiple high-resolution sensors—a requirement for full AI systems. Available cloud-native support makes it easier than ever to develop and deploy AI software to edge devices.\u003c\/p\u003e\n\u003cp\u003eJetson Xavier NX is production-ready and supports all popular AI frameworks. This opens the door for embedded edge-computing devices that demand increased performance to support AI workloads but are constrained by size, weight, power budget, or cost.\u003c\/p\u003e\n\u003ch5 data-mce-fragment=\"1\"\u003eFeatures\u003c\/h5\u003e\n\u003cp data-mce-fragment=\"1\"\u003eSize:\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eXAVIER PERFORMANCE. 
NANO SIZE.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eAt 70 mm x 45 mm, Jetson Xavier NX packs the power of an NVIDIA Xavier SoC into a module the size of a\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003e\u003ca data-mce-fragment=\"1\" href=\"https:\/\/www.nvidia.com\/en-in\/autonomous-machines\/embedded-systems\/jetson-nano\/\" data-mce-href=\"https:\/\/www.nvidia.com\/en-in\/autonomous-machines\/embedded-systems\/jetson-nano\/\"\u003eJetson Nano\u003c\/a\u003e. This small module combines exceptional performance and power advantages with a rich set of IOs—from high-speed CSI and PCIe to low-speed I2Cs and GPIOs. Take advantage of the small form factor, sensor-rich interfaces, and big performance to bring new capability to all your embedded AI and edge systems.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003ePerformance: \u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003ePOWERFUL 21 TOPS AI PERFORMANCE\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eJetson Xavier NX delivers up to 21 TOPS, making it ideal for high-performance compute and AI in embedded and edge systems. You get the performance of 384 NVIDIA CUDA\u003csup data-mce-fragment=\"1\"\u003e®\u003c\/sup\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eCores, 48 Tensor Cores, 6 Carmel ARM CPUs, and two NVIDIA Deep Learning Accelerators (NVDLA) engines. Combined with over 51GB\/s of memory bandwidth, video encoded, and decode, these features make Jetson Xavier NX the platform of choice to run multiple modern neural networks in parallel and process high-resolution data from multiple sensors simultaneously.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003ePower:\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eINCREDIBLE POWER EFFICIENCY\u003c\/p\u003e\n\u003cdiv data-mce-fragment=\"1\" class=\"description color-black body-text\"\u003e\n\u003cp data-mce-fragment=\"1\"\u003eJetson Xavier NX supports multiple power modes, including low-power modes for battery-operated systems, and delivers up to 14 TOPs for AI applications in as little as 10 W. This leaves more of your power budget for sensors and peripherals, while still letting you use the entire NVIDIA software stack. 
You now have the performance to run all modern AI networks and frameworks with accelerated libraries for deep learning, computer vision, computer graphics, multimedia, and more.\u003c\/p\u003e\n\u003ch5 data-mce-fragment=\"1\"\u003eSpecifications\u003c\/h5\u003e\n\u003ctable cellspacing=\"0\" cellpadding=\"0\" border=\"1\" align=\"center\"\u003e\n\u003ctbody\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eAI Performance\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e21 TOPS\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eGPU\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e384-core NVIDIA Volta\u003csup\u003e™\u003c\/sup\u003e\u003cspan\u003e \u003c\/span\u003eGPU with 48 Tensor Cores\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eCPU\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e6-core NVIDIA Carmel ARM\u003csup\u003e®\u003c\/sup\u003ev8.2 64-bit CPU\u003cbr\u003e6MB L2 + 4MB L3\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eMemory\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e8 GB 128-bit LPDDR4x\u003cbr\u003e51.2GB\/s\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eStorage\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e16 GB eMMC 5.1\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003ePower\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e10 W|15 W\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003ePCIe\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e1 x1 (PCIe Gen3) + 1 x4 (PCIe Gen4), total 144 GT\/s*\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eCSI Camera\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003eUp to 6 cameras (24 via virtual channels)\u003cbr\u003e14 lanes (3x4 or 6x2) MIPI CSI-2\u003cbr\u003eD-PHY 1.2 (up to 30 Gbps)\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eVideo Encode\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2x 4Kp30 | 6x 1080p60 | 14x 1080p30 (H.265 \u0026amp; H.264)\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eVideo Decode\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2x 4Kp60 | 4x 4Kp30 | 12x 1080p60 | 32x 1080p30 (H.265)\u003cbr\u003e2x 4Kp30 | 6x 1080p60 | 16x 1080p30 (H.264)\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eDisplay\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2 multi-mode DP 1.4\/eDP 1.4\/HDMI 2.0\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eDL Accelerator\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2x NVDLA Engines\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eVision Accelerator\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e7-Way VLIW Vision Processor\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eNetworking\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e10\/100\/1000 BASE-T Ethernet\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eMechanical\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" 
colspan=\"2\"\u003e69.6 mm x 45 mm\u003cbr\u003e260-pin SO-DIMM connector\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003c\/tbody\u003e\n\u003c\/table\u003e\n\u003c\/div\u003e","published_at":"2022-12-06T11:12:01+05:30","created_at":"2021-08-20T17:01:32+05:30","vendor":"ThinkRobotics","type":"Single Board Computers","tags":["AI","AI GPU","GPU","Jetson GPU","Jetson module","Jetson Nano Case","jetson xavier","JT-SOM","NVDA","nvidia jetson","NVIDIA Jetson Xavier Case","NVIDIA-COM","NX Module","NX SOM","SBC1","Xavier NX"],"price":4899999,"price_min":4899999,"price_max":5899999,"available":true,"price_varies":true,"compare_at_price":5999999,"compare_at_price_min":5999999,"compare_at_price_max":5999999,"compare_at_price_varies":false,"variants":[{"id":39799560142934,"title":"8GB","option1":"8GB","option2":null,"option3":null,"sku":"SBC1102-MOD8","requires_shipping":true,"taxable":true,"featured_image":null,"available":true,"name":"Nvidia Jetson Xavier NX SOM - 8GB","public_title":"8GB","options":["8GB"],"price":4899999,"weight":100,"compare_at_price":5999999,"inventory_management":"shopify","barcode":"39799560142934","requires_selling_plan":false,"selling_plan_allocations":[]},{"id":39799560175702,"title":"16GB","option1":"16GB","option2":null,"option3":null,"sku":"SBC1102-MOD16","requires_shipping":true,"taxable":true,"featured_image":null,"available":true,"name":"Nvidia Jetson Xavier NX SOM - 16GB","public_title":"16GB","options":["16GB"],"price":5899999,"weight":100,"compare_at_price":5999999,"inventory_management":"shopify","barcode":"39799560175702","requires_selling_plan":false,"selling_plan_allocations":[]}],"images":["\/\/thinkrobotics.com\/cdn\/shop\/products\/Image1.png?v=1629459334"],"featured_image":"\/\/thinkrobotics.com\/cdn\/shop\/products\/Image1.png?v=1629459334","options":["Memory Size"],"media":[{"alt":"NVIDIA Jetson Xavier NX SOM Online","id":21199597699158,"position":1,"preview_image":{"aspect_ratio":1.0,"height":1000,"width":1000,"src":"\/\/thinkrobotics.com\/cdn\/shop\/products\/Image1.png?v=1629459334"},"aspect_ratio":1.0,"height":1000,"media_type":"image","src":"\/\/thinkrobotics.com\/cdn\/shop\/products\/Image1.png?v=1629459334","width":1000}],"requires_selling_plan":false,"selling_plan_groups":[],"content":"\u003cp data-mce-fragment=\"1\"\u003e\u003cstrong data-mce-fragment=\"1\"\u003eCOMPACT, POWERFUL PERFORMANCE AT THE EDGE\u003c\/strong\u003e\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003e\u003cspan data-mce-fragment=\"1\"\u003eThe World’s Smallest AI Supercomputer for Embedded and Edge Systems. Now with Cloud-Native Support.\u003c\/span\u003e\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eNVIDIA\u003csup data-mce-fragment=\"1\"\u003e®\u003c\/sup\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eJetson Xavier\u003csup data-mce-fragment=\"1\"\u003e™\u003c\/sup\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eNX brings supercomputer performance to the edge in a small form factor system-on-module (SOM). Up to 21 TOPS of accelerated computing delivers the horsepower to run modern neural networks in parallel and process data from multiple high-resolution sensors—a requirement for full AI systems.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eJetson Xavier NX now features cloud-native support that lets developers build and deploy high-quality, software-defined features on embedded and edge devices. 
Pre-trained AI models from\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003e\u003ca data-mce-fragment=\"1\" href=\"https:\/\/www.nvidia.com\/en-in\/gpu-cloud\/\" data-mce-href=\"https:\/\/www.nvidia.com\/en-in\/gpu-cloud\/\"\u003eNVIDIA NGC\u003c\/a\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eand the\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003e\u003ca data-mce-fragment=\"1\" href=\"https:\/\/developer.nvidia.com\/transfer-learning-toolkit\" target=\"_blank\" data-mce-href=\"https:\/\/developer.nvidia.com\/transfer-learning-toolkit\"\u003eNVIDIA Transfer Learning Toolkit\u003c\/a\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003egive you a faster path to trained and optimized AI networks, while containerized deployment to Jetson devices allows flexible and seamless updates. Jetson Xavier NX accelerates the NVIDIA software stack with more than 10X the performance of its widely adopted predecessor, Jetson TX2.\u003c\/p\u003e\n\u003cp\u003eSmaller than a credit card (70x45 mm), the energy-efficient Jetson Xavier NX module delivers server-class performance—21 TOPS (at 15 W) or up to 14 TOPS (at 10 W). It can run multiple modern neural networks in parallel and process data from multiple high-resolution sensors—a requirement for full AI systems. Available cloud-native support makes it easier than ever to develop and deploy AI software to edge devices.\u003c\/p\u003e\n\u003cp\u003eJetson Xavier NX is production-ready and supports all popular AI frameworks. This opens the door for embedded edge-computing devices that demand increased performance to support AI workloads but are constrained by size, weight, power budget, or cost.\u003c\/p\u003e\n\u003ch5 data-mce-fragment=\"1\"\u003eFeatures\u003c\/h5\u003e\n\u003cp data-mce-fragment=\"1\"\u003eSize:\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eXAVIER PERFORMANCE. NANO SIZE.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eAt 70 mm x 45 mm, Jetson Xavier NX packs the power of an NVIDIA Xavier SoC into a module the size of a\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003e\u003ca data-mce-fragment=\"1\" href=\"https:\/\/www.nvidia.com\/en-in\/autonomous-machines\/embedded-systems\/jetson-nano\/\" data-mce-href=\"https:\/\/www.nvidia.com\/en-in\/autonomous-machines\/embedded-systems\/jetson-nano\/\"\u003eJetson Nano\u003c\/a\u003e. This small module combines exceptional performance and power advantages with a rich set of IOs—from high-speed CSI and PCIe to low-speed I2Cs and GPIOs. Take advantage of the small form factor, sensor-rich interfaces, and big performance to bring new capability to all your embedded AI and edge systems.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003ePerformance: \u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003ePOWERFUL 21 TOPS AI PERFORMANCE\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eJetson Xavier NX delivers up to 21 TOPS, making it ideal for high-performance compute and AI in embedded and edge systems. You get the performance of 384 NVIDIA CUDA\u003csup data-mce-fragment=\"1\"\u003e®\u003c\/sup\u003e\u003cspan data-mce-fragment=\"1\"\u003e \u003c\/span\u003eCores, 48 Tensor Cores, 6 Carmel ARM CPUs, and two NVIDIA Deep Learning Accelerators (NVDLA) engines. 
Combined with over 51GB\/s of memory bandwidth, video encoded, and decode, these features make Jetson Xavier NX the platform of choice to run multiple modern neural networks in parallel and process high-resolution data from multiple sensors simultaneously.\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003ePower:\u003c\/p\u003e\n\u003cp data-mce-fragment=\"1\"\u003eINCREDIBLE POWER EFFICIENCY\u003c\/p\u003e\n\u003cdiv data-mce-fragment=\"1\" class=\"description color-black body-text\"\u003e\n\u003cp data-mce-fragment=\"1\"\u003eJetson Xavier NX supports multiple power modes, including low-power modes for battery-operated systems, and delivers up to 14 TOPs for AI applications in as little as 10 W. This leaves more of your power budget for sensors and peripherals, while still letting you use the entire NVIDIA software stack. You now have the performance to run all modern AI networks and frameworks with accelerated libraries for deep learning, computer vision, computer graphics, multimedia, and more.\u003c\/p\u003e\n\u003ch5 data-mce-fragment=\"1\"\u003eSpecifications\u003c\/h5\u003e\n\u003ctable cellspacing=\"0\" cellpadding=\"0\" border=\"1\" align=\"center\"\u003e\n\u003ctbody\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eAI Performance\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e21 TOPS\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eGPU\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e384-core NVIDIA Volta\u003csup\u003e™\u003c\/sup\u003e\u003cspan\u003e \u003c\/span\u003eGPU with 48 Tensor Cores\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eCPU\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e6-core NVIDIA Carmel ARM\u003csup\u003e®\u003c\/sup\u003ev8.2 64-bit CPU\u003cbr\u003e6MB L2 + 4MB L3\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eMemory\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e8 GB 128-bit LPDDR4x\u003cbr\u003e51.2GB\/s\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eStorage\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e16 GB eMMC 5.1\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003ePower\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e10 W|15 W\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003ePCIe\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e1 x1 (PCIe Gen3) + 1 x4 (PCIe Gen4), total 144 GT\/s*\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eCSI Camera\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003eUp to 6 cameras (24 via virtual channels)\u003cbr\u003e14 lanes (3x4 or 6x2) MIPI CSI-2\u003cbr\u003eD-PHY 1.2 (up to 30 Gbps)\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eVideo Encode\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2x 4Kp30 | 6x 1080p60 | 14x 1080p30 (H.265 \u0026amp; H.264)\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eVideo Decode\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2x 4Kp60 | 4x 4Kp30 | 12x 1080p60 | 32x 1080p30 (H.265)\u003cbr\u003e2x 4Kp30 | 6x 1080p60 | 16x 1080p30 (H.264)\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd 
class=\"tableCLdata\"\u003eDisplay\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2 multi-mode DP 1.4\/eDP 1.4\/HDMI 2.0\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eDL Accelerator\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e2x NVDLA Engines\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eVision Accelerator\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e7-Way VLIW Vision Processor\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eNetworking\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e10\/100\/1000 BASE-T Ethernet\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003ctr\u003e\n\u003ctd class=\"tableCLdata\"\u003eMechanical\u003c\/td\u003e\n\u003ctd class=\"tableCRdata\" colspan=\"2\"\u003e69.6 mm x 45 mm\u003cbr\u003e260-pin SO-DIMM connector\u003c\/td\u003e\n\u003c\/tr\u003e\n\u003c\/tbody\u003e\n\u003c\/table\u003e\n\u003c\/div\u003e"}