
When Sima.ai needed to deploy their advanced AI inference accelerators in real-world edge computing environments, they faced a common challenge in the embedded systems industry: their powerful system-on-module required a production-ready carrier board that didn't exist. Off-the-shelf solutions lacked the specific interfaces, power management, and thermal characteristics their edge AI deployment demanded.
Think Robotics partnered with Sima.ai to design and manufacture custom SOM carrier boards that transformed their AI accelerator modules into complete, deployable edge computing solutions. This case study examines how custom hardware development enabled a breakthrough AI technology to reach commercial deployment.
Understanding the Edge AI Hardware Challenge
Sima.ai had developed innovative AI accelerator hardware designed specifically for edge inference: running trained neural networks locally on devices rather than in cloud data centers. Their technology promised dramatically lower latency, reduced bandwidth requirements, and greater privacy than cloud-based AI processing.
The core technology existed as a system-on-module (SOM): a compact assembly containing the AI processor, memory, and essential support circuitry. However, integrating a SOM into an actual product requires a carrier board that provides power regulation, external interfaces, thermal management, and connectivity to sensors and other system components.
Standard development boards offered by the SOM manufacturer served laboratory evaluation but lacked the features, reliability, and form factors needed for commercial deployment. Sima.ai needed a custom carrier board design that would enable their technology to function in diverse edge computing scenarios, from retail analytics systems to industrial inspection equipment.
The design challenge extended beyond simply connecting pins and routing signals. Edge computing systems must operate reliably across varied environments, manage thermal loads without active cooling in many applications, provide robust power management for battery- and solar-powered installations, and deliver predictable, real-time performance for latency-sensitive AI inference tasks.
Requirements Analysis and Architecture Planning
Think Robotics began with comprehensive requirements analysis, working closely with Sima.ai's engineering team and target customers to understand how the carrier boards would be used. This requirements phase proved critical because attempting to design generic solutions typically results in compromised products that serve no application particularly well.
Three distinct use case categories emerged. Retail and commercial applications needed compact form factors, multiple camera inputs, network connectivity, and the ability to operate in typical indoor environments. Industrial deployments required ruggedized construction, wider operating temperature ranges, industrial communication protocols, and vibration resistance. Mobile and battery-powered applications demanded aggressive power optimization and diverse connectivity options.
Rather than forcing a single carrier board to address all scenarios, Think Robotics proposed a family of application-specific designs that share common elements but are optimized for their respective use cases. This approach allowed each board to excel in its target environment while maintaining design efficiency through shared components and architectural patterns.
The embedded system architecture placed the Sima.ai SOM at the center, with carefully designed power-delivery networks ensuring clean, stable voltages across all operating conditions. High-speed interfaces connected cameras and sensors, while both wired and wireless networking provided data connectivity. Expansion connectors enabled customers to add application-specific peripherals without redesigning the board.
According to research from Stanford University's AI Lab, modular hardware architectures that separate compute modules from application-specific carrier boards reduce time-to-market by 40 to 60 percent compared to fully integrated designs while providing greater deployment flexibility.
Power Management and Thermal Design
AI accelerator hardware presents significant power management challenges. Inference workloads create highly dynamic power draw, with rapid transitions between low-power idle and high-current, compute-intensive states. The custom PCB design needed to deliver peak currents exceeding 15 amperes while maintaining voltage regulation within tight tolerances to prevent processor instability.
Think Robotics implemented sophisticated power budget management using multi-phase switching regulators that shared load across multiple power stages. This approach improved efficiency while reducing thermal stress on individual components. Careful component selection identified regulators with fast transient response characteristics essential for handling sudden load changes in embedded AI systems.
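As a rough illustration of why load sharing reduces thermal stress, the sketch below compares conduction losses for a single-phase versus a three-phase regulator delivering the same 15-ampere peak; the per-phase resistance is an assumed value for the example, not a figure from the actual design.

```python
# Illustrative comparison of conduction (I^2 * R) losses for single-phase
# versus multi-phase power delivery. The 5 milliohm per-phase resistance
# is an assumed value for illustration, not a parameter of the real board.

PEAK_CURRENT_A = 15.0      # peak load current from the AI accelerator
R_PHASE_OHMS = 0.005       # assumed effective resistance per power stage

def conduction_loss(total_current_a: float, phases: int) -> float:
    """Total I^2 * R loss when the load current is split evenly across phases."""
    per_phase_current = total_current_a / phases
    return phases * (per_phase_current ** 2) * R_PHASE_OHMS

single = conduction_loss(PEAK_CURRENT_A, phases=1)   # ~1.13 W dissipated in one stage
triple = conduction_loss(PEAK_CURRENT_A, phases=3)   # ~0.38 W spread over three stages

print(f"Single-phase loss: {single:.2f} W")
print(f"Three-phase loss:  {triple:.2f} W (spread across three stages)")
```

With these assumed numbers, splitting the same load across three phases cuts total conduction loss to a third and spreads it over more copper, which is the efficiency and thermal benefit described above.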
Battery-powered applications received additional power optimization. The boards incorporated power gating that completely disconnected unused circuitry, reducing idle power consumption to under 500 milliwatts. Intelligent power sequencing ensured components were powered up in the correct order while providing controlled shutdown during battery depletion to prevent data corruption.
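The fragment below sketches the kind of sequencing and controlled-shutdown logic described above. The rail names, settle times, and the set_rail helper are hypothetical placeholders, not Sima.ai's actual power tree.

```python
import time

# Hypothetical power rails in the order they must come up; names and
# settle times are placeholders, not the carrier board's actual power tree.
POWER_UP_ORDER = [("3V3_IO", 0.005), ("1V8_DDR", 0.010), ("0V8_CORE", 0.002)]

def set_rail(name: str, enable: bool) -> None:
    """Stand-in for driving a regulator enable pin via GPIO."""
    print(f"{'Enabling' if enable else 'Disabling'} rail {name}")

def power_up() -> None:
    # Bring rails up in order, waiting for each to settle before the next.
    for rail, settle_s in POWER_UP_ORDER:
        set_rail(rail, True)
        time.sleep(settle_s)

def controlled_shutdown() -> None:
    # Reverse order on shutdown (e.g., when battery voltage falls below a
    # threshold) so volatile state can be flushed before the core loses power.
    for rail, _ in reversed(POWER_UP_ORDER):
        set_rail(rail, False)
```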
Thermal design for high-performance edge computing proved equally demanding. The Sima.ai AI accelerator could generate over 20 watts during sustained inference, which represents substantial heat in compact embedded form factors. Think Robotics designed custom heatsink interfaces that conducted heat from the SOM to larger thermal masses on the carrier board, then to chassis-mounted heatsinks in deployed systems.
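A first-order estimate of that conduction path can be made with a simple thermal-resistance stack, as sketched below. The resistance values are illustrative assumptions rather than measured figures from the design, but they show why a low-resistance path from SOM to chassis matters.

```python
# First-order junction temperature estimate:
#   T_junction = T_ambient + P * sum(thermal resistances along the path)
# The thermal resistance values below are illustrative assumptions, not
# measurements from the Sima.ai carrier board.

POWER_W = 20.0          # sustained inference load cited above
T_AMBIENT_C = 45.0      # warm enclosure, no active cooling

# Assumed resistances (degrees C per watt) along the conduction path:
R_SOM_TO_CARRIER = 0.8
R_CARRIER_TO_CHASSIS = 1.2
R_CHASSIS_TO_AIR = 1.5

t_junction = T_AMBIENT_C + POWER_W * (
    R_SOM_TO_CARRIER + R_CARRIER_TO_CHASSIS + R_CHASSIS_TO_AIR
)
# ~115 C with these assumptions: close to typical silicon limits, which is
# why every interface in the stack must be kept low-resistance.
print(f"Estimated junction temperature: {t_junction:.0f} C")
```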
Thermal simulations validated designs before prototyping. Computational fluid dynamics modeling predicted temperature distributions under various operating conditions and cooling configurations. This analysis identified potential hot spots and guided the placement of temperature-sensitive components away from high-heat areas. The result was production-ready carrier board designs that operated reliably even when running inference continuously at full capacity.
The National Institute of Standards and Technology's research on edge computing thermal management emphasizes that proper thermal design from the outset prevents costly redesigns and field failures that plague many AI hardware deployments.
Interface Design and Connectivity
Edge AI applications require diverse sensor inputs and communication interfaces. Think Robotics implemented multiple high-speed camera interfaces supporting both MIPI CSI-2 and USB 3.0 connections. This dual interface approach accommodated the broadest range of industrial and consumer cameras. Four independent camera channels enabled multi-view applications like 3D reconstruction and 360-degree monitoring without requiring external video switches.
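As a software-side illustration of how several independent camera channels might be consumed, the sketch below opens multiple video devices with OpenCV and grabs one frame per channel. The device indices and the use of OpenCV are assumptions for the example, not the project's actual capture stack.

```python
import cv2  # OpenCV, used here only as an illustrative capture library

# Device indices for four hypothetical camera channels on the carrier board.
CAMERA_INDICES = [0, 1, 2, 3]

def grab_multiview_frames():
    """Open each camera and grab one frame per channel for multi-view processing."""
    captures = [cv2.VideoCapture(idx) for idx in CAMERA_INDICES]
    try:
        frames = []
        for idx, cap in zip(CAMERA_INDICES, captures):
            ok, frame = cap.read()
            if not ok:
                raise RuntimeError(f"Camera {idx} did not return a frame")
            frames.append(frame)
        return frames  # e.g., feed into a 3D reconstruction or 360-degree pipeline
    finally:
        for cap in captures:
            cap.release()
```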
Network connectivity included Gigabit Ethernet for reliable wired connections, WiFi 6 for wireless deployment flexibility, and optional 4G/5G cellular modems for remote installations. Having multiple networking paths proved valuable for edge AI deployment, given the significant variation in network conditions across installation sites. The boards could automatically fail over between network types to maintain connectivity.
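A minimal sketch of that failover behavior is shown below, assuming a simple priority order of Ethernet, then WiFi, then cellular. The interface names and the reachability check are simplifications; a real implementation would bind the check to each physical interface, apply hysteresis, and reconfigure routing rather than just report a preference.

```python
import socket

# Network paths in priority order; names are illustrative, not the board's
# actual interface configuration.
INTERFACE_PRIORITY = ["ethernet", "wifi", "cellular"]

def link_is_up(interface: str) -> bool:
    """Simplified reachability check: try a TCP connection to a well-known
    DNS server. A real check would bind to the named interface specifically
    and require several consecutive failures before declaring the link down."""
    try:
        with socket.create_connection(("8.8.8.8", 53), timeout=2):
            return True
    except OSError:
        return False

def select_active_path() -> str:
    # Walk the priority list and use the first path that passes the check,
    # falling back to the lowest-priority option if nothing responds.
    for interface in INTERFACE_PRIORITY:
        if link_is_up(interface):
            return interface
    return INTERFACE_PRIORITY[-1]
```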
Industrial applications received RS-485 and CAN bus interfaces for connecting to factory automation equipment and industrial sensors. GPIO expansion headers provided digital I/O for controlling external devices and reading discrete sensors. This comprehensive interface set enabled the carrier boards to integrate smoothly into existing infrastructure rather than requiring complete system redesigns.
The embedded AI system architecture prioritized flexibility. Customers deploying in retail environments might use WiFi and USB cameras, while industrial customers would rely on Ethernet and MIPI interfaces. The same hardware platform served both scenarios through different connector populations and software configurations.
Prototyping and Validation Testing
Custom hardware development matured through iterative prototyping that validated both functionality and reliability. Think Robotics produced initial prototype boards using quick-turn PCB services, enabling rapid design verification and allowing Sima.ai to begin software development while hardware was refined.
Early prototypes revealed several opportunities for improvement. Power supply noise affected high-speed signal integrity; careful PCB layout changes and additional filtering resolved the issue. Thermal hot spots appeared under sustained loads, so revised copper pours and heatsink interfaces improved thermal performance. USB 3.0 signal integrity marginally failed compliance testing, but impedance-controlled routing adjustments brought the signals within specification.
Each prototype iteration incorporated lessons from the previous version. By the fourth revision, the boards met all functional requirements and passed environmental testing protocols. This iterative refinement approach proved far more efficient than attempting a perfect design from the outset, as practical validation revealed issues that simulations and analyses might miss.
Environmental testing subjected boards to temperature cycling, humidity exposure, vibration, and shock loads representing real deployment conditions. Industrial-grade designs underwent more stringent testing than commercial boards, including validation of operation from -40 to +85 degrees Celsius, at 95 percent relative humidity, and under mechanical stresses representative of vehicle mounting and factory automation equipment.
Think Robotics' prototyping services significantly accelerated the development timeline. Having in-house capabilities for rapid iteration meant design changes could be validated within days rather than weeks.
Manufacturing Transition and Supply Chain
Transitioning from prototype to volume production required establishing reliable manufacturing processes and supply chains. Think Robotics partnered with contract manufacturers experienced in embedded computing assembly to ensure consistent quality at production scale.
Component selection emphasized long-term availability. Edge computing products often remain in production for years, requiring replacement parts throughout their lifecycle. Think Robotics specified components with broad market adoption and multiple sourcing options rather than exotic parts that might become obsolete quickly.
The decision to build around Sima.ai's existing SOM rather than design a fully custom board was made early in the project; it reduced development risk and shortened time-to-market. The carrier board approach allowed rapid deployment while maintaining future flexibility for fully custom designs if production volumes justified the additional engineering investment.
Quality testing for production boards included automated functional tests that validated all interfaces and checked electrical parameters. Each board underwent burn-in testing, which involves extended operation at elevated temperatures, to eliminate infant mortality failures before customer shipment. This rigorous quality process ensured reliable hardware for demanding edge AI applications.
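The outline below shows the general shape of such an automated functional test and burn-in pass. The individual checks, durations, and pass criteria are hypothetical stand-ins for the real fixtures and limits.

```python
import time

# Hypothetical per-board checks; each stands in for a real fixture measurement.
def check_power_rails() -> bool:
    return True  # placeholder: would measure rail voltages against limits

def check_ethernet_link() -> bool:
    return True  # placeholder: would verify link-up and loopback traffic

def check_camera_interfaces() -> bool:
    return True  # placeholder: would capture test frames on every channel

FUNCTIONAL_TESTS = {
    "power_rails": check_power_rails,
    "ethernet": check_ethernet_link,
    "cameras": check_camera_interfaces,
}

def run_functional_test(serial_number: str) -> bool:
    """Run every check once and record pass/fail for the board."""
    results = {name: test() for name, test in FUNCTIONAL_TESTS.items()}
    passed = all(results.values())
    print(f"Board {serial_number}: {'PASS' if passed else 'FAIL'} {results}")
    return passed

def burn_in(serial_number: str, hours: float = 48, interval_s: int = 600) -> bool:
    # Repeat the functional test at intervals during extended elevated-temperature
    # operation; any single failure rejects the board before shipment.
    deadline = time.monotonic() + hours * 3600
    while time.monotonic() < deadline:
        if not run_functional_test(serial_number):
            return False
        time.sleep(interval_s)
    return True
```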
According to U.S. Department of Commerce data on electronics manufacturing, proper burn-in testing reduces field failure rates by 60 to 80 percent for complex embedded systems, making it essential for mission-critical deployments.
Deployment Results and Edge AI Applications
The custom carrier boards enabled Sima.ai to deploy its AI accelerators across diverse applications. Retail analytics systems used the boards to power smart cameras analyzing customer behavior in stores. Industrial quality inspection systems performed real-time defect detection on manufacturing lines. Transportation applications provided AI-powered driver-assistance features for commercial vehicles.
The hardware-accelerated AI capabilities delivered significant performance improvements over CPU-based inference. Neural networks that required 200 milliseconds per frame on embedded CPUs executed in under 10 milliseconds on Sima.ai's accelerator, enabling real-time processing of video streams at 60-plus frames per second. This performance enabled applications that were previously impossible within edge computing constraints.
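The arithmetic behind that real-time claim is straightforward, as the short calculation below shows using the figures quoted above.

```python
# Frame-time budget arithmetic for the latency figures quoted above.
CPU_LATENCY_MS = 200.0          # embedded CPU inference time per frame
ACCELERATOR_LATENCY_MS = 10.0   # Sima.ai accelerator inference time per frame
STREAM_FPS = 60.0               # target video stream rate

frame_budget_ms = 1000.0 / STREAM_FPS                     # ~16.7 ms available per frame
speedup = CPU_LATENCY_MS / ACCELERATOR_LATENCY_MS         # 20x

print(f"Frame budget at {STREAM_FPS:.0f} fps: {frame_budget_ms:.1f} ms")
print(f"CPU inference:  {CPU_LATENCY_MS:.0f} ms/frame -> cannot keep up")
print(f"Accelerator:    {ACCELERATOR_LATENCY_MS:.0f} ms/frame -> fits with headroom ({speedup:.0f}x faster)")
```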
Power efficiency proved equally impressive. The optimized carrier board design enabled battery-powered devices to operate for extended periods while continuously running AI inference. Some applications achieved multi-day battery life while processing video streams, demonstrating performance that would drain batteries in hours with conventional GPU acceleration.
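As a rough sanity check of the multi-day figure, the sketch below estimates runtime from a duty-cycled power profile. Apart from the sub-500-milliwatt idle draw cited above, the battery capacity, accelerator power, and duty cycle are assumptions chosen only for illustration, not measurements from a deployment.

```python
# Back-of-the-envelope battery-life estimate. Apart from the sub-500 mW idle
# draw cited in the text, every number here is an assumption for illustration.
BATTERY_WH = 90.0     # assumed battery capacity
IDLE_W = 0.45         # idle draw with unused circuitry gated off
INFERENCE_W = 3.0     # assumed incremental draw while the accelerator is active
INFERENCE_DUTY = 0.3  # e.g., ~10 ms of inference per 33 ms frame at 30 fps

average_w = IDLE_W + INFERENCE_DUTY * INFERENCE_W   # ~1.35 W average
runtime_days = BATTERY_WH / average_w / 24          # ~2.8 days
print(f"Average draw {average_w:.2f} W -> roughly {runtime_days:.1f} days per charge")
```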
The edge inference hardware platform's flexibility enabled rapid market expansion. New applications could deploy quickly by configuring the existing carrier board design rather than requiring complete hardware redesigns. This time-to-market advantage proved critical in fast-moving AI markets.
Think Robotics continues supporting Sima.ai through ongoing hardware optimization, new carrier board variants for emerging applications, and manufacturing support as production volumes scale. Our collaboration demonstrates how custom embedded systems design enables breakthrough technologies to reach commercial deployment.
Lessons in SOM Integration and Custom Hardware
This project reinforced several principles applicable to system-on-module integration and edge AI hardware development. Match the carrier board design to actual deployment requirements rather than creating generic solutions. Application-specific optimization delivers better performance, reliability, and cost-effectiveness than a compromised one-size-fits-all approach.
Thermal and power management are critical for high-performance edge computing. Underestimating these challenges leads to unreliable systems that fail in field deployment. Proper analysis, simulation, and validation testing during development prevent costly problems later.
Iterative prototyping with real-world validation identifies issues that pure analysis misses. Plan for multiple prototype cycles rather than expecting initial designs to be perfect. Each iteration improves the design and builds confidence in production readiness.
The carrier board design services Think Robotics provided went beyond simple engineering. Understanding the target applications, deployment environments, and business constraints shaped design decisions as much as technical requirements. Successful custom hardware development requires this holistic perspective.
Conclusion
Enabling edge AI deployment required more than connecting a SOM to a circuit board. Success demanded a deep understanding of AI accelerator requirements, careful attention to power and thermal management, comprehensive interface design, rigorous validation testing, and manufacturing discipline.
Think Robotics specializes in the development of custom hardware for demanding applications. Our expertise spans embedded system architecture, high-speed interface design, power optimization, and manufacturing transition. Whether your project involves AI accelerators, industrial controllers, or specialized instrumentation, we design hardware that meets your exact requirements and transitions smoothly to production.
The Sima.ai partnership demonstrates how custom carrier board design transforms innovative core technologies into deployable commercial products. The proper hardware foundation enables breakthrough capabilities to reach customers who benefit from advanced edge computing solutions.