Virtualizer Native Execution Accelerates Software Defined Product Development for Arm-based Solutions

The automotive market is seeing the rise of new electrical/electronic (E/E) architectures driven by factors such as electrification, advanced driver assistance systems (ADAS) and autonomy, improved user experience through sophisticated in-vehicle infotainment (IVI) systems, and AI-based assistants. Millions of lines of software need to be developed and tested for function, safety, and security. The architecture trend is moving from domain to zone controllers, reducing the number of electronic control units (ECUs) to a few larger centralized compute nodes powered by high-end processors. The Arm® processor architecture, popular in high-end mobile application processors, is now firmly established in automotive MCUs and SoCs; semiconductor suppliers have moved away from proprietary architectures, especially for high-performance central compute SoCs (HPCs) and, to a lesser extent, zone controllers. Growing software complexity combined with ECU consolidation means more complex multi-layer software stacks sharing the same hardware, pushing the performance requirements for virtualization higher.

Other markets such as data centers also exhibit increasing software complexity and a rapid growth in on-chip compute capacity. This can be observed in compute servers with hundreds of on-chip (Arm) processors, AI accelerators, and advanced network processors. The Arm architecture is gaining significant traction here as well, with Arm-based general-purpose compute SoCs and servers now accessible in the open market and through cloud offerings by hyperscalers. Notable examples include Grace (NVIDIA), Graviton (AWS), Cobalt (Microsoft), and AmpereOne® / Ampere® Altra® (Ampere) CPUs. Virtualizing these devices likewise demands substantially more computational power.