Achieving “Native Parity” with Ampere® for Better, Faster Software-Defined Vehicle Development
January 2023 print issue / written by Joe Speed, Head of Edge, Ampere











As the automotive industry moves to electrification, autonomous driving, and software-defined vehicles, the computer architecture used in cars is changing to support this transformation. As part of this shift, vehicles are moving to multi-core Arm64 computers. However, most of the software powering these vehicles today is still being developed on legacy x86 architectures.

Because software development is not done on the same computer architecture that runs in the vehicle, several challenges arise. Migrating code back and forth between architectures costs time and money and creates more potential for inaccuracies.

Thus, as the vehicles of the future come closer to realization, the industry should shift its software development strategy so that development and testing also take place on Arm64 computers. This achieves “native parity” with the vehicle, making software faster and easier to develop and less error prone.

But native parity isn’t enough on its own. High-performance cloud workloads are the backbone of software-defined vehicles. So while native parity is an essential first step, the hardware getting us there must also be “cloud native”, offering the scale needed for high performance and capacity. 



Ampere Cloud Native Advantage for Native Parity

Ampere solves both of these challenges with its Cloud Native Processors, which have up to 128 cores and are used by many cloud providers and server OEMs. Utilizing the Arm ISA, these Ampere® Altra® family processors provide environmental parity with the multi-core Arm embedded systems known as ECUs (Electronic Control Units) used in vehicles, but at a much larger scale – up to 1,024 cores per server – and with better energy efficiency than x86 processors on a performance per watt basis. Using Ampere processors in conjunction with automotive ECUs to construct a complete end-to-end environment provides both native parity and a cloud native architecture.
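To make the idea of native parity concrete, here is a minimal Python sketch. It is purely illustrative, not part of any Ampere or Apex.AI tooling, and the target architecture value is an assumption for the example: it simply checks whether the build host and the target ECU report the same CPU architecture.

# Minimal illustration of a "native parity" check (hypothetical helper,
# not part of any Ampere or Apex.AI toolchain).
import platform

# Architecture reported by the build host: "aarch64" on an Ampere Altra
# server or Arm-based laptop, "x86_64" on a legacy PC.
HOST_ARCH = platform.machine()

# Architecture of the vehicle ECU being targeted (assumed for this example).
TARGET_ECU_ARCH = "aarch64"

def has_native_parity(host_arch=HOST_ARCH, target_arch=TARGET_ECU_ARCH):
    """Return True when host and target share the same instruction set."""
    return host_arch == target_arch

if __name__ == "__main__":
    if has_native_parity():
        print("Native parity: build and test directly, no emulation needed.")
    else:
        print("No parity: cross-compilation or QEMU emulation is required.")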
 





To quantify how native parity can improve test quality and speed, Ampere engineers benchmarked the performance of an example test workload on an emulated vehicle ECU with a Cortex-A72 Arm64 processor. Even merely running within a QEMU emulator on Ampere Altra Max resulted in a 2.6x speed-up versus running on x86. Furthermore, running that same continuous integration (CI) test workload without emulation, in virtual machines using QEMU with KVM virtualization and Docker OCI containers, produced a staggering 100x more CI tests per hour on Ampere Altra Max compared to running the Arm64 QEMU emulator on x86, along with a reduction in energy consumption that supports efforts to reduce the carbon footprint of vehicle software testing.
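The difference between these setups comes down to how QEMU is invoked. The following Python sketch is a simplified, hypothetical illustration, not Ampere's actual test harness; the disk image name, memory size, and core count are placeholders. It shows a CI harness launching an Arm64 virtual ECU with KVM acceleration when the host is itself Arm64, and falling back to full TCG emulation when the host is x86.

# Hypothetical sketch of how a CI harness might start an Arm64 virtual ECU.
# Image path, memory size, and core count are placeholders; a real guest
# would also need firmware or a kernel image supplied on the command line.
import platform
import subprocess

GUEST_IMAGE = "virtual-ecu.qcow2"    # placeholder disk image

def virtual_ecu_command(host_arch=platform.machine()):
    cmd = [
        "qemu-system-aarch64",       # the guest is Arm64 either way
        "-machine", "virt",
        "-m", "2048",
        "-smp", "4",
        "-nographic",
        "-drive", "file=" + GUEST_IMAGE + ",format=qcow2",
    ]
    if host_arch == "aarch64":
        # Native parity: guest instructions run directly on the host CPU
        # under KVM, so the virtual ECU runs at near-native speed.
        cmd += ["-accel", "kvm", "-cpu", "host"]
    else:
        # No parity: every Arm64 instruction is translated by QEMU's TCG,
        # which is where the large slowdown on x86 hosts comes from.
        cmd += ["-accel", "tcg", "-cpu", "cortex-a72"]
    return cmd

if __name__ == "__main__":
    subprocess.run(virtual_ecu_command(), check=True)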

 





These improvements in CI testing have large implications for developer productivity. Faster test completion means more test iterations. A more accurate test platform increases the quality of testing. Native toolchains are the most actively developed and most frequently updated tools. And eliminating cross-compilation and its inefficiencies reduces the carbon footprint.



Real-World Results with Apex.AI

A prime example of how this can be implemented is Ampere customer Apex.AI, which develops safe, certified, developer-friendly, and scalable software for automakers and whose investors include Daimler Truck, AGCO, Continental, ZF, and others. Apex.AI sees software and platform complexities increasing as the industry moves to the software-defined vehicle. Using heterogeneous platforms across workstreams creates even more challenges due to the need to manage different application domains, different RTOSes, different test strategies, and constant requirement changes. These complexities slow down the development process, delaying validation and increasing the risk of finding issues only at final integration, when it is too late to fix them on schedule. Moving to native parity with Ampere Cloud Native Processors helps address a set of these issues that have contributed to automotive software delivery quality problems and delays.

Over the past year, Apex.AI has introduced end-to-end native parity in the DevOps environment that produces their functionally safe ISO 26262 ASIL D automotive products. Their CI/CD server runs on Arm64-based processors in the cloud, their CI testing in virtualized automotive ECUs runs on Ampere® Altra® servers, and their verification testing runs on a farm of automotive Arm64 ECUs. Apex.AI’s development process supports developers using both Arm-based MacBook Pro laptops and Ampere Altra Developer Platform workstations. For each task, native parity has been established.

 






One of the benefits of achieving native parity using Ampere Altra is improved software build speed and energy efficiency. As seen earlier, Ampere's Cloud Native Processors deliver rapid compilation times and were designed for workloads that scale. In addition to compilation, building code requires linking, and thanks to mold, a tool developed by the author of LLD (the LLVM project's linker), this step can also take advantage of Ampere's 128-core Cloud Native Processors. Mold is a highly parallelized linker that is directly supported by GCC 12 and works with other development tools as well. The end result is an optimized build environment on Ampere.
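As a hedged illustration of this kind of build setup (the source file names below are placeholders, not Apex.AI's actual build system), the following Python sketch compiles translation units in parallel across the available cores and then asks GCC to link with mold via the -fuse-ld=mold option.

# Hypothetical build driver showing how mold can be selected as the linker.
# Source files and flags are placeholders for illustration only.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

SOURCES = ["main.c", "module_a.c", "module_b.c"]   # placeholder sources

def compile_one(src):
    """Compile a single translation unit and return the object file name."""
    obj = src.replace(".c", ".o")
    subprocess.run(["gcc", "-O2", "-c", src, "-o", obj], check=True)
    return obj

def build_with_mold(output="app"):
    # Spread compilation across all available cores (e.g. 128 on Altra Max).
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        objects = list(pool.map(compile_one, SOURCES))

    # GCC 12 and later accept -fuse-ld=mold, handing the link step to the
    # highly parallel mold linker so linking also scales with the core count.
    subprocess.run(["gcc", *objects, "-o", output, "-fuse-ld=mold"], check=True)

if __name__ == "__main__":
    build_with_mold()

On toolchains without the -fuse-ld=mold option, a similar effect can be had by running the build under mold's own "mold -run" wrapper.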

Just like other parts of the workstream, CI testing runs faster on Ampere processors because it eliminates the emulation required on x86 platforms. With fully automated build-to-deploy testing on ECUs and R&D vehicles, Apex.AI is now running tens of thousands of software tests in the cloud, followed by on-premises testing of virtual and physical vehicle ECUs. By running numerous virtual ECUs on Ampere Altra, Apex.AI can run many unit tests in parallel, and testing on target is much simpler.
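The general fan-out pattern looks like the hypothetical Python sketch below; the ECU count, test names, and the run_test_on_ecu helper are invented for illustration, but they show how thousands of tests can execute concurrently once enough virtual ECUs are running on a single Ampere Altra server.

# Hypothetical sketch of fanning unit tests out across virtual ECUs.
# The ECU count, test names, and run_test_on_ecu are placeholders.
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

NUM_VIRTUAL_ECUS = 32                      # e.g. many QEMU guests per server
TESTS = ["unit_test_%04d" % i for i in range(10000)]

def run_test_on_ecu(test, ecu_id):
    """Placeholder: dispatch one test to a virtual ECU and return pass/fail."""
    # A real harness would contact a test agent inside the guest here.
    return True

def run_suite():
    ecu_ids = cycle(range(NUM_VIRTUAL_ECUS))   # round-robin tests across ECUs
    with ThreadPoolExecutor(max_workers=NUM_VIRTUAL_ECUS) as pool:
        results = list(pool.map(run_test_on_ecu, TESTS, ecu_ids))
    return sum(1 for passed in results if not passed)

if __name__ == "__main__":
    print("%d failures out of %d tests" % (run_suite(), len(TESTS)))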

Besides speed, the other benefit of native parity with Ampere is improved CI testing accuracy, because Apex.AI can use a test architecture that closely resembles the vehicle ECU. QEMU is a popular open-source software tool for hardware platform emulation and virtualization. It is used by Apex.AI and others in the automotive industry to enable software testing at a speed and scale that is not possible with a farm of physical automotive ECUs. By creating a large number of virtual ECUs, QEMU allows Apex.AI to test early and test often instead of deferring such testing to one “big bang” closer to the end of the development process. Testing via emulation on x86 processors can miss issues because of the differences between the architectures. By implementing large-scale testing using virtual ECUs on Ampere Altra processors, Apex.AI has found that most software quality issues are discovered and fixed much earlier in the process.

Due to these strong advantages, Apex.AI has changed their default approach from running cross-platform to running with native parity using Ampere processors. They have also gained the benefit of enhanced ease of use. For example, running natively eliminates the complexity of needing a sysroot to cross-compile and of introducing new elements into the non-native toolchain. Adding packages takes less time, dependencies are clearer, and the automation is simpler. It also ends the need for cross-platform debugging, which is a major improvement since it is much easier to test the natively compiled binaries on both the host and the target. Setting up the environment for the GNU Debugger is also much easier, and the CI pipeline is simplified.
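For readers unfamiliar with what goes away, the hedged comparison below (the cross toolchain name and sysroot path are placeholders) contrasts a cross-compile invocation, which needs a target-prefixed compiler and a separately maintained sysroot, with the single native command that suffices once host and target share the Arm64 architecture.

# Hypothetical comparison of cross vs. native builds of the same source file.
# The toolchain name and sysroot path are placeholders.
import subprocess

SRC, OUT = "controller.c", "controller"

# Cross-compiling from an x86 host: a target-prefixed compiler plus a
# sysroot holding the target's headers and libraries must be maintained.
CROSS_CMD = [
    "aarch64-linux-gnu-gcc",
    "--sysroot=/opt/ecu-sysroot",   # placeholder sysroot location
    SRC, "-o", OUT,
]

# Building natively on an Arm64 host (Ampere Altra server or workstation):
# the system compiler and system libraries are already the right ones.
NATIVE_CMD = ["gcc", SRC, "-o", OUT]

def build(native=True):
    subprocess.run(NATIVE_CMD if native else CROSS_CMD, check=True)

if __name__ == "__main__":
    build(native=True)

The same simplification applies to debugging: the natively built binary can be run under the host's own gdb instead of a cross-gdb attached to a remote stub.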

According to Anup Pemmaiah, Apex.AI Engineering Manager, this approach has simplified development and deployment of Apex.AI software on many target vehicle platforms as part of their software-defined vehicle solution. Automotive development tools such as Apex.AI's Platform Automation Tool (PAT) play an important role in achieving their desired workflow, which Anup Patel, Apex.AI Tech Lead, says “simplifies the platform coordination and prevents ‘big bang’ late integration.”



How to Make the Shift to Native Parity

How can one start the move to native parity while also taking advantage of cloud native architectures? Start by leveraging multi-architecture Kubernetes so that the core of the CI/CD pipeline has native parity while providing time to transition away from legacy x86 software tools in the existing toolchain. Use automotive software products that already support native parity, such as Apex.OS and Apex.Middleware, which enable developers using Ampere Altra Developer Platform workstations to move from development to testing to the vehicle with native parity end-to-end. Ampere solutions are available from many server vendors, such as HPE and Gigabyte, and from cloud providers such as Azure, Google, Oracle, Tencent, and Equinix.
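As one hedged starting point (the job name and container image below are placeholders), the following sketch expresses a Kubernetes Job manifest as a Python dict and pins it to Arm64 nodes through the standard kubernetes.io/arch node label, so that CI build steps gain native parity while any remaining x86-only tools can keep running on x86 nodes in the same multi-architecture cluster.

# Hypothetical Kubernetes Job manifest, expressed as a Python dict, that pins
# a CI build step to Arm64 nodes in a multi-architecture cluster.
# The job name and container image are placeholders.
import json

arm64_ci_job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "ci-build-arm64"},          # placeholder name
    "spec": {
        "template": {
            "spec": {
                # Well-known node label: schedule only onto Arm64
                # (e.g. Ampere-based) nodes for native parity.
                "nodeSelector": {"kubernetes.io/arch": "arm64"},
                "containers": [{
                    "name": "build",
                    "image": "registry.example.com/ci/builder:latest",  # placeholder
                    "command": ["make", "-j", "all"],
                }],
                "restartPolicy": "Never",
            }
        }
    },
}

if __name__ == "__main__":
    # Print the manifest so it could be piped to "kubectl apply -f -".
    print(json.dumps(arm64_ci_job, indent=2))

The same label-based selection works for Deployments and CronJobs, which makes it possible to migrate pipeline stages to Arm64 nodes one at a time.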

Below are some examples of available Ampere-based systems for native parity: a developer platform workstation, a build server, an entire DevOps farm in a single server, and a single server that can run both MLOps (Machine Learning Operations) and simulation workloads.


 





By starting today with cloud native platforms from Ampere, we can solve the challenges around parity while simultaneously providing the scale needed for the cloud-based workloads powering software-defined vehicles. With these two advantages, the speed, quality, and efficiency of software-defined vehicle development can move dramatically past what was possible with legacy x86 platforms.



<Copyright © AEM. Unauthorized reproduction, redistribution, and use for AI training are prohibited.>

