Zendar Charts the Third Path for the Autonomous Era with AI Radar
INTERVIEW
Sunil Thomas
CBO of Zendar
Waymo is too expensive, and Tesla is too risky. Between these two industry giants shaping the future of autonomous driving, Silicon Valley startup Zendar is carving out a third path: one powered by AI radar.
Harnessing the principles of astrophysics and machine learning, Zendar’s Semantic Spectrum Radar AI achieves ten times the resolution and precision of traditional point-cloud radar systems while delivering a tenfold gain in computational efficiency, fundamentally redefining the perception architecture of autonomous vehicles.
This technology does more than enhance a vehicle’s “eyes”; it presents OEMs with a new architecture that unites cost efficiency and safety, breaking through the stagnation and “unreasonable risks” that have long constrained the autonomous driving industry.
AEM met Sunil Thomas, CBO of Zendar, at The Autonomous to learn more.
by Sang Min Han@autoelectronics.co.kr
What inspired the founding of Zendar, and what was the company’s initial vision? What limitations in the autonomous driving industry at the time motivated its creation?
Thomas Zendar was founded by individuals from the mapping and radio astronomy industries who recognized a critical stagnation in automotive radar technology. For a decade, the industry had seen only incremental improvements, failing to keep pace with the demands of autonomous driving. The founders envisioned a quantum leap in radar performance by applying advanced principles from astrophysics and machine learning, thereby addressing these limitations and fundamentally reshaping the future of autonomous vehicles.
What were the key turning points and technical breakthroughs that brought Zendar to its current AI radar technology? What was the biggest challenge you faced in the early stages?
Thomas Zendar's journey to its current AI radar technology was marked by persistent research and significant breakthroughs. Initially, the team explored Synthetic Aperture Radar (SAR) principles, but quickly realized the industry's inability to effectively utilize SAR output. This pivotal insight led them to pursue a distributed radar approach, drawing inspiration from radio astronomy, where researchers combine telescopes across continents to create a massive virtual antenna. By coherently combining multiple simple radars, Zendar achieved a remarkable 10x increase in radar resolution, a substantial leap forward.
However, the biggest challenge in the early stages was the industry's entrenched reliance on point cloud and object clustering as the primary perception output for automotive radar. Zendar's research revealed that the radar spectrum data contains vastly more useful information than the simplified point-cloud interface, which discards crucial details. This realization steered the team towards a fundamentally new approach: leveraging the full radar spectrum and advanced neural network modeling, which ultimately culminated in their innovative Semantic Spectrum Radar AI.
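To make the radio-astronomy analogy concrete: the angular resolution of a coherent aperture scales inversely with its size, roughly θ ≈ λ/D. The sketch below uses illustrative numbers, not Zendar's actual hardware, to show how coherently combining radars across a wider baseline buys roughly a 10x finer beam.

```python
import numpy as np

C = 3e8              # speed of light, m/s
F = 77e9             # automotive radar carrier frequency, Hz
WAVELENGTH = C / F   # ~3.9 mm

def angular_resolution_deg(aperture_m: float) -> float:
    """Approximate angular resolution (Rayleigh criterion): theta ~ lambda / D."""
    return np.degrees(WAVELENGTH / aperture_m)

# Illustrative apertures (assumed, not Zendar's actual module geometry):
single_chip = 0.04   # ~4 cm antenna array on one radar module
distributed = 0.40   # two modules coherently combined across a ~40 cm baseline

print(f"single radar: {angular_resolution_deg(single_chip):.2f} deg")  # ~5.6 deg
print(f"distributed : {angular_resolution_deg(distributed):.2f} deg")  # ~0.56 deg
```

A 10x wider effective aperture gives a 10x narrower beam, which is the same reason continent-spanning telescope arrays resolve details no single dish can.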
Neural network processing of radar spectrum
From Astrophysics to Autonomy:
Zendar’s Vision for a New Radar Paradigm
In your talk, you mentioned the concept of “unreasonable risk.” What specific problems in the autonomous driving industry does this refer to, and why do you think these issues are not being adequately addressed?
Thomas The concept of "unreasonable risk" in the autonomous driving industry refers to two distinct yet critical problems: the cost-prohibitive nature of the "sensor-rich" path and the inherent unreliability of the "camera-only" approach.
The sensor-rich path, while not inherently risky from a safety perspective, becomes economically unsustainable at scale due to its high cost. This severely limits the volume of vehicles deployed, thereby hindering the generation of the massive data flywheel essential for expanding the Operational Design Domain (ODD). By failing to address cost, this approach inadvertently stifles progress and scalability.
Conversely, the camera-only approach embodies "unreasonable risk" directly. Cameras are fundamentally flawed sensors, highly susceptible to environmental factors like adverse weather and sun glare. Their limited range also prevents robotaxis from reliably operating at highway speeds. This inherent unreliability, coupled with the inability to address these fundamental sensor limitations, creates an unacceptable level of risk for widespread autonomous deployment.
Between Waymo and Tesla:
Solving the Unreasonable Risk
You criticized both Waymo’s “safety-at-any-cost” approach and Tesla’s “camera-centric minimalism.” What do you see as the fundamental limitations of these two approaches, and what lessons should the industry take from them?
Thomas Zendar critically assesses both Waymo's "safety-at-any-cost" and Tesla's "camera-centric minimalism" as having fundamental limitations that impede the scalable and reliable deployment of autonomous driving.
The "safety-at-any-cost" approach, while prioritizing safety, is inherently unsustainable due to its exorbitant capital expenditure. This high cost prevents the creation of a large data flywheel, which is crucial for continuous improvement and expansion of autonomous capabilities. The key lesson here is that safety, while paramount, must be achieved within a framework that allows for economic scalability.
Tesla's "camera-only" approach, on the other hand, is fundamentally limited by the inherent flaws of camera sensors. These systems can never achieve the requisite reliability for highway speeds, as they are vulnerable to common scenarios like stopped traffic, occluded vehicles, and sudden cut-ins, in addition to environmental challenges like sun glare and adverse weather. The critical lesson for the industry is that relying solely on a single, flawed sensor type introduces unacceptable levels of risk and limits the operational domain of autonomous vehicles.
Zendar goes beyond traditional point-cloud radar by combining spectrum-level data with AI. How does this approach overcome the limitations of conventional radar? And how does your technology compare or complement existing 4D imaging radar? Do you see potential for integration or replacement?
Thomas Zendar's innovative approach of combining spectrum-level data with AI fundamentally transcends the limitations of conventional point-cloud radar. Traditional radar systems discard a wealth of crucial information by reducing complex radar signals to simplified point clouds. Zendar's semantic spectrum radar AI, however, directly processes the rich, detailed radar spectrum data, allowing for a far more comprehensive and accurate understanding of the environment. This enables superior object detection, classification, and tracking, overcoming the inherent data loss and perceptual ambiguities of conventional radar.
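To see what the point-cloud interface throws away, consider the classic processing chain: a 2D FFT turns a raw frame into a range-Doppler spectrum, and a CFAR-style threshold then keeps only the surviving peaks as points. The sketch below, on synthetic data and standard textbook processing rather than Zendar's implementation, shows how few spectrum cells survive that reduction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw radar frame: 128 fast-time samples x 64 chirps (complex baseband)
frame = rng.normal(size=(128, 64)) + 1j * rng.normal(size=(128, 64))

# Inject two targets as complex sinusoids (range bin 30/50, Doppler bin +10/-5)
n, m = np.meshgrid(np.arange(128), np.arange(64), indexing="ij")
frame += 5 * np.exp(2j * np.pi * (30 * n / 128 + 10 * m / 64))
frame += 3 * np.exp(2j * np.pi * (50 * n / 128 - 5 * m / 64))

# Range-Doppler spectrum: 2D FFT over fast time and chirps
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame), axes=1))

# Crude global threshold standing in for CFAR detection
detections = spectrum > 10 * spectrum.mean()

print(f"spectrum cells  : {spectrum.size}")     # 8192 values available
print(f"point-cloud hits: {detections.sum()}")  # only the two peaks survive
```

Everything below the threshold, including weak returns, sidelobe structure, and micro-Doppler texture, is gone before perception ever runs; a network that consumes the full spectrum can still use it.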
Comparing Zendar's technology to existing "4D imaging radar" (referring to multi-chip cascaded arrangements): while 4D imaging radar does offer improved resolution, it still falls short in two critical areas:
Insufficient Resolution: Current imaging radars lack the vertical resolution necessary to reliably detect static objects at a distance. For instance, distinguishing a parked truck from a bridge at 150 meters requires a vertical resolution of less than 1 degree, a benchmark that existing imaging radars fail to meet (see the quick calculation after this list).
Cost-Benefit Imbalance: 4D imaging radars are significantly more expensive than single-chip radars. OEMs are reluctant to adopt them for high-volume sectors because the added cost does not adequately justify the incremental benefits, hindering their widespread integration.
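A quick sanity check of the 1-degree figure: at 150 meters, a single 1-degree vertical resolution cell spans roughly 2.6 meters of height, so the returns from a parked truck and an overhead bridge deck land in the same cell and cannot be told apart.

```python
import math

range_m = 150.0   # distance quoted in the interview
beam_deg = 1.0    # vertical resolution of a typical imaging radar

# Height spanned by one vertical resolution cell at that range
cell_height_m = range_m * math.tan(math.radians(beam_deg))
print(f"{cell_height_m:.2f} m")  # ~2.62 m: truck and bridge blur into one return
```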
Zendar's technology, therefore, not only surpasses conventional radar but also addresses the key shortcomings of current 4D imaging radar. While there might be potential for integration in certain niche applications, Zendar's cost-effective and high-performance AI radar presents a compelling case for widespread adoption, potentially replacing less efficient and more expensive solutions in the long term by offering superior perception at a market-acceptable price point.
Sensor fusion is crucial for achieving full autonomy. What specific synergies can Zendar’s AI radar create when combined with cameras, LiDAR, or other sensors?
Thomas Zendar's AI radar creates powerful and specific synergies when combined with other sensors, particularly cameras, to achieve a robust and cost-effective perception system. We are actively collaborating with OEMs to fuse their camera perception output with Zendar's semantic spectrum output, recognizing the complementary strengths of each sensor.
A common sensor configuration in mid-priced cars is 1V5R (one camera, five radars). While a single camera provides accurate object positioning up to approximately 30 meters, its reliability diminishes significantly beyond this range. Zendar's approach involves fusing the camera and semantic radar output for the initial 30 meters, where camera data is strong. Beyond this range, the radar output takes precedence, actively correcting the positional errors inherent in camera-based perception. This intelligent fusion leverages the strengths of both sensors: the camera provides rich visual detail up close, while Zendar's radar ensures precise and reliable long-range perception, even in challenging conditions. Together, this combination forms a highly cost-effective and exceptionally reliable sensor suite, addressing the limitations of each individual sensor and paving the way for safer autonomous driving.
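As a rough illustration of this hand-off (a minimal sketch with a hypothetical 30 m crossover and an invented blending width, not Zendar's fusion code), a range-dependent weight can shift trust from the camera estimate to the radar estimate:

```python
import numpy as np

CROSSOVER_M = 30.0   # range where camera reliability falls off (per the interview)
SOFTNESS_M = 5.0     # hypothetical width of the blending region

def camera_weight(range_m: float) -> float:
    """Weight on the camera estimate: ~1 near the car, ~0 beyond the crossover."""
    return 1.0 / (1.0 + np.exp((range_m - CROSSOVER_M) / SOFTNESS_M))

def fuse_position(cam_xy, radar_xy, range_m):
    """Blend camera and radar position estimates according to range."""
    w = camera_weight(range_m)
    return w * np.asarray(cam_xy) + (1.0 - w) * np.asarray(radar_xy)

# Nearby object: camera dominates; distant object: radar corrects the camera
print(fuse_position((10.0, 1.0), (10.3, 1.1), range_m=10))  # mostly camera
print(fuse_position((80.0, 2.5), (75.0, 2.0), range_m=80))  # mostly radar
```

A production system would blend per-track with estimated uncertainties rather than a fixed gate, but the division of labor is the same: camera detail up close, radar geometry at range.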
You mentioned that AI-based radar perception can be 10 times more cost-efficient than camera-based systems. How could this cost advantage influence vehicle sensor architecture and OEM design strategies? From a cost, safety, and performance perspective, what choices do you expect OEMs to make?
Thomas Zendar's AI-based radar perception, being 10 times more cost-efficient than camera-based systems in terms of compute, is poised to fundamentally reshape vehicle sensor architecture and OEM design strategies. This significant cost advantage will be a primary driver in OEM decision-making, particularly as the ADAS industry transitions from rules-based to AI-based approaches.
From a cost perspective, OEMs will be compelled to prioritize solutions that enable high-volume ADAS adoption. The limited uptake of expensive L3 cars clearly demonstrates that high costs hinder data collection and overall cost reduction. Zendar's solution enables a 1V5R sensor system with an AI backbone to run on a 20 TOPS (tera operations per second) SoC, compared to the 200-500 TOPS SoCs required for current L2+ systems, a dramatic reduction in compute cost. This allows OEMs to achieve advanced capabilities like Autopilot, Navigate on Autopilot (NoA), and ultimately L3 without incurring additional hardware costs.
From a safety perspective, the shift to AI-based systems necessitates massive data volumes and continuous feedback loops, which can only be generated by a large fleet of vehicles. The cost-effectiveness of Zendar's solution directly facilitates this, enabling wider deployment and thus more data for safer, more robust AI models. OEMs will choose solutions that allow them to build a scalable and continuously improving safety framework.
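For a feel of where the compute headroom comes from, a back-of-envelope comparison of raw input volume is instructive. All figures below are illustrative assumptions, not Zendar or OEM specifications, and the real gap widens further because camera stacks typically need much deeper backbones for the same perception task.

```python
# Back-of-envelope: input values per second feeding the perception backbone.
# All numbers are illustrative assumptions, not Zendar or OEM specifications.

# Camera-centric L2+ stack: five 2-megapixel RGB cameras at 30 fps
camera_vals = 5 * 2_000_000 * 3 * 30

# Radar-centric stack: five radars, each emitting a 256 x 64 x 128 spectrum
# cube (range x Doppler x azimuth) at 20 fps
radar_vals = 5 * (256 * 64 * 128) * 20

print(f"camera input: {camera_vals / 1e9:.2f}B values/s")  # ~0.90
print(f"radar input : {radar_vals / 1e9:.2f}B values/s")   # ~0.21
print(f"ratio       : {camera_vals / radar_vals:.1f}x")    # ~4x on input alone
```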
From a performance perspective, customers now demand a seamless and reliable ADAS experience, moving beyond a fragmented array of rules-based functions. Zendar's radar-centric perception, fused with camera data, offers the most cost-effective and reliable perception system. OEMs will opt for technologies that deliver consistent, high-performance capabilities, extending from L2+ to L3/L4 vehicles and eventually to robotaxis, all while managing costs effectively.
In essence, Zendar anticipates OEMs will make choices that prioritize a cost-effective path to scalable, AI-based ADAS, recognizing that this is the only way to meet evolving customer expectations for safety and performance without prohibitive expenses.
Radar AI Scene Understanding
Radar AI output in highway scene
Radar AI output in urban scene
Redefining Perception:
How AI Radar Powers the SDV Era
What have been the most notable projects or pilot programs where Zendar’s technology has been applied so far? Could you share key feedback you’ve received from customers or partners? (Given your presence in Germany, are you collaborating with companies such as Continental?)
Thomas Zendar is actively engaged in numerous projects and pilot programs with customers across Europe and Asia. While specific details of these collaborations remain confidential due to ongoing discussions, the feedback we've received is consistently positive and highly encouraging. Our partners firmly believe that Zendar's Semantic Spectrum technology empowers them to deliver more advanced capabilities, such as hands-free driving on highways, at a cost point that is both competitive and appealing to the broader market. This validates our core mission of providing high-performance, cost-effective solutions for autonomous driving. Regarding collaborations in Germany, while we are present in the region and actively engaging with various industry players, we are unable to disclose specific partner names like Continental at this stage.
Radar-based perception still lacks clear regulations and standards. How is Zendar contributing to standardization and ecosystem development? Are there any examples of collaboration with global OEMs or Tier-1 suppliers?
Thomas While radar-based perception, much like camera systems, currently operates without fully established regulations and standards, Zendar's primary focus is on a different, yet equally impactful, contribution to the ecosystem. Our core objective is to bring the most advanced and effective perception technology to the market at the lowest possible price point. By achieving this, we aim to demonstrate the clear benefits and capabilities of our AI radar, which will, in turn, naturally influence the direction of future industry standards. Our contributions are less about direct standardization efforts and more about setting a new benchmark for performance and cost-efficiency that the market will ultimately gravitate towards. Currently, we are not actively focused on building a specific standard around our technology, but rather on proving its undeniable value through superior performance and economic viability.
Over the next 5-10 years, what are Zendar’s main technological and business goals? In the era of Software-Defined Vehicles (SDVs), what role do you hope Zendar will play?
Thomas Over the next 5-10 years, Zendar's main technological and business goals are centered on solidifying our position as the leader in cost-effective and reliable perception systems for autonomous driving. We firmly believe that a radar-centric perception system, intelligently fused with camera data, represents the optimal approach. The immediate 2-3 years will be dedicated to perfecting this reliability and bringing L2+ vehicles equipped with our technology onto the road. We are highly confident that this foundational technology can be seamlessly extended to L3/L4 vehicles and ultimately to the demanding requirements of robotaxis.
In the transformative era of Software-Defined Vehicles (SDVs), Zendar envisions playing a pivotal role as the enabler of truly intelligent and adaptable perception. Our AI radar, with its inherent compute efficiency and superior data utilization, will become a cornerstone of SDV architectures. We anticipate that Zendar's technology will empower OEMs to rapidly deploy and continuously update advanced ADAS and autonomous driving features through software, leveraging our cost-effective perception backbone to unlock new levels of safety, performance, and functionality in the evolving landscape of automotive technology.
Lastly, is there any message you would like to share with the Korean automotive industry and technology community?
Thomas To the esteemed Korean automotive industry and technology community, Zendar extends a message of innovation and partnership. We believe that the future of autonomous driving hinges on a paradigm shift towards highly reliable and cost-effective perception systems. Zendar's AI radar, with its unique ability to extract rich semantic information from the radar spectrum, offers a proven path to achieving this. We are eager to collaborate with Korean OEMs and technology leaders to integrate our groundbreaking solutions, accelerate the development of advanced ADAS and autonomous vehicles, and collectively shape a safer, more efficient, and more accessible future for mobility. We are confident that our technology can empower the Korean automotive industry to lead the charge in this exciting new era of intelligent transportation.
AEM (Automotive Electronics Magazine)
<Copyright © AEM. Unauthorized reproduction and redistribution prohibited>