Flying cars will arrive sooner than Level 5
Interview with Louay Eldada of Quanergy
July 2019 print issue  /  Written by Sangmin




We recently met with Louay Eldada, CEO of Quanergy, which has completed preparations to mass-produce one million solid-state LiDAR sensors a year, to hear about the role of LiDAR in future mobility.


Q. Tell us about Quanergy, its key milestones and its position in the market.
A. Quanergy is a leading provider of LiDAR (Light Detection and Ranging) sensors and smart sensing solutions that is best known within the automotive industry for its achievements in the development of solid-state LiDAR for use in autonomous vehicles. Beyond automotive, Quanergy’s suite of LiDAR sensors is also used across various markets including security, smart spaces, industrial automation, robotics, drones, agriculture, mining, and 3D terrestrial and aerial mapping.

Quanergy’s solid state LiDAR (S3) is based on an optical phased array (OPA). Its OPA-based LiDAR is the only LiDAR on the market that can deliver performance, reliability and cost at the same time. In 2017, the S3 won the coveted CES “Best in Innovation” Award.

Quanergy’s LiDAR sensors are developed and produced in its manufacturing facility in Sunnyvale, CA, USA. Since its initial opening in late 2017, the facility has doubled in size and recently met the requirements of the automotive standards AEC-Q100, ISO 16750 and IATF 16949:2016, which gives the company the necessary foundation for automotive-grade solid-state LiDAR production. The facility has the capacity to produce 1 million solid-state sensors every year.

At present, Quanergy has raised $180 million and has more than 20 partnerships with major automotive companies including Hyundai, Mercedes-Benz, Jaguar and Renault Nissan.

Q. Please tell us about the opportunities and prospects of LiDAR in automotive in general, in shuttles such as MaaS, and in other modes of transportation.
A. LiDAR’s use in self-driving vehicles is what the technology is most known for, and for good reason: when self-driving cars reach mass market, most experts believe LiDAR is what will get them there. But beyond cars, there are other modes of transportation that can adopt and benefit from this technology. For example, shuttles are excellent use cases for LiDAR because they allow for multiple people to be transported at one time, ultimately decreasing issues such as traffic congestion and pollution. Regardless of the integration, there is a critical need for integrated sensors to work together and ultimately make mass Level 5 automation possible.

That said, we are still years away from seeing Level 5 autonomous vehicles on public roads. Due to numerous road regulations and technological developments needed to get vehicles acquainted with terrain, it is more likely we will see autonomously flying cars before we see them on freeways. Of course, flying cars will have their obstacles to work through, but there will be nearly no physical infrastructure needed to bring them to our skies (lanes are software-defined) and they will have virtually no chaos to deal with (no pedestrians, debris, etc.), making them much easier to implement.

Q. Companies like Tesla have emphasized cameras, and recently new 3D radar and ultrasonic sensors are emerging. Even so, in the development of self-driving, is LiDAR already a necessity and the center of the sensor set?
A. Most automakers and technology companies agree that the safest and most capable autonomous vehicle systems will have LiDAR as the primary sensor in the sensor suite. 

Cameras are 2D, require ambient lighting, and have low range accuracy and low resolution at distance due to the perspective effect.  Radar sensors are 1D, giving precise range but no lateral or vertical information, and they work mainly on dense objects that are moving.  Millimeter-wave radars are sometimes called 3D radars in contrast to standard radar, but they still have a spot size that is too large at distance, resulting in a resolution that is too low to detect, classify and analyze the behavior of objects.  High-resolution radar sensors have the same issues as standard radar with the detection of objects that are not dense (human bodies vs. metal vehicles or concrete overpasses) and are not moving.
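The spot-size argument above follows from simple geometry: the beam footprint grows linearly with range at a rate set by the beam divergence. The divergence values below are illustrative assumptions, not vendor specifications:

```python
import math

def spot_diameter_m(range_m: float, divergence_deg: float,
                    aperture_m: float = 0.0) -> float:
    """Approximate beam footprint at a given range for a sensor with
    the given full-angle divergence (plus the initial aperture size)."""
    return aperture_m + 2 * range_m * math.tan(math.radians(divergence_deg) / 2)

# Illustrative full-angle divergences (assumed, not measured specs):
# a well-collimated LiDAR beam (~0.1 deg) vs. a radar beam (~3 deg).
for name, div_deg in [("lidar", 0.1), ("radar", 3.0)]:
    print(f"{name}: spot at 200 m ~ {spot_diameter_m(200, div_deg):.2f} m")
```

With these assumed numbers, the LiDAR spot at 200 m is tens of centimeters while the radar footprint spans several meters, which is why a radar return cannot resolve the shape of a distant object.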

Only LiDAR is truly 3D, giving the highest capability in detection, classification and behavioral analysis. It maintains high resolution at distance thanks to a collimated, minimally divergent laser beam, and it requires no ambient lighting: being an active device, the signal it senses is the signal it sends, unlike cameras, which are passive devices. As a time-of-flight sensor, it senses based on shape rather than color, so it can distinguish between separate objects that have similar color and overlap in the field of view, such as a white truck against a bright sky. That situation can confuse cameras, as happened in an unfortunate fatal accident in a vehicle whose autopilot system did not use LiDAR.
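The time-of-flight principle mentioned above reduces to one formula: a pulse travels to the target and back at the speed of light, so range is half the round-trip path. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range from a round-trip pulse time: the pulse travels out
    and back, so the one-way distance is half the path length."""
    return C * round_trip_s / 2

# A return arriving ~667 ns after the pulse corresponds to a target
# at roughly 100 m.
print(tof_range_m(667e-9))
```

The tiny times involved (nanoseconds per meter) are why the timing electronics, not the optics alone, set a LiDAR's range accuracy.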

Q. LiDAR sensors cannot always work in conditions like bad weather. Yet many companies are demonstrating self-driving even in snowy conditions. Is this overcome in software? When will all-weather autonomous driving be possible at the standard passenger-car level?
A. We design our LiDAR to best deal with various weather conditions. The optics, the electronics and the signal processing all contribute to our LiDAR's superior detection capability in sub-optimal weather. The laser beam in our sensor is designed to be (1) large enough not to be blocked by individual rain drops and (2) well collimated so as to maintain a high energy density over a long distance. Advanced signal processing techniques are utilized to analyze the laser signal characteristics and improve the range.

That said, LiDAR is ultimately an optical line-of-sight device. The presence of aerosols in the air, e.g., solid particles as in snow or smog and liquid particles as in rain and fog, will degrade the LiDAR detection range to a certain extent.  Only in extremely heavy fog (when you cannot see your own hand) is the LiDAR ineffective, which is why radar (at least one low-cost forward-looking unit) will always be part of the sensor suite.  That is, however, a rare situation in which no one should be driving, and the radar is good enough to get the vehicle to a safe stop on the side of the road or off the highway.

Q. Do you collaborate with any hardware company, software algorithm company, autonomous mobile company, electronics, test solution provider, etc. to make the best LiDAR product?
A. In order to make the best LiDAR product, Quanergy collaborates with numerous companies including Sensata, TSMC, Samsung, Analog Devices, Xilinx, EPC, NXP, Nvidia and Cisco.

Q. Please tell us about Quanergy’s core competitiveness and differentiation in autonomous transportation. Your homepage claims to be the best in the world in six aspects. Would you explain the main features, performance and specifications?
A. The key to Quanergy’s success in the autonomous transportation industry is our solid-state LiDAR technology. Solid-state LiDAR is much smaller, far less expensive, and immensely more reliable, as it has no moving parts on any scale. It also allows for fine angular resolution and tight range accuracy in sensing obstacles.  In addition, it has a minimum range of zero, meaning there are no blind spots, and it has the ability to selectively zoom in on detected objects to classify them and analyze their behavior. To reach Level 5 autonomy, AVs will need the most accurate, reliable, and safe technology, such as Quanergy’s.
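Fine angular resolution matters for classification because the spacing between adjacent LiDAR points grows linearly with distance: at long range, only a fine angular step keeps enough points on a target to recognize it. A rough calculation with an assumed (hypothetical) 0.1-degree angular resolution:

```python
import math

def point_spacing_m(range_m: float, angular_res_deg: float) -> float:
    """Lateral distance between adjacent beams at the given range."""
    return 2 * range_m * math.tan(math.radians(angular_res_deg) / 2)

def points_on_target(target_width_m: float, range_m: float,
                     angular_res_deg: float) -> int:
    """How many horizontal points land on a target of a given width."""
    return int(target_width_m / point_spacing_m(range_m, angular_res_deg)) + 1

# A 0.5 m-wide pedestrian at 100 m with an assumed 0.1 deg resolution:
print(points_on_target(0.5, 100.0, 0.1))
```

Halving the angular step roughly doubles the point count on a distant object, which is what makes the difference between merely detecting an obstacle and classifying it as a pedestrian.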


Q. In particular, how did Quanergy overcome the challenges of pricing and volume? How far do you expect the price to fall?
A. Quanergy designed all the chips in its solid-state LiDARs to be compatible with mature CMOS silicon microfabrication technology, the lowest-cost and most scalable manufacturing platform.

Q. Depending on the autonomous driving level and the type of service, different Quanergy products are available and are likely to be paired with other sensors. How does that work?
A. An optimal sensor suite for all autonomous driving levels should include LiDAR, video, and radar sensors.  For level 3 and above, systems must have LiDAR as their primary sensor in order to deliver the highest level of capability and the safest system.  A primary sensor is the main sensor used for (1) perception, (2) localization and (3) navigation.

Q. What is the mid- to long-term plan for Quanergy?
A. Quanergy has a rich product roadmap.  Using its CMOS silicon platform, Quanergy will continue to increase the level of integration until the entire LiDAR is a single packaged chip stack that can fit in a cell phone.

Q. We have heard about cooperation between Korean companies and Quanergy. Could you tell us about your business cases to date and your expectations for the future?
A. Quanergy has a long history of working with Korean companies including Samsung and Hyundai-Kia.  Our relationships with our Korean partners continue to grow.


AEM_Automotive Electronics Magazine

<Copyright (c) Smart&Company. Unauthorized reproduction and redistribution prohibited>
