    Automotive vision: The IP guys get in on the act

    A battle is joined in the most demanding of embedded applications

    By Majeed Ahmad | July 22, 2017

    ARM’s Mali-C71 image signal processor (ISP) is making waves in the automotive space, where the DSP-core duo of Tensilica and CEVA have been active players in embedding computer vision into ADAS system-on-chip (SoC) designs. The question is: will ARM, in its quest to extend its dominance into a new application space, swamp their boats? To some extent, that will depend on which of the three solutions is technologically superior.

    ARM fields the first automotive-grade ISP at a time when the number of cameras in vehicles is rising, and sensor fusion technologies are getting better every day. The Mali-C71 vision processor is heavily focused on two critical requirements in ADAS and autonomous car applications: dynamic range and reliability.

    Dynamic range plays a vital role in detecting all elements of the scene that the camera captures, while reliability is intertwined with critical automotive requirements such as ASIL compliance and functional safety. The Mali-C71 processor boasts an ultra-wide dynamic range (UWDR) of 24 stops while it processes camera pixels and removes undesired artifacts such as noise.

    Here, it’s worth mentioning that the best-quality DSLR cameras offer around 15 stops. The Mali-C71 also facilitates low latency and advanced error detection using more than 300 dedicated fault-detection circuits, and it supports system-level safety certifications including ISO 26262 (ASIL D) and IEC 61508 (SIL 3).
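
    To put those figures in perspective, each stop doubles the brightness range the pipeline can distinguish. The following back-of-the-envelope sketch in Python (not tied to any ARM tooling) shows what 24 versus 15 stops implies:

```python
import math

def stops_to_metrics(stops):
    """Each photographic stop doubles the representable brightness range."""
    contrast_ratio = 2 ** stops                          # brightest : darkest level
    dynamic_range_db = 20 * math.log10(contrast_ratio)
    return contrast_ratio, dynamic_range_db

for label, stops in (("Mali-C71 (claimed)", 24), ("high-end DSLR (approx.)", 15)):
    ratio, db = stops_to_metrics(stops)
    print(f"{label}: {stops} stops -> {ratio:,}:1 contrast, ~{db:.0f} dB")
```

    That works out to roughly 144 dB of preservable scene contrast for 24 stops versus about 90 dB for 15 stops, roughly the margin needed to keep both deep shadow and direct sunlight in the same frame.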

    The Mali-C71 also puts the image sensor data through 15 stages of refinement and correction before the image reaches the display. That’s twice as many stages as in smartphone ISPs, and more than in DSLR camera designs.
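
    Conceptually, such a pipeline is an ordered chain of frame-to-frame corrections. The sketch below is purely illustrative; ARM does not enumerate the Mali-C71’s 15 stages here, so the stage names are generic ISP steps and the stage bodies are stubs:

```python
def run_isp_pipeline(raw_frame, stages):
    """Apply each correction stage in order; every stage maps a frame to a frame."""
    frame = raw_frame
    for name, fn in stages:
        frame = fn(frame)
    return frame

identity = lambda frame: frame      # stub standing in for the real processing
GENERIC_STAGES = [(name, identity) for name in (
    "defect_pixel_correction", "black_level", "noise_reduction",
    "wdr_tone_mapping",             # where a 24-stop input is compressed for display
    "demosaic", "white_balance", "color_correction", "gamma", "sharpening",
)]

display_frame = run_isp_pipeline([[0.0]], GENERIC_STAGES)   # toy 1x1 "frame"
```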

    Mali-C71 block diagram

    Figure 1. The Mali-C71 simultaneously runs the vision engine and renders an image on a display via a single pipeline.

    What’s new about Mali-C71

    ARM’s new ISP simultaneously generates data that can be rendered on a display while it processes the same data for use by the computer vision engine. To do that, ARM has built a single piece of hardware IP that carries out both functions through one imaging pipeline (Figure 1).
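
    A minimal sketch of that one-pipeline, two-consumers idea follows; the assumption that the display path gets a tone-mapped image while the vision path keeps the wider-precision data is for illustration only and is not taken from ARM’s documentation:

```python
def correct(raw_frame):
    """Shared front-end corrections (noise, defects, etc.) - stubbed out here."""
    return raw_frame

def tone_map(frame):
    """Compress the wide dynamic range into something a display can show - stubbed."""
    return frame

def process_frame(raw_frame):
    corrected = correct(raw_frame)        # the single imaging pipeline runs once
    display_out = tone_map(corrected)     # tap 1: image for a human viewer
    vision_out = corrected                # tap 2: data for the computer vision engine
    return display_out, vision_out
```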

    The current vision processor solutions from Tensilica and CEVA take a different approach than ARM’s Mali-C71 ISP, which takes photonic data from the image sensor and processes the raw pixels into high-quality images for display.

    For instance, Tensilica’s C5 vision processor (Figure 2) first enhances input from the camera with computational imaging algorithms, and then neural network-based recognition algorithms perform object detection and recognition.

    The C5 accelerates all neural network computational layers, freeing up the main vision/imaging DSP to run image enhancement applications independently while the C5 handles inference tasks.

    Tensilica C5 diagram

    Figure 2. Tensilica’s C5 vision processor runs all neural network layers in the DSP itself.

    The C5 core—unveiled earlier this year—is optimized for ADAS, radar, lidar, and sensor fusion applications through its dedicated neural network computation capabilities. The vision processor is based on a specialized DSP with an instruction set that reduces the cycle count of the major embedded vision algorithms.
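
    The division of labor described above lends itself to a two-stage pipeline: while the dedicated neural-network core runs inference on one frame, the imaging DSP is free to enhance the next. A rough sketch, using placeholder functions rather than Tensilica APIs:

```python
from concurrent.futures import ThreadPoolExecutor

def enhance(frame):             # stands in for the imaging/vision DSP's work
    return frame

def infer(enhanced_frame):      # stands in for the C5's neural-network inference
    return {"objects": []}

def run(frames):
    """Overlap enhancement of the next frame with inference on the previous one."""
    results, pending = [], None
    with ThreadPoolExecutor(max_workers=1) as nn_core:
        for frame in frames:
            enhanced = enhance(frame)                  # runs "on the imaging DSP"
            if pending is not None:
                results.append(pending.result())       # collect the prior inference
            pending = nn_core.submit(infer, enhanced)  # hand off to the "NN core"
        if pending is not None:
            results.append(pending.result())
    return results

print(run([f"frame{i}" for i in range(3)]))
```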

    Tensilica C5 vs. CEVA XM6

    CEVA claims that, unlike Tensilica’s software-based approach to handling neural networks, its own approach combines hardware and software, and therefore responds faster and consumes less power.

    In other words, it has attached a hardware accelerator to the imaging DSP and split the neural network workload, running some network layers on the DSP while offloading the convolutional layers to the accelerator.
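
    In sketch form, that split is a per-layer dispatch decision. The layer types and the dispatch rule below are assumptions for illustration, not CEVA’s actual software interface:

```python
ACCELERATED_TYPES = {"conv2d", "depthwise_conv2d"}   # assumed accelerator-friendly layers

def run_on_cnn_accelerator(layer, data):
    return layer["fn"](data)      # stand-in for the fixed-function MAC array

def run_on_vision_dsp(layer, data):
    return layer["fn"](data)      # stand-in for code running on the DSP itself

def run_network(layers, frame):
    """Route each layer to the accelerator or the DSP; layers are {"type", "fn"} dicts."""
    data = frame
    for layer in layers:
        if layer["type"] in ACCELERATED_TYPES:
            data = run_on_cnn_accelerator(layer, data)
        else:
            data = run_on_vision_dsp(layer, data)
    return data
```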

    CEVA has recently licensed its imaging and vision platform to ON Semiconductor for ADAS applications. The IP supplier claims that its fifth-generation vision processor—the XM6, shown in Figure 3—boosts the performance of ADAS applications by 3x compared to its predecessor, the XM4 vision processor.

    CEVA XM6 diagram

    Figure 3. The CEVA XM6 vision processor performs 512 multiply-accumulate (MAC) operations per cycle with its convolutional neural network (CNN) accelerator.
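
    A MACs-per-cycle figure only becomes a throughput number once a clock frequency is assumed; none is given here, so the 1 GHz in the arithmetic below is purely an assumption for illustration:

```python
macs_per_cycle = 512                  # CNN accelerator width quoted in Figure 3
clock_hz = 1.0e9                      # assumed clock frequency, illustration only
macs_per_second = macs_per_cycle * clock_hz
ops_per_second = 2 * macs_per_second  # one MAC = one multiply plus one add
print(f"{macs_per_second / 1e9:.0f} GMAC/s, ~{ops_per_second / 1e12:.2f} TOPS")
```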

    Boosting clarity and ensuring reliability are relentless demands on automotive camera systems, and compared with other embedded applications, automotive vision requires processors with considerable computational capability. The arrival of a new player in the embedded vision processor space shows that semiconductor IP vendors aren’t daunted by the challenge.