    Mobileye: level 3 autonomous driving with cameras within the year – Corriere.it


    "Our technology works, and we have already amply demonstrated this." Jack Weast, vice president of Mobileye's Autonomous Vehicle Standards division, responds firmly to those in the automotive industry (and there are more and more of them) who say we will have to wait decades before we have true autonomous driving. "As the videos we published show," he continues, "we have already made an autonomous car work perfectly in Munich, where we have no engineers or developers and where we had conducted no prior tests. Thanks to our system, within a few days the vehicle was able to move through a complex environment, guided by a system based on cameras."


    Present at CES in Las Vegas – the world's largest electronics fair, held this year in digital form until January 14 due to the pandemic – the Israeli company Mobileye has twenty years of research and development in computer vision to its credit, and was acquired by the giant Intel in 2017 for a hefty 15 billion dollars. To date, its technological solutions are already present in the Advanced Driver Assistance Systems (ADAS) of approximately 60 million vehicles around the world.


    By 2021 the company promises to bring to market a camera-based level 3 autonomous driving solution called SuperVision, while simultaneously developing and testing a level 4 system, based on lidar and radar, ready to debut as early as 2025. The two systems are destined to overlap: once the second, more complex one matures, it can act as a redundant backup for the first, ensuring an unprecedented level of safety and efficiency, as Amnon Shashua, president and CEO of Mobileye, explained in his speech at CES.
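    The redundancy idea described above can be sketched in a few lines. This is a hypothetical illustration, not Mobileye's actual design: two independently developed perception channels (camera-only versus lidar-plus-radar) each produce their own estimate of free space, and the planner only treats space as drivable when both agree.

```python
# Hypothetical sketch of redundant perception channels.
# The function name and the cell-based free-space model are
# illustrative assumptions, not Mobileye's real architecture.

def drivable(camera_free: set, lidar_radar_free: set) -> set:
    """Cells considered safe to drive into: the intersection of what
    each independent sensing subsystem reports as free space."""
    return camera_free & lidar_radar_free

# The camera channel and the lidar/radar channel disagree on two cells;
# only the cells both report as free are treated as drivable.
cam = {"cell_1", "cell_2", "cell_3"}
lr = {"cell_2", "cell_3", "cell_4"}
print(sorted(drivable(cam, lr)))  # → ['cell_2', 'cell_3']
```

    The point of the design is that a fault in one channel can only shrink the drivable region, never expand it.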


    It is an approach based on what Shashua calls the Trinity, because it combines the Road Experience Management (REM) mapping technology, the Responsibility-Sensitive Safety driving model (RSS, on which the driving AI's decisions depend) and the two separate sensor subsystems already mentioned (cameras on one side, radar and lidar on the other). In particular, the REM technology (whose development started 5 years ago) is a strong differentiator from competitors, both because it provides Mobileye's systems with a highly detailed map of the world's roads, and for how it achieves this: "60 million cars scattered across all continents collect data for us," explains Weast. "In practice, a crowdsourcing of information carried out so far over a billion kilometers, to which an average of seven million more are added every day."

    The trick is finding an easy way, in agreement with manufacturers, to capitalize on the way ADAS systems read roads and recognize signs, objects or obstacles in every part of the world, from Japan to the United States. An infinitely complex puzzle of information, made up of very small pieces (every kilometer traveled generates very few kilobytes of data) that Mobileye collects and reconstructs to make its own maps. The latter are not only in high definition, but are also rich in semantic information, which gives meaning to objects, to allow the guidance system to understand the surrounding environment.
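    The crowdsourced map-building described above can be sketched roughly as follows. This is a minimal, purely illustrative model (none of these names or data structures come from Mobileye's actual API): each car uploads a few kilobytes of compact semantic observations per kilometer, and landmarks reported by several cars at nearly the same position are merged, so the map sharpens as more data arrives.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch of REM-style crowdsourced semantic mapping.
# Observation fields and classes are illustrative assumptions.

@dataclass(frozen=True)
class Observation:
    segment_id: str    # road segment the car was driving on
    kind: str          # semantic class: "stop_sign", "lane_marking", ...
    position_m: float  # position along the segment, in meters

def build_semantic_map(observations):
    """Aggregate many sparse per-car observations into one map entry
    per landmark, averaging positions reported for the same landmark."""
    buckets = defaultdict(list)
    for obs in observations:
        # Bucket by segment, semantic class, and a coarse 10 m position bin,
        # so slightly different reports of the same landmark merge.
        key = (obs.segment_id, obs.kind, round(obs.position_m / 10))
        buckets[key].append(obs.position_m)
    return {
        (seg, kind): sum(ps) / len(ps)
        for (seg, kind, _), ps in buckets.items()
    }

# Three cars report the same stop sign at slightly different positions;
# the map keeps a single averaged entry.
reports = [
    Observation("A12", "stop_sign", 142.0),
    Observation("A12", "stop_sign", 143.5),
    Observation("A12", "stop_sign", 141.5),
]
semantic_map = build_semantic_map(reports)
print(semantic_map[("A12", "stop_sign")])  # ~142.33, the averaged position
```

    Each observation here is tiny, which matches the article's point: the per-kilometer upload stays in the kilobyte range, and the detail comes from the sheer number of contributing vehicles.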

    "Our ability to give semantic value to the elements that make up a road, from signs to intersections to lane markings, allows us to become operational quickly anywhere," explains Jack Weast, "just as the ability to adjust the Responsibility-Sensitive Safety system simply by changing parameters allows us to adapt the driving style from country to country, for example by making it more 'assertive' in Italy and more 'conservative' in Germany."
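    The parameter-swapping idea Weast describes can be sketched like this. The parameter names and values below are illustrative assumptions, not Mobileye's actual RSS parameters: the point is that the driving logic stays identical, and only a small set of safety-envelope values changes per country.

```python
from dataclasses import dataclass, replace

# Hypothetical sketch of per-country tuning of an RSS-style safety
# envelope. All names and numbers are invented for illustration.

@dataclass(frozen=True)
class RSSParams:
    min_following_gap_s: float   # minimum time gap to the car ahead
    max_assumed_decel: float     # braking we assume others may apply (m/s^2)
    gap_accept_margin_m: float   # extra space required before merging

BASELINE = RSSParams(
    min_following_gap_s=2.0,
    max_assumed_decel=6.0,
    gap_accept_margin_m=3.0,
)

# A more "assertive" profile accepts tighter gaps; a more "conservative"
# one demands larger margins. The driving logic itself is unchanged.
PROFILES = {
    "IT": replace(BASELINE, min_following_gap_s=1.5, gap_accept_margin_m=1.5),
    "DE": replace(BASELINE, min_following_gap_s=2.5, gap_accept_margin_m=4.0),
}

def params_for(country_code: str) -> RSSParams:
    """Return the tuned profile for a country, or the baseline."""
    return PROFILES.get(country_code, BASELINE)

print(params_for("IT").min_following_gap_s)  # → 1.5
print(params_for("FR").min_following_gap_s)  # → 2.0 (falls back to baseline)
```

    Keeping the behavior in data rather than code is what makes this kind of system quick to deploy in a new country, which is the scalability argument the article goes on to make.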


    This last aspect is crucial for the scalability of Mobileye's technologies, so much so that the company has chosen to develop open, open-source solutions, allowing manufacturers to contribute to the system's evolution. Meanwhile, testing will continue in cities such as Detroit, Tokyo, Shanghai, Paris and (soon) New York City. Starting today with the commercialization of a system based only on 360-degree computer vision (and therefore on cameras and relatively low-cost hardware) guarantees the economic sustainability of the project, giving the company time to develop level 4 while staying on the market.

    In short, does all this mean that we will soon stop driving? "The fact that the automobile was invented doesn't mean people stopped riding horses," Weast observes, amused. And then he concludes: "I believe instead that we will finally be able to choose: to drive the car ourselves because it's a beautiful sunny day and we're on an empty road, or to let it drive itself in city traffic while we put our time to different, and better, use."

    January 13, 2021 (last modified January 13, 2021 | 16:00)


    © REPRODUCTION RESERVED

