EE Times Europe, November 2021, p. 29

SPECIAL REPORT: ARTIFICIAL INTELLIGENCE

Engineering Trust in the Era of AI 2.0


By George Leopold

With artificial intelligence concluding its hype cycle, engineers and researchers are learning more about what we know and don't know about its potential and untapped promises. Clearly, skeptics warn, we need to look, test, and validate before leaping into an AI-centric future. Hence, there is growing awareness of, and research focused on, emerging disciplines like "AI safety" that seek to identify and ultimately anticipate the causes of unintended behavior in machine-learning and other autonomous systems.
Some early steps will help, including a recent U.S. regulatory order requiring mandatory reporting of crashes involving ADAS-equipped vehicles. (We note the reference to "crashes" rather than "accidents," a fraught term in this context because it is exculpatory.)
In this AI Special Report, we examine the engineering challenges and the unintended consequences of entrusting control of our machines to algorithms. One conclusion is that we remain a long way from using AI in mission-critical systems that must work 99.999% of the time.
Achieving such levels of reliability, safety, and, ultimately, trust requires relentless testing, technical standards, and what researcher Helen Toner of Georgetown University's Center for Security and Emerging Technology calls "engineering discipline."
Another issue is the allocation of scarce engineering resources as the list of companies designing AI chips grows. The latest entrant is Tesla, which unveiled its Dojo D1 chip for training neural networks during its recent AI Day event. While accelerating the training of neural networks for ADAS applications is indeed a requirement, the vertically integrated carmaker's AI chip appears to have been motivated by pride of ownership.
"With so many companies building AI chips, why build your own?" asks Kevin Krewell, principal analyst with Tirias Research. The growing list of companies working independently on applying AI to autonomous driving amounts to a "staggering" amount of duplication and waste, Krewell adds.
Automotive applications are pushing the limits of AI technology and may be among the first to deploy the resulting machine-learning models in hazardous settings. Before that happens, however, those machines must be as close to foolproof as engineers can make them.
As our colleague Egil Juliussen notes, the prevailing notion of AI implies the technology is analogous to human intelligence. As we discuss, AI will remain a misnomer until engineers can imbue machines with the common sense a toddler learns through real-world trial and error. ■
George Leopold is a technology writer and an EE Times contributing editor. This article was originally published on EE Times and may be viewed at bit.ly/3zJ7mN5r.
Image: Shutterstock