
28 EE|Times EUROPE



         AUTOMOTIVE SAFETY
        Can We Trust AI in Safety-Critical Systems?


        By Sally Ward-Foxton


Artificial intelligence, famously, is a black-box solution. While neural networks are designed for specific applications, the training process produces millions or billions of parameters without our having much understanding of exactly which parameter means what.
  Are we comfortable with that level of not knowing how AI works when the target application is a safety-critical system?

  “I think we are,” Neil Stroud, vice president of marketing and business development at CoreAVI, told EE Times in an interview. “Safety is a probability game. [Systems are] never going to be 100% safe—there’s always going to be some corner case, and the same is true with AI.”
  While some familiar concepts from functional safety can be applied to AI, some can’t.
  “With AI, even if probability says you’re going to get the same answer, you may get there a different way, so you can’t lock-step it,” Stroud said, referring to the classic technique where the same program is run in parallel on identical cores to cross-check results.
  There are ways, however, to make AI inference deterministic enough for demanding applications like the avionics systems CoreAVI works with. While determinism can be improved with the right training, CoreAVI also analyzes trained AI models to strip out and recompile any nondeterministic parts. “Part of the onus is on the model developer to come up with a robust model that does the job it’s supposed to,” Stroud said, adding that if developers write proprietary algorithms, that often adds to the complexity.
  Another technique is to test-run a particular AI inference many times to find the worst-case execution time, then allow that much time when the inference is run in deployment. This helps AI become a repeatable, predictable component of a safety-critical system. An inference that runs longer than the worst-case time would be handled by system-level mitigations, such as watchdogs, that catch long-running parts of the program and take the necessary action—just as any non-AI-powered parts of the program would be handled, Stroud said.
  CoreAVI works with GPU suppliers to build safety-critical drivers and libraries for GPUs, originally for graphics in aircraft cockpit displays, but increasingly for GPU acceleration of AI in avionics, automotive and industrial applications. The company is one of several driving an effort toward an industry-standard API for safety-critical GPU applications as part of the Vulkan standard, which is designed to allow safety certification and code portability to different hardware.
  Stroud cited CoreAVI customer Airbus Defence and Space’s use of AI in fully autonomous air-to-air refueling systems as an example of how AI can—and does—work in even the most safety-critical applications today.

[Photo: Texas Instruments’ Miro Adzan]
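The worst-case-timing approach described above can be sketched as follows. This is a minimal illustration of the general technique, not CoreAVI's implementation; `toy_infer`, the run count and the 1.2x safety margin are invented for the example.

```python
import time

def measure_wcet(infer, sample_inputs, runs=1000, margin=1.2):
    """Profile an inference function over many test runs and return a
    time budget: the observed worst case plus a safety margin."""
    worst = 0.0
    for i in range(runs):
        x = sample_inputs[i % len(sample_inputs)]
        start = time.perf_counter()
        infer(x)
        worst = max(worst, time.perf_counter() - start)
    return worst * margin

def run_with_deadline(infer, x, budget_s):
    """Run one inference and flag a deadline overrun, the way a
    system-level watchdog would hand control to a mitigation path."""
    start = time.perf_counter()
    result = infer(x)
    elapsed = time.perf_counter() - start
    if elapsed > budget_s:
        return None, True   # overrun: caller falls back to a safe state
    return result, False

# Toy stand-in for a real model's inference call
def toy_infer(x):
    return sum(v * v for v in x)

budget = measure_wcet(toy_infer, [[0.1] * 256], runs=200)
result, overrun = run_with_deadline(toy_infer, [0.1] * 256, budget)
```

In a deployed system the overrun flag would trigger a watchdog action rather than a return value, but the split between offline profiling and a runtime deadline check is the essence of the technique.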
ORGANIC MISMATCH
Miro Adzan, general manager for ADAS at Texas Instruments (TI), conceded the challenge of certifying safety-critical ADAS systems when we don’t know exactly how the AI arrives at an answer. “Functional safety with ISO 26262 is about understanding what is happening; it’s about determining with a certain probability that a certain outcome will happen,” Adzan said. “Now, if we talk about artificial intelligence, just by the nature of artificial intelligence and how it works, this is exactly what is not happening. … I think there’s an organic mismatch between these two. And that’s the challenge.”
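The kind of failure-probability arithmetic that functional-safety certification rests on can be made concrete with a short sketch. The component failure rates and diagnostic-coverage figures below are invented for illustration, not values from the standard; the ASIL D residual-failure target of less than 10^-8 per hour (10 FIT) is a commonly cited figure.

```python
# FIT (failures in time) = failures per 1e9 device-hours.
FIT = 1e-9  # 1 FIT expressed as failures per hour

components = [
    # (name, raw failure rate in FIT, diagnostic coverage of its safety mechanism)
    ("cpu_core",  50.0, 0.99),
    ("sram",      30.0, 0.99),
    ("gpu_accel", 80.0, 0.90),
]

# Residual rate: the fraction of faults that escape the safety mechanisms
residual_fit = sum(fit * (1.0 - dc) for _, fit, dc in components)
residual_per_hour = residual_fit * FIT

# Commonly cited ASIL D target: residual failure rate below 1e-8 per hour
meets_asil_d = residual_per_hour < 1e-8
```

The calculation works because each component's failure rate and each safety mechanism's coverage can be characterized in advance; Adzan's point is that a neural network offers no comparable per-element decomposition of how it reaches an answer.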
[Figure: CoreAVI’s middleware (GPU drivers and safety-critical libraries) is designed for demanding systems like avionics and automotive driver-assistance systems. (Source: CoreAVI)]

  ISO 26262 certification relies on determining probabilities of failure. In some cases, the standard will suggest how this should be done, making it easier to prove compliance. But there’s another way to do it, Adzan said. “There’s a specific subsection in the ISO

        MARCH 2023 | www.eetimes.eu