
        Let’s Talk Edge Intelligence


Also, he said, edge AI provides the ability to drive value out of the exploding amount of IoT data in a more resource-efficient and thus sustainable way, which is paramount in times of climate change.

[Photo: Jeff Bier, Edge AI and Vision Alliance]

Bier of the Edge AI and Vision Alliance said application requirements would drive the need for edge AI in five key areas (see the sketch after this list):
•  Bandwidth. Even with 5G, there may not be sufficient bandwidth to send all of your raw data up to the cloud.
•  Latency. Many applications require faster response time than you can get from the cloud.
•  Economics. Even if a given application can technically use the cloud in terms of bandwidth and latency, it may be more economical to perform AI at the edge.
•  Reliability. Even if a given application can technically use the cloud in terms of bandwidth and latency, the network connection to the cloud is not always reliable, and the application may need to run regardless of whether it has this connection. In such cases, edge AI is needed. An example is a face-recognition door lock; if the network connection is down, you still want your door lock to work.
•  Privacy. Even if a given application can technically use the cloud in terms of bandwidth, latency, reliability, and economics, there may be many applications that demand local processing for privacy reasons. An example is a baby monitor or bedroom security camera.
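Bier’s five criteria can be read as a simple placement test: if any one of them rules out the cloud, inference belongs at the edge. The Python sketch below is purely illustrative (the Workload fields and thresholds are hypothetical, not any vendor’s API) and simply encodes that reasoning.

```python
# Hypothetical sketch: encode Bier's five criteria as an edge-vs-cloud placement test.
# All fields and thresholds are illustrative assumptions, not a real framework.
from dataclasses import dataclass

@dataclass
class Workload:
    raw_data_mbps: float         # sensor data rate the app would have to upload
    uplink_mbps: float           # bandwidth actually available to the cloud
    max_latency_ms: float        # deadline for a response
    cloud_round_trip_ms: float   # typical network + inference round trip
    cloud_cost_per_month: float  # recurring cost of cloud inference
    edge_cost_per_month: float   # amortized cost of local inference hardware
    link_is_reliable: bool       # can the app tolerate a dropped connection?
    data_is_private: bool        # e.g., baby monitor or bedroom camera footage

def place_inference(w: Workload) -> str:
    """Return 'edge' if any of the five criteria rules out the cloud."""
    if w.raw_data_mbps > w.uplink_mbps:                 # Bandwidth
        return "edge"
    if w.cloud_round_trip_ms > w.max_latency_ms:        # Latency
        return "edge"
    if w.cloud_cost_per_month > w.edge_cost_per_month:  # Economics
        return "edge"
    if not w.link_is_reliable:                          # Reliability (door lock)
        return "edge"
    if w.data_is_private:                               # Privacy
        return "edge"
    return "cloud"

# Example: a face-recognition door lock on a flaky home network
door_lock = Workload(raw_data_mbps=8, uplink_mbps=20, max_latency_ms=300,
                     cloud_round_trip_ms=150, cloud_cost_per_month=5,
                     edge_cost_per_month=2, link_is_reliable=False,
                     data_is_private=True)
print(place_inference(door_lock))  # -> "edge"
```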
HOW SMART SHOULD THE EDGE BE?
This might seem an obvious question, but the reality is that you need to be pragmatic, and each application is unique.

“Smart is generally not the limiting factor; the limit is memory capacity,” said NXP’s Levy. “In practice, memory limits the size of the machine-learning model that can be deployed, especially in the MCU domain. And to go one level deeper here, a machine-learning model for, say, a vision-based application will require more processing power and more memory. Again, processing power is more of a factor when real-time response is required.

“An example I give is a microwave oven with an internal camera to determine what kind of food was put in: A 1- or 2-second response time could be sufficient, thereby enabling the use of something like an NXP i.MX RT1050. The amount of memory will dictate the size of the model, which in turn dictates the number of food classes the machine can recognize. But what if food is inserted that isn’t recognized? Now go to the gateway or the cloud to figure out what it is, then use that information to allow the smart edge device to retrain.
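Levy’s microwave example implies a common pattern: run the small model on the MCU, and only when its confidence is low, ask a gateway or cloud model and keep the sample so the edge model can be retrained later. The Python sketch below illustrates that flow only; run_local_model, classify_in_cloud, and the 0.6 threshold are hypothetical placeholders, not NXP APIs.

```python
# Illustrative sketch of edge inference with a cloud fallback and a retraining
# queue. run_local_model() and classify_in_cloud() are hypothetical stand-ins.
import numpy as np

CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff, tuned per application
retraining_queue = []       # samples the local model could not handle

def run_local_model(image):
    # Placeholder for the on-device classifier (e.g., a small quantized CNN).
    # Here it pretends to be unsure about the dish it was shown.
    return "unknown", 0.3

def classify_in_cloud(image):
    # Placeholder for a larger model behind a gateway or cloud endpoint.
    return "lasagna"

def classify_food(image):
    label, confidence = run_local_model(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label  # fast local path, no network round trip needed
    # Unrecognized food: ask the cloud, and remember the example so the edge
    # model can later be retrained (or updated over the air) with it.
    cloud_label = classify_in_cloud(image)
    retraining_queue.append((image, cloud_label))
    return cloud_label

frame = np.zeros((96, 96), dtype=np.uint8)  # stand-in for a camera frame
print(classify_food(frame))                 # falls back to the cloud label
```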
“To directly answer the question about how much ‘smart’ to include, it all boils down to tradeoffs of performance, accuracy, cost, energy. To add to this, we are also working on an application that uses auto-encoders for another form of ML: anomaly detection. In short, auto-encoders are quite efficient, and one example we implemented took only 3 Kbytes and did inferencing in 45 to 50 µs — easily the job of an MCU.”
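The usual way an auto-encoder is used for anomaly detection is to train it on “normal” data only and flag any input it cannot reconstruct well; with a couple of tiny dense layers, the weights fit in a few kilobytes, which is why it suits an MCU. The NumPy sketch below shows that principle with made-up weights and sizes; it is not NXP’s 3-Kbyte implementation.

```python
# Minimal auto-encoder anomaly detector, assuming pre-trained weights are
# available. Layer sizes, weights, and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# A 16 -> 4 -> 16 dense auto-encoder: about 128 weights, only a few hundred
# bytes in float32, small enough for an MCU.
W_enc = rng.normal(scale=0.1, size=(16, 4))
W_dec = rng.normal(scale=0.1, size=(4, 16))

def reconstruct(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W_enc)  # compress to a 4-value bottleneck
    return hidden @ W_dec        # try to rebuild the original 16 values

def is_anomaly(x: np.ndarray, threshold: float = 0.05) -> bool:
    # A sample the auto-encoder cannot reconstruct well is flagged as anomalous.
    error = float(np.mean((x - reconstruct(x)) ** 2))
    return error > threshold

# Example: a window of 16 vibration samples from a sensor
window = rng.normal(size=16)
print(is_anomaly(window))  # with trained weights, normal windows stay below the threshold
```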
Infineon’s Furtner echoed the pragmatic approach. “Edge AI is heavily constrained concerning energy consumption, space, and cost,” he said. “In this case, the question is not how much smartness we should put into the edge but how much smartness we can afford in the edge. And the follow-up question would be, which of the known AI techniques can be slimmed down in a way that they are sufficiently ‘tiny’ to be applied in the edge?

“So certainly, power consumption limits the amount of endpoint intelligence. These endpoints are often powered by small batteries or even depend on energy harvesting. Data transmission costs a lot of energy, too.”

Consider a smart sensor, Furtner said: “For local AI to function properly under these circumstances, it has to be optimized for its specific properties and behaviors. In addition, some new sensors will only become possible through the embedded AI — for instance, environmental sensors for liquids and gases. There are many reasons for endpoint AI. Intelligent data usage and reduction or fast real-time local reactions are obvious ones. Data privacy and security are others. Massive sensor raw data can be processed where it is generated, whereas intensive compute tasks remain in the cloud. Recent advances in lowest-power neural computing — for example, edge TPUs, neuromorphic technologies — shift this boundary in favor of the edge and endpoint nodes.”

[Photo: Andrew Grant, Imagination Technologies]

Imagination Technologies’ Grant said, “To our mind, it’s obvious to put as much intelligence as possible in the edge, and then software optimization can be used during the lifetime of the device.” He likened the approach to the games console industry model: A vendor releases a new console that later is optimized with software updates over the life of the hardware.

Adding a neural network accelerator to a system-on-chip (SoC) is not significant from a cost or size viewpoint, said Grant, “so the opportunities to speed up at the edge are really dramatic.”

Arm’s Bergey said, “As heterogeneous compute becomes ubiquitous throughout infrastructure, it is critical that we are able to identify where it makes most sense to process data, and this will vary from application to application and may even change based on time of the day. The market requires solutions that will enable the handing off of different roles to different layers of AI in order to gain the kind of overall insight that drives real business transformation. At the edge, AI is set to play a dual role. At a network level, it could be used to analyze the flow of data for network prediction and network function management, intelligently distributing that data to wherever it makes most sense at the time, whether that’s the cloud or elsewhere.”

Adesto’s Intrater said that “the decision of how much ‘smarts’ should be put at the edge is dependent on the specific application and how much latency it can handle (not much for real-time mission-critical applications), what the power envelope is (very small for battery-operated devices), and security and privacy concerns, as well as whether there is an internet connection. Even with an internet connection, you wouldn’t want to send everything to the cloud for analytics, because of the bandwidth expense. The division of the smarts across the edge and the cloud is about balancing all these concerns.”

He continued, “You could also do AI on a local edge server, and of course, training and analytics are often done in the cloud. Often, it is not a straightforward decision of where AI happens; the smarts are often distributed, with some happening in the cloud and some in the edge device. A typical AI system has such a split between which AI is done locally and

[Image: Adesto Technologies is exploring in-memory AI computing with RRAM technology, in which individual memory cells serve as both storage elements and compute resources. (Image: Adesto Technologies)]
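The reason memory cells can double as compute, as in the Adesto image above, is that a resistive crossbar physically performs a matrix-vector multiply: each cell’s conductance stores a weight, input voltages drive the rows, and the current summed on each column is a dot product (Ohm’s law plus Kirchhoff’s current law). The NumPy sketch below only models that principle; the array size and values are arbitrary, not Adesto’s device parameters.

```python
# Conceptual model of in-memory (crossbar) compute: column currents I = V @ G,
# where the conductances G double as stored weights. Values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

G = rng.uniform(0.0, 1.0, size=(8, 4))  # 8x4 crossbar: cell conductances = stored weights
v = rng.uniform(0.0, 0.5, size=8)       # input voltages applied to the 8 rows

# Each column current is the sum of (voltage * conductance) down that column,
# so simply reading the column currents performs the multiply-accumulate.
column_currents = v @ G

# The same result a digital MAC unit would have to compute explicitly:
assert np.allclose(column_currents, np.array([v @ G[:, j] for j in range(4)]))
print(column_currents)
```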
