Let’s Talk Edge Intelligence
which is done remotely. Alexa and Siri are a good example, where there are algorithms in the device for voice/keyword recognition, and then the interactions from there take place in the cloud.”
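To make that split concrete, here is a minimal Python sketch of the pattern, entirely our own illustration rather than any vendor's implementation: a lightweight always-on detector runs locally on each audio frame, and only a detection triggers traffic to a hypothetical cloud endpoint.

    import numpy as np

    CLOUD_ENDPOINT = "https://example.com/assistant"  # hypothetical cloud service

    def detect_keyword(frame, threshold=0.8):
        # Stand-in for a small on-device keyword-spotting network; here we
        # fake a confidence score from the frame's average amplitude.
        score = float(np.clip(np.abs(frame).mean() * 10, 0.0, 1.0))
        return score >= threshold

    def send_to_cloud(url, frame):
        # Stage 2 (cloud): the heavyweight speech recognition and dialogue
        # handling happen off-device; here we only log the hand-off.
        print(f"keyword detected -> streaming {frame.size} samples to {url}")

    def process_audio(frames):
        for frame in frames:
            # Stage 1 (edge): cheap, always-on detection, no network needed.
            if detect_keyword(frame):
                send_to_cloud(CLOUD_ENDPOINT, frame)

    # Simulate ten 1-second frames of 16-kHz audio at varying loudness.
    rng = np.random.default_rng(0)
    frames = rng.uniform(-1, 1, (10, 16000)) * rng.uniform(0, 0.3, (10, 1))
    process_audio(frames)

The point of the split is bandwidth and privacy: microphone data stays on the device until the cheap local model decides it is worth sending.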
ENABLING TECHNOLOGIES
The Edge AI and Vision Alliance’s Bier sees “many key enabling technologies” for edge AI. “Perhaps the most obvious is high-performance, energy-efficient, inexpensive processors that are good at running AI algorithms, but there are many others,” he said. “Some of the most important are software tools, to enable efficient use of these processors, and cloud platforms, to aggregate metadata from edge devices and to manage the provisioning and maintenance of edge devices.”
As you’d expect, most of the companies we spoke to provide a range of devices and intellectual property for edge AI. Infineon provides sensors, actuators, and microcontrollers, including neural network accelerators and hardware-security modules for the IoT, said Furtner. “Power efficiency, safety, and security are part of our key competencies. With our portfolio, we help to link the real with the digital world, offering secure, robust, and energy-efficient AI solutions for the edge.”
Ni at Xilinx said that taking AI edge products to market is non-trivial: Engineers need to blend machine-learning technologies with conventional algorithms like sensor fusion, computer vision, and signal transformation. “Optimizing all the workloads to meet end-to-end responsiveness requires an adaptable domain-specific architecture that allows programmability in both hardware and software,” he said. “Xilinx SoCs, FPGAs, and ACAPs provide such adaptable platforms to allow continuous innovation while meeting the end-to-end product requirements.”
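To illustrate the kind of blending Ni describes, the following Python sketch chains a conventional sensor-fusion step and a signal transformation into a machine-learning stage. The stage names and the stub classifier are our own placeholders for whatever blocks a real design would map onto programmable hardware.

    import numpy as np

    def fuse_sensors(camera, radar):
        # Conventional sensor fusion: a simple weighted blend of two
        # co-registered measurement grids (a stand-in for a Kalman-style fuser).
        return 0.7 * camera + 0.3 * radar

    def preprocess(frame):
        # Conventional signal transformation: normalize to zero mean, unit variance.
        return (frame - frame.mean()) / (frame.std() + 1e-8)

    def classify(features):
        # Stand-in for the learned stage (e.g., a CNN on an accelerator);
        # a real system would run a trained network here.
        logits = features.reshape(-1)[:4]  # pretend these are four class scores
        return int(np.argmax(logits))

    camera = np.random.rand(64, 64)
    radar = np.random.rand(64, 64)
    print("predicted class:", classify(preprocess(fuse_sensors(camera, radar))))

In an adaptable architecture of the kind Ni describes, each of those stages could land in programmable logic, a DSP slice, or an AI engine, which is the point of programmability in both hardware and software.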
NXP’s enabling technologies include hardware and software. “There
are customers that use our low-end Kinetis or LPC MCUs for some
smart functionality,” said Levy. “It starts to get more interesting at our
i.MX RT crossover processor level, where we provide integrated MCUs
with Cortex-M7s running at 600 to 1,000 MHz. Our new RT600 includes
an M33 and HiFi4 DSP, whereby we enable medium-performance
machine learning by running in a heterogeneous mode, using the DSP to accelerate various components of a neural network.
“Moving way up the spectrum, our latest i.MX 8M Plus combines four A53s with a dedicated neural processing unit [NPU] that delivers 2.25 TOPS and two orders of magnitude more inference performance (and runs at less than 3 W). This high-end NPU is critical for applications such as real-time speech recognition (i.e., NLP), gesture recognition, and live video face and object recognition.”
From the software perspective, Levy said that NXP provides its eIQ
machine-learning software development environment to enable open-
source ML technologies across the NXP portfolio, from i.MX RT to
i.MX 8 apps processors and beyond. “With eIQ, we give customers the
option to deploy ML on the compute unit of their choice: CPU, GPU, DSP, or NPU. You’ll even see heterogeneous implementations that run a voice application like keyword detection on the DSP, a face-recognition application on the GPU or CPU, and a high-performance video application on the NPU, or any combination thereof.”
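As one hedged illustration of what that per-workload targeting can look like, the sketch below uses the TensorFlow Lite runtime, one of the open-source engines eIQ supports. The model filenames are placeholders, and the delegate library name is our assumption about a typical i.MX Linux image rather than a documented eIQ path.

    from tflite_runtime.interpreter import Interpreter, load_delegate

    def make_interpreter(model_path, unit):
        # Bind a TFLite graph to the chosen compute unit.
        if unit == "npu":
            # Route the graph to the NPU through a hardware delegate; the
            # .so name is an assumption about the board support package.
            return Interpreter(model_path=model_path,
                               experimental_delegates=[load_delegate("libvx_delegate.so")])
        return Interpreter(model_path=model_path)  # default: run on the CPU cores

    # A heterogeneous split in the spirit of Levy's example (paths are
    # placeholders; a DSP target would go through a separate audio framework).
    keyword = make_interpreter("keyword_spotting.tflite", unit="cpu")
    faces = make_interpreter("face_recognition.tflite", unit="cpu")
    video = make_interpreter("video_detection.tflite", unit="npu")

    for name, interp in [("keyword", keyword), ("faces", faces), ("video", video)]:
        interp.allocate_tensors()
        print(name, "input shape:", interp.get_input_details()[0]["shape"])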
Arm’s Bergey said, “As we move toward a world of 1 trillion IoT devices, we’re facing an infrastructural and architectural challenge
that’s greater than ever before — and, as such, the technology we need
to answer this great opportunity is constantly evolving. At Arm, our
focus is on providing highly configurable, scalable solutions that meet the performance and power requirements to enable AI everywhere.”
For AI at the edge, Adesto provides enabling technologies including ASICs with AI accelerators, NOR flash memory for storing the weights in an AI chip used for voice and image recognition, and smart edge servers that connect legacy and new data to cloud analytics such as IBM Watson and Microsoft Azure.
“We are also exploring in-memory AI computing with our RRAM [resistive RAM] technology, where individual memory cells serve as both storage elements and compute resources,” said Intrater. “In this paradigm, the matrices of the deep neural networks become arrays of NVM [non-volatile memory] cells, and the weights of the matrices become the conductance of the NVM cells. The dot-product operations are done by summing the current that results from applying the input voltages onto the RRAM cells.
“Since there is no need to move the weights between the compute resource and the memory, this model can achieve an unrivaled combination of power efficiency and scalability.”
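Intrater’s description maps directly onto Ohm’s and Kirchhoff’s laws: each cell contributes a current equal to its input voltage times its programmed conductance, and the column wire sums those currents into a dot product. The NumPy sketch below is a purely digital simulation of that analog behavior, with illustrative sizes and values.

    import numpy as np

    # Weight matrix of a small layer, stored as cell conductances G (siemens);
    # in an RRAM array, each entry is the programmed conductance of one cell.
    G = np.random.uniform(1e-6, 1e-4, size=(4, 8))  # 4 inputs x 8 outputs

    # Input activations applied as read voltages on the rows (volts).
    V = np.random.uniform(0.0, 0.2, size=4)

    # Ohm's law per cell (I = V * G) and Kirchhoff's current law per column:
    # the summed column currents are the dot products, with no weight movement.
    I_columns = (V[:, None] * G).sum(axis=0)

    assert np.allclose(I_columns, V @ G)  # matches the digital matrix product
    print("column currents (A):", I_columns)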
OUR TAKE
We see a clear distinction between edge AI and endpoint AI: The endpoint is the point at which the physical world interfaces with the digital world. But the way the edge is defined is very elastic. Some vendors say that everything that is not in the data center is the edge; some define endpoints as a subset of the edge.
Ultimately, the definition is irrelevant. It comes down to the application and how much intelligence one can practically place at the endpoint or at the edge. And that involves tradeoffs among memory availability, performance needs, cost, and energy consumption. Those decisions will determine how much inferencing and analysis can be done at the edge, how many neural network accelerators are needed, and whether this is part of a system-on-chip or whether it sits with a CPU, GPU, or DSP. This is not to forget new, innovative ways of looking at solving the challenge, using techniques like in-memory computing and AI.
There is broad consensus on this: How much intelligence you put in depends on your application and requires a pragmatic approach based on the available resources. ■