Tools Move Up the Value Chain to Take the Mystery Out of Vision AI


deal with outdated software, as it provides instant access to the latest version of the Intel Distribution of OpenVINO toolkit and compatible edge hardware.

And third, it offers access to application-specific performance benchmarks in an easy-to-compare, side-by-side format.

(A tutorial on running object-detection models using Intel DevCloud for the Edge is available on our sister site Embedded at https://bit.ly/3fpDJbQ.)

BUILD A MODEL IN THE CLOUD, SEE WHAT HAPPENS LIVE
Another approach is to feed data into a cloud platform to visualize it, create training models, and deploy them on embedded devices. Edge Impulse does just that, offering a cloud-based development environment that aims to make it simple to add machine learning to edge devices without requiring a Ph.D. in machine learning, according to the company.

Its platform enables users to import image data collected in the field, quickly build classifiers to interpret that data, and deploy models back to production low-power devices. A key feature of the Edge Impulse web platform is the ability to view and label all the acquired data, create pre-processing blocks to augment and transform the data, visualize the image dataset, and classify and validate models on training data straight from the user interface.

Because it can be quite hard to build a computer-vision model from scratch, Edge Impulse uses transfer learning to make it easier and faster to train models. This involves piggybacking on a well-trained model and retraining only its upper layers, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets. (A minimal code sketch of the technique appears below.)

With the model designed, trained, and verified, it is then possible to deploy it back to the device. The model can then run on the device without an internet connection, with the inherent benefits of minimal latency and minimal power consumption. The complete model is packaged with pre-processing steps, neural network weights, and classification code in a single C++ library that can be included in the embedded software.

GOING TO A HIGHER LEVEL OF ABSTRACTION
Another approach vendors are taking to cut development time is to offer module-based systems that enable design at a higher level of abstraction. Xilinx said that its new system-on-module (SOM) approach can shave up to nine months off the development time for vision systems by addressing the rising complexity of vision AI as well as the challenges of implementing AI at the edge.

Xilinx recently announced the first product in its new portfolio of SOMs: the Kria K26 SOM, specifically targeting vision-AI applications in smart cities and smart factories, along with an out-of-the-box-ready, low-cost development kit, the Kria KV260 Vision AI starter kit.

Chetan Khona, director of industrial, vision, and healthcare at Xilinx, said at the press briefing to launch the new module family, “Production-ready systems are important for rapid deployment [of embedded-vision AI]. Customers are able to save up to nine months in development time by using a module-based design rather than a device-based design.” He added that with the starter kit, users can get started within an hour, “with no FPGA experience needed.” Users connect the camera, cables, and monitor; insert the programmed microSD card and power up the board; and then select and run an accelerated application of their choice.

The Kria SOM portfolio couples the hardware and software platform with production-ready vision-accelerated applications. These turnkey applications eliminate all the FPGA hardware design work; software developers need only integrate their custom AI models and application code and optionally modify the vision pipeline, using familiar design environments such as the TensorFlow, PyTorch, and Caffe frameworks as well as the C, C++, OpenCL, and Python programming languages. (A condensed runtime sketch also appears below.)
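To make the transfer-learning step concrete, here is a minimal sketch of the general technique in Python with Keras: a pre-trained MobileNetV2 base is frozen so that only a small, newly added classification head is trained on the task-specific images. This illustrates the method in general, not Edge Impulse's actual training code; the 96 × 96 input size, the choice of base model, and the dataset handling are assumptions.

```python
# Minimal transfer-learning sketch (generic Keras, not Edge Impulse's code).
# A well-trained feature extractor is frozen; only the new top layers train.
import tensorflow as tf

NUM_CLASSES = 3  # e.g., the labels assigned in the web UI (placeholder)

# Pre-trained base model with its original classifier removed
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained weights fixed

# New "upper layers" retrained on the small, task-specific dataset
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects of
# (image, one-hot label) batches prepared from the labeled data:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

Because only the small head is trained while the frozen base supplies general image features, the model converges quickly even on modest datasets, which is the property the Edge Impulse workflow relies on.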

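On the Xilinx side, the turnkey accelerated applications hide the low-level details, but for a sense of what running a custom model on the SOM involves underneath, the following condensed sketch follows the pattern of the Vitis AI runtime (VART) Python examples that Xilinx publishes for its DPU accelerator. The model file name, input data, and post-processing are placeholders, and the turnkey Kria applications would normally handle these steps for the developer.

```python
# Condensed sketch of DPU inference via the Vitis AI runtime (VART),
# following the pattern of Xilinx's public examples. "model.xmodel" and
# the zeroed input are placeholders.
import numpy as np
import xir
import vart

# Load the compiled model and find the subgraph mapped to the DPU
graph = xir.Graph.deserialize("model.xmodel")
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu = [s for s in subgraphs
       if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

runner = vart.Runner.create_runner(dpu, "run")
in_dims = tuple(runner.get_input_tensors()[0].dims)    # e.g., (1, H, W, 3)
out_dims = tuple(runner.get_output_tensors()[0].dims)

# A real application would fill this buffer with a preprocessed camera frame
inp = [np.zeros(in_dims, dtype=np.float32)]
out = [np.zeros(out_dims, dtype=np.float32)]

job = runner.execute_async(inp, out)  # launch inference on the DPU
runner.wait(job)                      # block until the job completes
print("top class:", int(np.argmax(out[0])))
```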
Xilinx’s Kria systems-on-module provide pre-built hardware with helpful utilities to allow developers to drop in their differentiation using their preferred design environment. (Source: Xilinx)
