OPINION | EMBEDDED VISION

Embedded Vision at the Tipping Point

By Phil Lapsley, BDTI

A TECHNOLOGY REACHES a tipping point when it hits three milestones: First, it becomes technically feasible to accomplish important tasks with it. Second, it becomes cheap enough to use for those tasks. And third, critically, it becomes sufficiently easy for non-experts to build products with it. Passing those milestones is a great indicator that a technology is poised to spread like wildfire. At this year’s Embedded Vision Summit, held as a virtual event in May, we saw clear evidence that embedded vision has reached this point.

Embedded vision passed the first two milestones a while back. A huge part of putting the technical-feasibility milestone in the rearview mirror was the advent of deep neural networks, which revolutionized the tasks that vision could do. Suddenly, classifying images or detecting objects in messy real-world scenes was possible, in some cases with accuracy surpassing that of humans. To be sure, it wasn’t easy, but it was doable.

Moore’s Law, market economics, and domain-specific architectural innovation took care of the second milestone. Today, for US$4.99, you can buy a tiny ESP32-CAM board that has a dual-core, 240-MHz processor and a 2-MP camera module with an on-board image signal processor and JPEG encoder; it’s a squeeze to do computer vision on it, but it’s certainly possible, and it’s tough to beat the price. If you have more money to spend, your options widen significantly. For example, US$99 will get you an Nvidia Jetson Nano Developer Kit with a quad-core 1.4-GHz CPU, a 128-core Maxwell GPU, and 4 Gbytes of memory — more than enough to do some serious embedded-vision processing.

Best of all, new processors show up monthly and at all price, power, and performance points, often with specialized architectures that boost performance on computer-vision and neural-network inference tasks. Examples include new offerings from Xilinx, Cadence, and Synaptics.

It’s that pesky third milestone, ease of use, that’s been the rub. Sure, deep learning radically changed what vision systems were capable of, but you needed to be a ninja to design neural networks, gather the data needed, and train them, to say nothing of then having to implement them on a resource-constrained embedded system. But that’s really changed in the last few years. Two big shifts have driven that change.

First is that you don’t have to build embedded-vision systems from scratch anymore, thanks to the widespread availability of high-quality, well-supported tools and libraries for vision. The most obvious of these are frameworks such as TensorFlow or PyTorch and libraries like OpenCV.

But widely used task-specific neural networks, such as YOLOv4 or Google Inception, have changed the game. No longer do most developers design a neural network; rather, they pick a free off-the-shelf neural network and train it for their task. (Of course, to train a neural network, you need data. Depending on your application, this may represent a challenging data-collection project, although there is an increasing number of open-source datasets available, as well as techniques to augment your data or reduce the amount of data you need.)
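To make that pick-and-fine-tune workflow concrete, here is a minimal sketch in PyTorch. It is illustrative only: the MobileNetV2 backbone, the two-class (mask/no-mask) head, and the data directory are assumptions chosen for the example, not details from the article.

# A minimal sketch, assuming PyTorch/torchvision: take a pretrained
# MobileNetV2, swap its classifier head for a two-class task
# (e.g., mask / no mask), and fine-tune on your own labeled images.
# The "data/train" directory is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.mobilenet_v2(pretrained=True)             # off-the-shelf backbone
model.classifier[1] = nn.Linear(model.last_channel, 2)   # new two-class head

# Standard ImageNet-style preprocessing for the fine-tuning data.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_data = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Fine-tune: a few epochs of ordinary supervised training.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

From there, deployment tools of the kind discussed below take over the job of squeezing the trained network onto an embedded target.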
These building-block libraries and tools may be chip-vendor–specific. An example is Nvidia’s DeepStream SDK, which simplifies the creation of video-analytics pipelines. Although DeepStream is tied to Nvidia’s Jetson processors, it’s a great example of a vendor providing something closer to a complete solution (as opposed to “just silicon”). BDTI and Tryolabs recently built a face-mask–detection smart camera product using DeepStream and YOLOv4.¹
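For a sense of what DeepStream abstracts, the following is a rough sketch of the kind of GStreamer pipeline a DeepStream application assembles: decode, batch, run inference, draw overlays, display. The input file and the nvinfer configuration file (which would point at a detector such as YOLOv4) are hypothetical placeholders; the actual BDTI/Tryolabs system is described in the reference below.

# A rough sketch, assuming the GStreamer Python bindings on a Jetson:
# decode a video, batch it, run a detector via DeepStream's nvinfer
# element, draw the results, and display them. "input.h264" and
# "detector_config.txt" are hypothetical placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "filesrc location=input.h264 ! h264parse ! nvv4l2decoder ! "
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=detector_config.txt ! "  # run the network
    "nvvideoconvert ! nvdsosd ! "                      # overlay detections
    "nvegltransform ! nveglglessink"                   # display (Jetson)
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error occurs, then clean up.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)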
Second is the availability of tools specifically designed to simplify the process of creating embedded-vision and edge-AI systems. A great example is Edge Impulse, whose tools ease development of embedded machine-learning and -vision systems. For instance, the Edge Impulse platform can be used to train and program an image-recognition neural network for that US$4.99 ESP32-CAM mentioned above.

Similarly, for beefier processors, Intel’s DevCloud for the Edge and OpenVINO tools aim to make vision far easier to implement at the edge.
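As a flavor of that workflow, here is a minimal sketch of single-image inference using OpenVINO’s 2021-era Python API. The IR model files and the test frame are hypothetical placeholders; a real model would first be converted to OpenVINO’s IR format with the Model Optimizer.

# A minimal sketch, assuming OpenVINO's 2021-era Python API: load a
# model already converted to IR format and run one frame through it.
# "model.xml"/"model.bin" and "frame.jpg" are hypothetical placeholders.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Reshape one frame to the network's expected NCHW input layout.
input_name = next(iter(net.input_info))
n, c, h, w = net.input_info[input_name].input_data.shape
frame = cv2.imread("frame.jpg")
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]
blob = blob.astype(np.float32)

# Run inference and fetch the first output tensor.
results = exec_net.infer(inputs={input_name: blob})
output = next(iter(results.values()))
print(output.shape)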
Think back to the 1990s, when wireless communications was the “new new thing.” To start with, it was expensive magic that required a team of RF wizards to make happen. But it reached the tipping point, and today, anyone can buy RF modules for a few dollars to enable wireless communications in an embedded product. In the process, literally billions of wireless units have been shipped, with correspondingly huge economic impact.

Embedded vision is at a similar tipping point, and the Embedded Vision Summit is a great place to watch it happen in real time. ■

REFERENCE
1. “A Mask Detection Smart Camera Using the Nvidia Jetson Nano: System Architecture and Developer Experience.” Embedded Vision Summit, May 25, 2021. bit.ly/2QXncDv

Phil Lapsley is a co-founder of consulting firm BDTI and one of the organizers of the Embedded Vision Summit (bit.ly/3ege1a3).