EE Times Europe, November 2021

OPINION | NEUROMORPHIC COMPUTING

The Status of AI at the Edge? It's Complicated

By Rob Telson, BrainChip Holdings Ltd.

"Edge AI" is something of a misnomer. Most smart devices, IoT, and other edge implementations don't actually process data at the edge. Edge devices aren't like smartphones or tablets, equipped with a processor, storage, and software and able to perform compute tasks. What most would call edge AI is cloud-based AI processing of data collected at the edge. The results are then sent back to the device, and often back to the cloud for further processing, aggregation, and centralization.

Edge devices are largely "dumb" in that sense. Your smart home-automation gadget isn't terribly smart. It records your command, whether from your voice, from its companion app, or from a setting you've configured previously. Perhaps it does some minor preprocessing. It then sends the command over its internet connection to a physical server in a data center somewhere, and it waits for the instruction to turn the light on or off.

This edge-to-cloud workflow, obviously, takes time. Fortunately, edge-to-cloud works well for many applications. Unfortunately, AI is not one of them.

AI at the edge (true AI at the edge, meaning running neural networks on the smart device itself) is a thorny problem, or rather a set of problems: limited processing resources, small storage capacities, insufficient memory, security concerns, electrical power requirements, and limited physical space on devices. Another major obstacle to designing edge devices capable of AI processing at the edge is excessive cost. Few consumers could afford to upgrade to smart light bulbs if each one cost the equivalent of an iPhone.

But design them we must, because there is an enormous need for AI at the edge. Devices need to learn fast and make decisions in real time.

Consider a security camera that captures an image of an unattended package at the airport. The camera must decide whether the package is a threat, and quickly. Or consider an autonomous vehicle's image sensor that sees an object in the road, must decide whether it is a plastic bag or a rock, and then must decide whether to swerve.

These may be extreme examples, but even in less life-or-death situations, latency and distance issues plague edge-to-cloud AI. For those of us in the industry, these can be expressed as "the speed of light over distance at 3-µs/km latency problem." The speed of light is approximately 299,792,458 meters per second, so each additional meter of distance adds about 3.3 ns of latency one way, or roughly 3 µs/km.

By human standards, a few nanoseconds probably don't feel like much. In AI, it's a lot. Remember, in edge-to-cloud AI, we're talking about hundreds or thousands of kilometers.

So system designers are faced with a few workarounds. To accelerate edge-to-cloud and solve the speed-of-light-over-distance latency problem, we can reduce the distance. Moving the edge closer to the cloud/data center is not too feasible, as, existentially, edge devices are in the field, doing their jobs wherever they're needed. That leaves us with moving the processing closer to where the data originates: the edge device.

One way to do that is to deploy multiple data centers and data closets, also known as lights-out data centers. Simply build data centers/closets close to the edge devices (in other words, everywhere), thereby minimizing distance and improving speed. This doesn't sound too hard! It only requires real estate, construction, and lots of hardware. (In a managed-services setting, a data closet is sometimes marketed as "the edge"; for the sake of clarity, that means "the edge of the cloud," not the edge in our sense.) Factor in the environmental and power costs of proliferating data centers and data closets around the world, and this turns out to be an expensive and unsustainable solution.
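The latency arithmetic earlier in the article is easy to check in a few lines of code. The sketch below is purely illustrative: the `one_way_latency_us` helper and its `velocity_factor` parameter are my own naming, and the fiber figure of roughly 0.67c is a typical value for optical fiber, not one from the article.

```python
# Back-of-envelope propagation latency for edge-to-cloud AI.
# Light in vacuum covers one kilometer in ~3.3 us one way;
# signals in optical fiber travel ~30% slower.

C_VACUUM_M_PER_S = 299_792_458  # speed of light in vacuum, m/s


def one_way_latency_us(distance_km: float, velocity_factor: float = 1.0) -> float:
    """One-way propagation delay in microseconds.

    velocity_factor is the fraction of c at which the signal
    actually travels (~0.67 for typical optical fiber).
    """
    distance_m = distance_km * 1_000
    seconds = distance_m / (C_VACUUM_M_PER_S * velocity_factor)
    return seconds * 1e6


# A device 1,000 km from its data center pays ~3.3 ms each way at
# the speed of light, before any routing, queuing, or inference time.
print(round(one_way_latency_us(1_000) / 1_000, 2))        # ms, vacuum
print(round(one_way_latency_us(1_000, 0.67) / 1_000, 2))  # ms, fiber
```

A round trip over fiber at that distance already approaches 10 ms, which is why physical distance alone can dominate the latency budget of an edge-to-cloud application.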
Or we can dumb down the application
and settle for less effective AI in exchange
for reduced processing requirements. This
may (or may not) deliver improved speeds,
but then the device can’t fulfill its intended
purpose, which is real-time decision-making.
Ultimately, this “solution” is worthless.
Or we can bite the bullet and build edge
devices capable of running AI applications
right there and then, on the device itself.
Remember when I said this was a thorny
problem? It’s thorny, but not impossible.
It’s true that edge devices are often quite small. Picture a wearable medical device. It needs to be sized appropriately for the patient’s comfort and must allow them to carry out their daily activities. Into this device you need to pack sensors, CPUs, GPUs, memory, storage, networking and connectivity, batteries and power management, and perhaps a

