Author: EIS
Release Date: Mar 11, 2020
Last week at Embedded World, representatives from a wide cross-section of the industry took part in the Embedded Vision Everywhere panel discussion.
The informative one-hour session covered use-cases, challenges, and the increasing role of AI in embedded vision systems.
The panel consisted of Jan-Erik Schmitt, managing director of Vision Components; Dr. Christopher Scheubel, executive director of Cubemos; Jason Carlson, CEO of Congatec; Bengt Abel of Still; Dr. Michael Bach, head of development at CST; and Gion-Pitschen Gross, product manager at Allied Vision Technologies.
It was hosted by Dr.-Ing. Peter Ebert, editor-in-chief of trade journal Invision.
Ebert began by asking the panellists what trends have been driving the embedded vision market in recent years.
Broadly, the consensus was that adoption in consumer-facing applications, such as product scanning or facial recognition for travel and advertising, has spurred greater investment in the technology.
One example mentioned was the use of cameras on billboards to detect who, demographically, looks at certain advertisements most frequently and for how long.
Between this added interest and general advancements in embedded computing capabilities, vision applications at the edge have become more viable in commercial and industrial contexts as well.
Each application poses different requirements, however. In asset management, for example, Still’s Abel pointed out, reliability and low cost are key: “nobody wants to pay for logistics!”
In AI use cases, on the other hand, power consumption and exactly where the processing takes place are subject to closer consideration. Scheubel, from Cubemos, suggested that AI calls for a combination of embedded vision and more traditional machine vision: the latter to train the algorithm, the former to apply it at the edge.
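To make that split concrete, a minimal sketch of such a workflow might look like the following: a small classifier is trained on a workstation, then converted for deployment on an embedded device. It assumes TensorFlow and its TFLite converter; the model architecture, input size, and dataset are hypothetical, not anything the panellists described.

```python
import tensorflow as tf

# Train a small classifier on a workstation (the traditional machine vision side).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),     # hypothetical input size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, epochs=5)  # hypothetical dataset

# Convert to a quantised TFLite model suitable for an embedded vision device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("edge_model.tflite", "wb") as f:
    f.write(converter.convert())
```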
Here, a divergence of opinion emerged between the panellists. Congatec CEO Carlson advocated in favour of the firm’s philosophy of “workload consolidation”, that is, bringing multiple applications and OSs onto a single system based around a multicore CPU.
The embedded computing company argues that transmitting the large amounts of data required for AI to the cloud can be slow and limits real-time processing, whereas handling those same tasks at the edge allows faster local execution of AI algorithms and real-time reactions in areas like industrial motion control.
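The edge half of that argument can be illustrated by running the converted model locally with the TFLite interpreter, so no frame ever leaves the device. The sketch below follows on from the hypothetical conversion example above; it is a generic illustration, not Congatec’s actual stack.

```python
import numpy as np
import tensorflow as tf

# Load the converted model on the edge device.
interpreter = tf.lite.Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=np.float32)  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()  # runs entirely on-device; no cloud round trip
scores = interpreter.get_tensor(out["index"])
```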
In response to the workload consolidation approach, the issue of power consumption was raised again. It is particularly relevant to vision applications, as the heat generated by high power consumption can degrade image quality.
Ebert also posed the question of which types of processor are most appropriate for embedded vision applications. The panellists generally agreed that this was dependent on the software being used and the individual customer’s preference.
For example, legacy systems using Windows OSs would likely be better served by x86 processors, whereas lower-cost, targeted applications would be more likely to use Arm instead.
Embedded vision, particularly in concert with AI and facial recognition, raises a number of security and privacy concerns. Carlson noted in response that, even at the edge, AI algorithms can redact identifying features from individuals who aren’t relevant to the specific task being performed.
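One common way to implement the kind of redaction Carlson described is to detect faces in each frame and blur any that fail a relevance test. The sketch below assumes OpenCV’s bundled Haar cascade; the relevance check is a hypothetical placeholder, not any panellist’s actual method.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_bystanders(frame, is_relevant=lambda box: False):
    """Blur every detected face that the (hypothetical) relevance test rejects."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        if not is_relevant((x, y, w, h)):  # placeholder relevance check
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(
                frame[y:y+h, x:x+w], (51, 51), 0)
    return frame
```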
Scheubel pointed out that these concerns are more materially challenging within the EU, owing to the GDPR.
Lastly, Ebert asked the panellists what they believe is required for embedded vision to expand further.
The group was optimistic about the technology’s future, pointing to its current momentum, but suggested that high cost and customisation requirements remain barriers to greater adoption. Allied Vision’s Gross suggested interoperability is key: “standardisation is important for creating embedded vision systems”.