Forming a Benchmark for ML at the Edge
EEMBC pins down variables to compare edge inference chips
eetasia.com, Aug. 08, 2019 –
A new benchmark for machine learning inference chips aims to ease comparisons between processing architectures for embedded edge devices.
Developed by EEMBC (the Embedded Microprocessor Benchmark Consortium), MLMark uses three of the most common object detection and image classification models: ResNet-50, MobileNet and SSDMobileNet. The first hardware to be supported comes from Intel, Nvidia and Arm, with scores already available for Nvidia and HiSilicon (Arm-based) parts.
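To make the numbers concrete, the kind of measurement an inference benchmark reports (latency and throughput for a given model) can be sketched in a few lines. The snippet below is purely illustrative and is not MLMark's actual harness; it assumes PyTorch and torchvision are installed, and it times MobileNetV2 with random weights on a single device, which is enough to show the shape of the measurement even though real benchmark scores also depend on accuracy, precision, and batch size.

```python
# Illustrative sketch only -- NOT the MLMark harness.
# Assumes the PyTorch and torchvision packages are available.
import time
import torch
import torchvision

# Random weights are fine here: we are timing compute, not checking accuracy.
model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image, batch size 1

with torch.no_grad():
    for _ in range(10):              # warm-up iterations
        model(dummy)
    iterations = 100
    start = time.perf_counter()
    for _ in range(iterations):      # timed iterations
        model(dummy)
    elapsed = time.perf_counter() - start

print(f"Mean latency: {1000 * elapsed / iterations:.2f} ms "
      f"({iterations / elapsed:.1f} inferences/sec)")
```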
The market for machine learning inference compute in edge devices, while still emerging, has huge growth potential. EEMBC has seen strong demand for a new benchmark to help counter industry hype about performance, said Peter Torelli, the group's president.
"Organising and taxonomising these different architectures begins with reporting, and having a database of scores to see how they are performing on what are considered the standard models today," he said.
Moving Target
Developing a benchmark for such a rapidly evolving space is challenging, to say the least. Part of the problem is that for any machine learning (ML) model, there are many variables that can be optimised, as well as different model formats and training datasets.