UL Procyon® AI Inference Benchmark for Windows
Benchmark AI performance using various inference engines
Machine learning applications are growing rapidly as it becomes easier to integrate and deploy AI solutions into everyday software. With the demand for faster machine learning performance, major hardware vendors have been optimizing their inference engines to deliver the best possible performance on their hardware.
The UL Procyon AI Inference Benchmark for Windows gives insights into how AI inference engines perform on your hardware in a Windows environment, helping you decide which engines to support to achieve the best performance. The benchmark features several AI inference engines from different vendors, with benchmark scores reflecting the performance of on-device inferencing operations.
In the benchmark, common machine-vision tasks are executed using a range of popular, state-of-the-art neural networks. Measure AI accelerator performance by comparing it with the same operations run on the CPU or GPU.
- Tests based on common machine-vision tasks using state-of-the-art neural networks.
- Measure inference performance using the CPU, GPU or dedicated AI accelerators.
- Benchmark with NVIDIA® TensorRT™, Intel® OpenVINO™, Qualcomm® SNPE, and Microsoft® Windows ML.
- Verify inference engine implementation and compatibility.
- Optimize drivers for hardware accelerators.
- Compare float and integer-optimized model performance.
- Simple to set up and use via the UL Procyon application or via command-line.
Inference engine performance
With the UL Procyon AI Inference Benchmark for Windows, you can measure the performance of dedicated AI processing hardware and verify inference engine implementation quality with tests based on common machine-vision tasks.
Designed for professionals
We created the UL Procyon AI Inference Benchmark for Windows for engineering teams who need independent, standardized tools for assessing the general AI performance of inference engine implementations and dedicated hardware.
Fast and easy to use
The benchmark is easy to install and run—no complicated configuration is required. Run the benchmark using the UL Procyon application or via command-line. View benchmark scores and charts or export detailed result files for further analysis.
Developed with industry expertise
UL Procyon benchmarks are designed for industry, enterprise, and press use, with tests and features created specifically for professional users. The AI Inference Benchmark for Windows was designed and developed with industry partners through the UL Benchmark Development Program (BDP). The BDP is an initiative from UL Solutions that aims to create relevant and impartial benchmarks by working in close cooperation with program members.
Neural network models
MobileNet V3 is a compact visual recognition model that was created specifically for mobile devices. The benchmark uses MobileNet V3 to identify the subject of an image, taking an image as the input and outputting a list of probabilities for the content in the image. The benchmark uses the large minimalistic variant of MobileNet V3.
Inception V4 is a state-of-the-art model for image classification tasks. Designed for accuracy, it is a much wider and deeper model than MobileNet. The benchmark uses Inception V4 to identify the subject of an image, taking an image as the input and outputting a list of probabilities for the content identified in the image.
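Classification models such as MobileNet V3 and Inception V4 turn an image into a list of per-class probabilities, typically by applying a softmax to the network's raw output scores. A minimal sketch of that final step (the logit values are illustrative, not benchmark output):

```python
import math

def softmax(logits):
    """Convert raw model logits into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for three classes, e.g. cat, dog, bird
probs = softmax([2.0, 1.0, 0.1])
```

The probabilities always sum to one, and the highest-scoring class is reported as the most likely content of the image.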
YOLO, which stands for You Only Look Once, is an object detection model that aims to identify the location of objects in an image. The benchmark uses YOLO V3 to produce bounding boxes around objects with probabilities on the confidence of each detection.
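Object detectors like YOLO are commonly evaluated by comparing predicted bounding boxes against reference boxes using intersection-over-union (IoU). This sketch shows the standard IoU computation for axis-aligned boxes; it is a generic illustration, not Procyon's scoring method:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU of 1.0 means the boxes coincide exactly; detections below a chosen IoU threshold are typically treated as misses.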
DeepLab is an image segmentation model that aims to cluster the pixels of an image that belong to the same object class. Semantic image segmentation labels each region of the image with a class of object. The benchmark uses MobileNet V2 for feature extraction, enabling fast inference with little difference in quality compared with larger models.
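A semantic segmentation model like DeepLab produces per-class scores for every pixel; the output mask assigns each pixel the class with the highest score. A minimal sketch of that per-pixel argmax (the scores below are illustrative):

```python
def segment(logit_map):
    """Assign each pixel the class with the highest score.

    logit_map[y][x] is a list of per-class scores for that pixel.
    """
    return [[max(range(len(px)), key=px.__getitem__) for px in row]
            for row in logit_map]

# 2x2 image, two classes (0 = background, 1 = object); scores illustrative
mask = segment([[[0.9, 0.1], [0.2, 0.8]],
                [[0.6, 0.4], [0.3, 0.7]]])
```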
Real-ESRGAN is a super-resolution model trained on synthetic data for increasing the resolution of an image, reconstructing a higher-resolution image from a lower-resolution counterpart. The model used in the benchmark is the general image variant of Real-ESRGAN, which upscales a 250x250 image to a 1000x1000 image.
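For contrast with a learned super-resolution model, the naive baseline is nearest-neighbour upscaling, where each source pixel is simply repeated as a block; the 250x250 to 1000x1000 task above corresponds to a 4x factor. A sketch of the baseline (Real-ESRGAN instead reconstructs plausible detail rather than repeating pixels):

```python
def upscale_nearest(image, factor):
    """Nearest-neighbour upscaling: each pixel becomes a factor x factor block."""
    return [[image[y // factor][x // factor]
             for x in range(len(image[0]) * factor)]
            for y in range(len(image) * factor)]

# 2x2 "image" upscaled by 2x to 4x4
big = upscale_nearest([[1, 2], [3, 4]], 2)
```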
ResNet 50 is an image classification model that provides a novel way of adding more convolutional layers with the use of residual blocks. Its release enabled the training of deep neural networks previously not possible. The benchmark uses ResNet 50 to identify image subjects, outputting a list of probabilities for the content identified in the image.
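The residual block at the heart of ResNet adds the block's learned transformation back onto its input via a skip connection, so each block only has to learn a correction to the identity. A minimal sketch of that idea on plain vectors (not an actual convolutional layer):

```python
def residual_block(x, transform):
    """ResNet's core idea: output = f(x) + x, where f is the learned residual."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

# If the residual is zero, the block passes its input through unchanged,
# which is what makes very deep stacks of such blocks trainable.
out = residual_block([1.0, 2.0], lambda v: [0.0 for _ in v])
```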
Integer and float models
The benchmark includes both float- and integer-optimized versions of each model. Each model runs in turn on all compatible hardware in the device. Select the device and inference precision for each runtime to compare performance between integer and float models.
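Integer-optimized models store weights and activations in a low-precision format such as int8, trading a small amount of accuracy for speed. The usual scheme is linear quantization; this sketch shows the round-trip (the scale value is illustrative, and real engines calibrate it per tensor):

```python
def quantize(values, scale, zero_point=0):
    """Map float values into the int8 range [-128, 127] using a linear scale."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(q, scale, zero_point=0):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

scale = 0.1  # illustrative; chosen during calibration in practice
q = quantize([0.25, -1.0, 3.14], scale)
approx = dequantize(q, scale)
```

The recovered values differ from the originals by at most about one scale step, which is why well-calibrated integer models stay close to float accuracy while running faster on integer hardware.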
Results and insights
Compare AI inference performance with integer and float models using a CPU, GPU or dedicated AI accelerator.
See inference times for each neural network test when using your selected inference engine and processing unit.
Get detailed metrics on how CPU and GPU temperatures, clock speeds and usage change during the benchmark run.
Benchmark Development Program
The Benchmark Development Program™ is an initiative from UL Solutions for building partnerships with technology companies.
OEMs, ODMs, component manufacturers and their suppliers are invited to join us in developing new AI processing benchmarks. Please contact us for details.
Minimum system requirements
- OS: Windows 10, 64-bit or Windows 11
- Processor: 2 GHz dual-core CPU
Latest version 1.0 | February 1, 2023
Languages
- Portuguese (Brazil)
- Simplified Chinese