Procyon AI Inference now available on macOS
April 8, 2024
Last month, we committed to further developing our AI Inference Benchmarks offering with the release of our AI Image Generation Benchmark, complementing our AI Computer Vision Benchmark. Today, we’re adding support for the Core ML AI inference engine to our AI Computer Vision Benchmark. With this update, UL Procyon can now be used to measure the AI Inference performance of computers running macOS on Apple Silicon.
The UL Procyon AI Computer Vision Benchmark provides insights into how AI inference engines perform, helping you decide which engines to use for the best performance on the hardware and platforms you plan to target. The benchmark supports several leading AI inference engines, with benchmark scores reflecting the performance of on-device inferencing operations.
Running the Computer Vision Benchmark on macOS with Core ML performs the same AI Inference workload as on Windows, and the benchmark is launched from the command line. On macOS, you can run the workload with four combinations of accelerators:
- CPU
- CPU + GPU
- CPU + Neural Engine
- All (CPU + GPU + Neural Engine)
All operators of the AI models in the AI Computer Vision Benchmark are supported by the Neural Engine and GPU found in current-generation Mac hardware.
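For context, these four combinations correspond to Core ML's compute-unit options. The Swift sketch below is an illustration only, assuming a compiled Core ML model on disk (the file path is a placeholder, not something shipped with the benchmark):

```swift
import CoreML

// The benchmark's four accelerator combinations map onto Core ML's
// compute-unit options:
//   CPU                              -> .cpuOnly
//   CPU + GPU                        -> .cpuAndGPU
//   CPU + Neural Engine              -> .cpuAndNeuralEngine  (macOS 13+)
//   All (CPU + GPU + Neural Engine)  -> .all
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine

// Path to a compiled Core ML model (.mlmodelc); placeholder for illustration.
let modelURL = URL(fileURLWithPath: "/path/to/YourModel.mlmodelc")

do {
    // Core ML dispatches each supported operator to the requested accelerators.
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)
    print("Loaded model with compute units: \(configuration.computeUnits.rawValue)")
    // ... build an MLFeatureProvider input and time model.prediction(from:) calls ...
} catch {
    print("Failed to load model: \(error)")
}
```

The benchmark's four accelerator choices correspond to these compute-unit settings; results can differ depending on which operators each accelerator supports.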
When comparing inference engines, it’s also important to consider accuracy in addition to raw performance. There are notable trade-offs between inference precision and performance, especially when comparing results from FP16 and INT inference engines. We’ve run our own tests measuring the accuracy of the inference engines supported by the Procyon AI Inference Benchmarks. For more information, please see this page on the UL Solutions Benchmarks site.
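As a rough, hypothetical illustration of why precision affects results, the toy Swift snippet below applies simple symmetric INT8 quantization to a handful of floating-point weights and prints the round-trip error. It is not how the benchmark or any particular inference engine quantizes models:

```swift
import Foundation

// Toy symmetric INT8 quantization: purely illustrative, not the method used
// by the benchmark or by any particular inference engine.
let weights: [Float] = [0.0312, -0.418, 0.907, -0.052, 0.266]

// Choose a scale so the largest magnitude maps to 127.
let scale = weights.map { abs($0) }.max()! / 127.0

// Quantize to INT8, then dequantize back to floating point.
let quantized = weights.map { Int8(($0 / scale).rounded()) }
let restored = quantized.map { Float($0) * scale }

for (original, recovered) in zip(weights, restored) {
    // The difference is the rounding error introduced by 8-bit quantization,
    // which accumulates across a model and can shift its measured accuracy.
    print(String(format: "%.4f -> %.4f (error %+.5f)", original, recovered, original - recovered))
}
```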
This is a free update to all UL Procyon customers with a valid annual license for UL Procyon AI Inference Benchmarks. You only need to update to the latest version of UL Procyon to access it.
You can read more about the UL Procyon AI Computer Vision Benchmark on our website.