
Procyon® AI Benchmarks Provide Full Coverage and Actionable Performance Insights

January 13, 2026

Before CES 2026, tech influencer and artificial intelligence (AI) thought leader Harold Sinnott said the annual event feels like a turning point, reflecting that “innovation isn’t happening in silos anymore. AI, energy, mobility, health and creativity intersect in ways that will define the next decade of work, industry and society” (Sinnott, 2025).

AI PCs currently represent nearly a third of the worldwide PC market and are predicted to become the norm by 2029 (Gartner, 2025). Businesses must adapt to this rise in AI capabilities within employees’ computers, and to the added demand they may put on PCs’ central processing units (CPUs), graphics processing units (GPUs), storage and battery life.

This growing link between AI and how it will define the next decade of work is foundational to innovation. Procyon® AI Benchmarks, part of ULTRUS® software from UL Solutions, help businesses keep pace with the rapid evolution of AI, focusing on accuracy in real-world scenarios and relevance to the day-to-day activities professionals perform on their PCs. Our latest patch release offers greater coverage — with neural processing units (NPUs) for our AI Text Generation Benchmark and additional support for our AI Image Generation Benchmark.

The Procyon AI Text Generation Benchmark is now accelerated by NPUs across all major vendors on Microsoft Windows

Our AI Text Generation workloads now support NPUs, delivering faster and more efficient inference across Windows platforms. On Qualcomm devices, this is powered by Qualcomm Gen AI Inference Extensions (GENIE), where the first token runs on the NPU while the remaining inference happens on the CPU. For Intel®, using OpenVINO™, the entire workload runs on the NPU for consistent acceleration. AMD support is enabled through ONNX Runtime combined with the Vitis AI Execution Provider (VAIP), offering two options: a pure NPU mode and a hybrid NPU + GPU mode. In the hybrid mode, similar to GENIE, the first token runs on the NPU while subsequent tokens are processed on the GPU.
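To make the hybrid split concrete, here is a minimal, purely illustrative Python sketch of the pattern described above: the first token comes out of the prompt-processing (prefill) phase on the NPU, and each subsequent token is decoded on the GPU (or, for GENIE, the CPU). The function names and the dummy token arithmetic are our own stand-ins, not Procyon or vendor code.

```python
# Illustrative sketch only (not Procyon or vendor code): how a hybrid
# NPU + GPU text-generation pipeline divides work. Device calls are stubs.

def run_on_npu(prompt_tokens):
    """Stand-in for prefill: the full prompt is processed in one pass
    on the NPU, yielding the first generated token."""
    return sum(prompt_tokens) % 100  # dummy "first token"

def run_on_gpu(prev_token):
    """Stand-in for a single decode step on the GPU, producing the
    next token from the previous one."""
    return (prev_token + 1) % 100  # dummy "next token"

def generate(prompt_tokens, max_new_tokens):
    # First token: produced on the NPU during the prefill phase.
    tokens = [run_on_npu(prompt_tokens)]
    # Remaining tokens: decoded one at a time on the GPU.
    while len(tokens) < max_new_tokens:
        tokens.append(run_on_gpu(tokens[-1]))
    return tokens

print(generate([3, 5, 7], 4))  # → [15, 16, 17, 18]
```

The split reflects the two phases' different characters: prefill is a large, parallel batch of work well suited to an NPU, while decode is a sequential loop of small steps.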

Procyon AI Image Generation updates: AMD XDNA2 support and new default for Intel OpenVINO in SD 1.5 Light

We’re adding AMD XDNA2 NPU support for our AI Image Generation Benchmark in Stable Diffusion (SD) 1.5 Light. The inference engine appears as “ONNX Runtime – RyzenAI” in the graphical user interface (GUI) and runs the model in mixed-precision 8-bit weight and 16-bit activation (w8a16) mode for optimal performance and image quality.

Our latest update adjusts the default precision for our SD 1.5 Light Benchmark with the OpenVINO runtime from int8 to int8 weights with 16-bit activations (w8a16). This improves alignment with QNN and ONNX Runtime – Ryzen™ AI by making the most comparable output image quality setting the default across vendors, for more intuitive comparisons. To support our AI benchmarks’ relevance and accuracy amid industry trends, we’ve also added support for running fp8 with OpenVINO on supported Intel hardware — notably the new Panther Lake generation (Intel Core™ Ultra Series 3 processors) — in SD 1.5 Light through custom definition files.
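For readers unfamiliar with the w8a16 notation, the sketch below shows the basic idea behind mixed precision: weights are stored as 8-bit integers (with a scale factor), while activations stay in 16-bit floats. The function names and the symmetric per-tensor scale choice are illustrative assumptions, not the benchmark's actual quantization scheme.

```python
# Illustrative sketch (not the benchmark's implementation) of w8a16:
# weights quantized to int8, activations kept in 16-bit float.

def quantize_w8(weights):
    """Symmetric per-tensor quantization of float weights to int8."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_w8(weights)        # q values all fit in 8 bits
approx = dequantize(q, scale)
# Activations would remain fp16; the int8 weights round-trip with only
# a small quantization error:
print(max(abs(a - b) for a, b in zip(weights, approx)))
```

Storing weights at 8 bits roughly halves model memory versus fp16 while keeping activations at 16-bit precision, which is why w8a16 is a common quality/performance middle ground across vendor runtimes.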

With these updates, all major NPU vendors are now represented across our AI inference benchmarks for Windows-based hardware, supporting comprehensive coverage and performance insights for the latest platform innovations. Find our release notes here.
