UL Procyon AI Image Generation Benchmark Now Available

March 25, 2024

Our new AI inference benchmark, the UL Procyon AI Image Generation Benchmark, is now available!

The UL Procyon AI Image Generation Benchmark provides a consistent, accurate, and understandable workload for measuring the AI performance of high-end hardware. It was built with input from industry members to ensure fair and comparable results across all supported hardware.


To better measure the performance of both mid-range and high-end discrete graphics cards, this benchmark contains two tests built using different versions of the Stable Diffusion model. We hope to add more tests in the future to support other performance categories.

The Stable Diffusion XL (FP16) test is our most demanding AI inference workload, with only the latest high-end GPUs meeting the minimum requirements to run it. For mid-range discrete GPUs, the Stable Diffusion 1.5 (FP16) test is our recommended test.

Test performance across multiple AI Inference Engines

As with our AI Computer Vision Benchmark, you can freely switch between several leading inference engines. This lets you compare performance differences between engines, compare hardware using the same engine, or compare best-case AI inference performance across devices. By default, the benchmark selects the optimal inference engine for the system's hardware.

Currently, the AI Image Generation Benchmark supports the following inference engines. We plan to add support for more inference engines in the future to provide optimal performance for all supported AI hardware.

  • Intel® OpenVINO™
  • NVIDIA® TensorRT™ 
  • ONNX® Runtime with DirectML™


UL Procyon Benchmarking Suite

Find out more about the UL Procyon AI Image Generation Benchmark on the UL Solutions Benchmarks website.