Test LLM performance with the Procyon AI Text Generation Benchmark

December 9, 2024

We’re excited to announce the launch of the Procyon AI Text Generation Benchmark – a new LLM-based AI benchmark for enterprises interested in quantifying local LLM performance and capabilities.


Testing LLM performance can be complicated and time-consuming. Variables such as quantization, model conversion, and variations in input tokens can also reduce a test’s reliability if they are not configured correctly.

The Procyon AI Text Generation Benchmark offers a simpler, more compact way to test AI performance repeatedly and consistently across multiple LLMs. We worked closely with leading AI software and hardware companies to ensure our benchmark takes full advantage of the local AI accelerator hardware in your systems. The sketch below illustrates the kind of metrics involved in this sort of testing.
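For readers curious what measuring local LLM performance looks like in practice, here is a minimal sketch of timing two common metrics – time to first token and output tokens per second – using Hugging Face transformers. It is purely illustrative and not the Procyon benchmark’s implementation; the model name and prompt are placeholder assumptions.

```python
# Illustrative only: timing time-to-first-token and output tokens/second for a
# locally run LLM. Model ID and prompt are placeholders, not the Procyon workload.
import time
from threading import Thread

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

MODEL_ID = "microsoft/Phi-3.5-mini-instruct"  # assumed; any local causal LM works
PROMPT = "Summarise the benefits of on-device AI in three sentences."

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(PROMPT, return_tensors="pt").to(model.device)
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Run generation in a background thread so tokens can be timed as they stream out.
thread = Thread(
    target=model.generate,
    kwargs=dict(**inputs, max_new_tokens=256, do_sample=False, streamer=streamer),
)

start = time.perf_counter()
thread.start()

first_token_time = None
pieces = []
for piece in streamer:  # yields decoded text chunks as they are produced
    if first_token_time is None:
        first_token_time = time.perf_counter() - start
    pieces.append(piece)
thread.join()
total_time = time.perf_counter() - start

# Re-tokenize the output to approximate the number of generated tokens.
output_tokens = len(tokenizer("".join(pieces))["input_ids"])
print(f"Time to first token : {first_token_time:.2f} s")
print(f"Output tokens/second: {output_tokens / total_time:.1f}")
```

Note that even a simple measurement like this depends on fixing the prompt, decoding settings, and output length – exactly the kind of variables the benchmark controls for you.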


This new benchmark supports hardware from a wide range of vendors, including discrete GPUs from AMD, Intel, and NVIDIA via DirectML, and Intel hardware via OpenVINO. Testing with all supported AI models – Phi-3.5-mini, Mistral-7B, Llama-3.1-8B, and Llama-2-13B – lets you measure performance across a broad range of LLM use cases.
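As a rough illustration of what targeting DirectML looks like in application code, the sketch below uses ONNX Runtime’s DirectML execution provider. This is an assumption for illustration only – it is not how the Procyon benchmark itself is configured – and the model path is a placeholder.

```python
# Illustrative only: selecting the DirectML execution provider in ONNX Runtime,
# which can run on discrete GPUs from AMD, Intel, and NVIDIA. Placeholder model path.
import onnxruntime as ort

print("Available providers:", ort.get_available_providers())

session = ort.InferenceSession(
    "model.onnx",  # placeholder path to an exported model graph
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # DirectML, CPU fallback
)
print("Providers in use:", session.get_providers())
```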

The Procyon AI Text Generation Benchmark is available now for owners of Procyon AI Benchmarks and complements the AI Image Generation and AI Computer Vision Benchmarks. With a growing range of local AI benchmarks, we are committed to making Procyon the first choice for enterprises, tech media, and vendors measuring the local AI performance of the latest PCs.