
Procyon® AI Text Generation Benchmark

Simplifying local LLM AI performance testing

Testing AI LLM performance can be complicated and time-consuming: full AI models require large amounts of storage space and bandwidth to download, and variables such as quantization, conversion, and variations in input tokens can reduce a test's reliability if not configured correctly.

The Procyon AI Text Generation Benchmark provides a more compact and easier way to repeatedly and consistently test AI performance with multiple LLM AI models. We worked closely with many AI software and hardware leaders to ensure our benchmark tests take full advantage of the local AI accelerator hardware in your systems.

Buy now

Prompt 7 (RAG Query): How can benchmarking save time and money for my organization? How to choose a reference benchmark score for RFPs? Summarize how to efficiently test the performance of PCs for Enterprise IT. Answer based on the context provided.
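The RAG query above works by retrieving relevant context passages and prepending them to the prompt before generation. The sketch below illustrates that pattern only; the retriever, corpus, and helper names are illustrative assumptions, not Procyon code.

```python
# Minimal sketch of a RAG-style query (illustrative only, not Procyon's implementation).

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retriever: rank documents by shared words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_rag_prompt(query: str, corpus: list[str]) -> str:
    """A RAG query prepends retrieved context and asks the model to answer from it."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer based on the context provided."

corpus = [
    "Benchmarking lets Enterprise IT compare PCs with consistent, repeatable workloads.",
    "A reference benchmark score in an RFP sets a minimum performance bar for vendors.",
    "Office chairs should be ergonomic.",
]
prompt = build_rag_prompt("How can benchmarking save time for my organization?", corpus)
```

A non-RAG query would send the question alone; the difference the benchmark exercises is the extra input tokens that retrieved context adds to the workload.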

Results and insights


Built with input from industry leaders

  • Built with input from leading AI vendors to take full advantage of next-generation local AI accelerator hardware.
  • Seven prompts simulating multiple real-world use cases, including both RAG (Retrieval-Augmented Generation) and non-RAG queries.
  • Designed to run consistent, repeatable workloads, minimizing common AI LLM workload variables.
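A consistent, repeatable workload means fixing the variables that would otherwise distort a measurement: the same prompt, the same output length, and several averaged runs. This is a generic sketch of that idea under stated assumptions; `fake_generate` is a stand-in, not the benchmark's engine.

```python
# Illustrative sketch (not Procyon's implementation) of a repeatable
# tokens-per-second measurement: identical inputs, fixed output length,
# and the median over several runs to damp run-to-run noise.
import statistics
import time

def fake_generate(prompt: str, max_new_tokens: int) -> list[str]:
    """Stand-in for an LLM: always emits exactly max_new_tokens tokens."""
    return ["tok"] * max_new_tokens

def measure_tokens_per_second(prompt: str, max_new_tokens: int, runs: int = 3) -> float:
    """Time several identical runs and report the median throughput."""
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        tokens = fake_generate(prompt, max_new_tokens)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / max(elapsed, 1e-9))
    return statistics.median(rates)

rate = measure_tokens_per_second("Summarize benchmarking for Enterprise IT.", max_new_tokens=128)
```

Holding the prompt and output length constant is what makes scores comparable across devices; varying either changes the workload itself, not just the hardware under test.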

Detailed Results

  • Get in-depth reporting on how system resources are used during AI workloads.
  • Reduced install size compared with downloading and testing entire AI models.
  • Easily compare results between devices to help identify the best systems for your use cases.

AI Testing Simplified

  • Easily and quickly test using four industry-standard AI models of varying parameter sizes.
  • Get a real-time view of responses being generated during the benchmark.
  • Test with all supported inference engines in one click, or configure the run based on your preferences.

Developed with industry expertise


Procyon benchmarks are designed for industry, enterprise, and press use, with tests and features created specifically for professional users. The Procyon AI Text Generation Benchmark was designed and developed with industry partners through the UL Benchmark Development Program (BDP). The BDP is a UL Solutions initiative that aims to create relevant and impartial benchmarks through close cooperation with program members.

Inference engine performance

With the Procyon AI Text Generation Benchmark, you can measure the performance of dedicated AI processing hardware and verify inference engine implementation quality with tests based on a heavy AI text generation workload.

Designed for professionals

We created our Procyon AI Inference Benchmarks for engineering teams who need independent, standardized tools for assessing the general AI performance of inference engine implementations and dedicated hardware.

Fast and easy to use

The benchmark is easy to install and run, with no complicated configuration required. Run the benchmark using the Procyon application or via the command line. View benchmark scores and charts, or export detailed result files for further analysis.

Procyon AI Text Generation Benchmark

Free trial

Request a trial

Site license

Request a quote

Press license
  • Annual site license for Procyon AI Text Generation Benchmark.
  • Unlimited number of users.
  • Unlimited number of devices.
  • Priority support via email and phone.

BDP

Contact us

Learn more


The Benchmark Development Program™ is a UL Solutions initiative to expand partnerships with technology companies.

OEMs, ODMs, component manufacturers, and their suppliers are invited to join us in developing new benchmarks for AI workloads. Please contact us for details.

System requirements

All ONNX models

Storage: 18.25 GB

All OpenVINO models

Storage: 15.45 GB

Phi-3.5-mini

ONNX with DirectML
  • 6GB VRAM (discrete GPU)
  • 16GB System RAM (iGPU)
  • Storage: 2.15 GB
Intel OpenVINO
  • 4GB VRAM (discrete GPU)
  • 16GB System RAM (iGPU)
  • Storage: 1.84 GB

Llama-3.1-8B

ONNX with DirectML
  • 8GB VRAM (discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 5.37 GB

Intel OpenVINO

  • 8GB VRAM (discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 3.88 GB

Mistral-7B

ONNX with DirectML
  • 8GB VRAM (discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 3.69 GB
Intel OpenVINO
  • 8GB VRAM (discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 3.48 GB

Llama-2-13B

ONNX with DirectML
  • 12GB VRAM (discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 7.04 GB
Intel OpenVINO
  • 10GB VRAM (discrete GPU)
  • 32GB System RAM (iGPU)
  • Storage: 6.25 GB
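The storage figures above are mostly driven by quantization: a model's on-disk weight size is roughly its parameter count times the bits stored per weight. The back-of-envelope estimate below is an assumption for illustration, not UL's sizing method, and ignores metadata and format overhead.

```python
# Rough estimate of quantized model weight size (illustrative assumption,
# not how UL computes the storage figures in the table above).

def estimate_model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in decimal GB for a model's quantized weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 4-bit 8B-parameter model needs about 4 GB for weights alone,
# which is the same ballpark as the Llama-3.1-8B storage figures listed above.
approx = estimate_model_size_gb(8, 4)
```

This is why downloading full 16-bit models for ad-hoc testing costs several times more storage and bandwidth than a benchmark that ships quantized variants.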

Support

Latest 1.0.73.0 | December 9, 2024

Languages

  • English
  • German
  • Japanese
  • Portuguese (Brazilian)
  • Simplified Chinese
  • Spanish
