MLPerf Inference

The MLPerf inference benchmark measures how fast a system can perform ML inference using a trained model. It targets a wide range of systems, from mobile devices to servers. To learn more, read the overview, read the inference rules, or consult the reference implementation of each benchmark. If you intend to submit results, please read the submission rules carefully. The v0.5 inference results are available.