mlcommons/inference

Reference implementations of MLPerf™ inference benchmarks

Languages: Python, C++, Jupyter Notebook, Shell, CSS, Makefile, Other
As of 27 Apr 2024, the mlcommons/inference repository has 950 stars and 455 forks.

MLPerf™ Inference Benchmark Suite

MLPerf Inference is a benchmark suite for measuring how fast systems can run models in a variety of deployment scenarios. Please see the MLPerf Inference benchmark paper for a detailed description of the benchmarks along with the motivation and guiding principles behind the benchmark suite.

If you use any part of this benchmark (e.g., reference implementations, submissions, etc.), please cite the following:

@misc{reddi2019mlperf, title={MLPerf Inference Benchmark}, ...
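The deployment scenarios the suite defines (SingleStream, MultiStream, Server, and Offline) differ mainly in how queries are issued to the system under test and which metric is reported. The toy sketch below illustrates that distinction only; it is not the actual LoadGen API, and `dummy_model`, `single_stream`, and `offline` are hypothetical stand-ins:

```python
import time
import statistics

def dummy_model(sample):
    # Hypothetical stand-in for an inference call; the real benchmarks
    # run models such as ResNet or BERT through LoadGen.
    return sample * 2

def single_stream(samples):
    # SingleStream-style measurement: issue one query at a time and
    # report a tail latency (here, roughly the 90th percentile).
    latencies = []
    for s in samples:
        t0 = time.perf_counter()
        dummy_model(s)
        latencies.append(time.perf_counter() - t0)
    return statistics.quantiles(latencies, n=10)[-1]

def offline(samples):
    # Offline-style measurement: issue the whole batch at once and
    # report throughput in samples per second.
    t0 = time.perf_counter()
    for s in samples:
        dummy_model(s)
    return len(samples) / (time.perf_counter() - t0)
```

In the real suite, the LoadGen component generates the query traffic for each scenario and validates the latency constraints; the reference implementations only supply the model and the query-handling callbacks.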