google/XNNPACK

High-efficiency floating-point neural network inference operators for mobile, server, and Web

Languages: C, C++, Assembly, Shell, Starlark, CMake, Other
Topics: cpu, neural-network, inference, multithreading, simd, matrix-multiplication, neural-networks, convolutional-neural-networks, convolutional-neural-network, inference-optimization, mobile-inference
This page shows star and fork statistics for the google/XNNPACK repository. As of 30 April 2024, the repository has 1,553 stars and 293 forks.

XNNPACK

XNNPACK is a highly optimized solution for neural network inference on ARM, x86, WebAssembly, and RISC-V platforms. XNNPACK is not intended for direct use by deep learning practitioners and researchers; instead, it provides low-level performance primitives for accelerating high-level machine learning frameworks such as TensorFlow Lite, TensorFlow.js, PyTorch, ONNX Runtime, and MediaPipe.

Supported Architectures

- ARM64 on Android, iOS, macOS, Linux, and Windows
- ARMv7 (with NEON) on Android
- ARMv6...