These are the star and fork statistics for the abacaj/mpt-30B-inference repository. As of 8 May 2024, it has 561 stars and 326 forks.
MPT-30B inference code using CPU. Run inference on the latest MPT-30B model using your CPU. This inference code uses a ggml-quantized model. To run the model, we'll use a library called ctransformers, which has Python bindings to ggml. Turn-style chat with history is supported on the latest commit. Video of the initial demo: 2023-06-25.20-13-24.mp4. Requirements: I recommend you use Docker for this model; it will make everything easier for you. Minimum specs: a system with 32 GB of RAM. Recommend to...
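The workflow described above (a ggml-quantized model loaded through ctransformers, queried with turn-style prompts that carry chat history) can be sketched roughly as follows. The model path, prompt template, and generation parameters here are illustrative assumptions, not taken from the repository:

```python
# Sketch of CPU inference on a ggml-quantized MPT-30B model via ctransformers.
# The prompt format and file name below are assumptions for illustration only.
from typing import List, Tuple


def build_prompt(history: List[Tuple[str, str]], user_message: str) -> str:
    """Assemble a turn-style prompt from (user, assistant) history pairs."""
    parts = []
    for user_turn, assistant_turn in history:
        parts.append(f"user: {user_turn}")
        parts.append(f"assistant: {assistant_turn}")
    parts.append(f"user: {user_message}")
    parts.append("assistant:")
    return "\n".join(parts)


def run_inference(model_path: str, prompt: str) -> str:
    """Load the quantized model and generate a completion on the CPU."""
    # ctransformers wraps ggml behind a Hugging Face-style interface;
    # model_type="mpt" selects the MPT architecture.
    from ctransformers import AutoModelForCausalLM

    llm = AutoModelForCausalLM.from_pretrained(model_path, model_type="mpt")
    return llm(prompt, max_new_tokens=256, temperature=0.8)
```

A call might then look like `run_inference("models/mpt-30b.ggml.bin", build_prompt(history, "What is ggml?"))`, where the file name is a placeholder for whatever quantized weights you downloaded.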
repo | languages | stars | stars (weekly Δ) | forks | forks (weekly Δ) |
---|---|---|---|---|---|
GitHubSecurityLab/actions-permissions | Python, Other | 184 | 0 | 17 | 0 |
oracle-devrel/technology-engineering | Python, PLSQL, Shell | 46 | 0 | 8 | 0 |
Infineon/cce-mtb-psoc6-ubm-controller | Assembly, C, HTML | 0 | 0 | 0 | 0 |
flowdriveai/flowpilot | C, Python, C++ | 253 | 0 | 62 | 0 |
trevorwang/retrofit.dart | Dart, Ruby, Shell | 967 | 0 | 216 | 0 |
MetOffice/opsinputs | Fortran, C++, Python | 4 | 0 | 0 | 0 |
labring/FastGPT | TypeScript, JavaScript, HTML | 5k | +105 | 1k | +41 |
apache/kvrocks | C++, Go, CMake | 2.5k | +10 | 356 | 0 |
terraform-aws-modules/terraform-aws-solutions | HCL, Python | 70 | 0 | 5 | 0 |
cdfmlr/muvtuber | HTML, Python | 300 | 0 | 58 | 0 |