Stars and forks stats for the tatsu-lab/alpaca_eval repository. As of 05 May 2024, this repository has 658 stars and 90 forks.
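Counts like these can be pulled programmatically from the GitHub REST API (`GET /repos/{owner}/{repo}`), whose response includes `stargazers_count` and `forks_count` fields. A minimal sketch, using a hard-coded sample payload in place of a live request (the numbers mirror the stats above; in practice you would fetch the JSON over HTTPS):

```python
import json

# Trimmed sample of the JSON returned by
# GET https://api.github.com/repos/tatsu-lab/alpaca_eval
# (only the fields used below are shown; values are illustrative).
sample = (
    '{"full_name": "tatsu-lab/alpaca_eval",'
    ' "stargazers_count": 658, "forks_count": 90}'
)

def repo_stats(payload: str) -> tuple[int, int]:
    """Extract (stars, forks) from a GitHub repository API response."""
    data = json.loads(payload)
    return data["stargazers_count"], data["forks_count"]

stars, forks = repo_stats(sample)
print(f"{stars} stars, {forks} forks")  # -> 658 stars, 90 forks
```

Unauthenticated requests to this endpoint are rate-limited, so for polling many repositories an API token is advisable.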
AlpacaEval: An Automatic Evaluator for Instruction-following Language Models. Evaluation of instruction-following models (e.g., ChatGPT) typically requires human interaction, which is time-consuming, expensive, and hard to replicate. AlpacaEval is an LLM-based automatic evaluation that is fast, cheap, replicable, and validated against 20K human annotations. It is particularly useful for model development. Although we improved over prior automatic evaluation pipelines, there are still fundamental...
repo | languages | stars | stars (weekly) | forks | forks (weekly) |
---|---|---|---|---|---|
cszn/FFDNet | MATLAB, Other | 412 | 0 | 124 | 0 |
facebookresearch/ijepa | Python | 2.3k | 0 | 369 | 0 |
Victorwz/LongMem | Python, Shell, Cuda | 654 | +4 | 105 | 0 |
uzh-rpg/RVT | Python | 226 | 0 | 29 | 0 |
sinsinology/CVE-2023-20887 | Ruby, Python | 220 | 0 | 44 | 0 |
spyglass-search/spyglass | Rust, HTML, JavaScript | 2.2k | 0 | 45 | 0 |
aorumbayev/autogpt4all | Python, Shell | 350 | +4 | 50 | 0 |
openSIL/openSIL | C, Python, Assembly | 250 | 0 | 16 | 0 |
iburzynski/jambhala | Haskell, Python, Shell | 18 | +1 | 33 | +11 |
shade-econ/nber-workshop-2023 | HTML, Jupyter Notebook, Python | 57 | 0 | 28 | 0 |