uptrain-ai/uptrain

Your open-source LLM evaluation toolkit. Get scores for factual accuracy, context retrieval quality, tonality, and more to understand the quality of your LLM applications.

Topics: Python, machine-learning, monitoring, evaluation, experimentation, autoevaluation, prompt-engineering, llmops, llm-prompting, llm-eval, llm-test
Stars and forks stats for the uptrain-ai/uptrain repository. As of 29 Apr 2024, this repository has 1,585 stars and 131 forks.

Try out Evaluations - Read Docs - Slack Community - Feature Request

UpTrain is an open-source tool to evaluate LLM applications. It provides pre-built metrics to check LLM responses for aspects such as correctness, hallucination, and toxicity, as well as an easy-to-use framework for configuring custom checks.

Pre-built Evaluations We Offer 📝

| Evaluation | Description |
| Factual Accuracy | Checks if the response is grounded by the context provided |
| Guideline... | |
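To make the "grounded by the context" idea concrete, here is a toy sketch of what a grounding-style check measures. This naive token-overlap heuristic is an illustrative assumption, not UpTrain's actual method (UpTrain uses LLM-based grading); the function name and scoring rule are invented for this example.

```python
import re

# Toy grounding check: what fraction of the response's words also
# appear in the context? A real evaluator (like UpTrain's Factual
# Accuracy check) grades claims with an LLM; this is only a sketch.

def grounding_score(context: str, response: str) -> float:
    """Fraction of response tokens that also occur in the context."""
    ctx_tokens = set(re.findall(r"[a-z0-9]+", context.lower()))
    resp_tokens = re.findall(r"[a-z0-9]+", response.lower())
    if not resp_tokens:
        return 0.0
    return sum(t in ctx_tokens for t in resp_tokens) / len(resp_tokens)

context = "The Eiffel Tower is in Paris and was completed in 1889."
grounded = "The Eiffel Tower is in Paris."
ungrounded = "The tower was moved to London in 1999."

print(grounding_score(context, grounded))    # fully grounded -> 1.0
print(grounding_score(context, ungrounded))  # partly grounded -> 0.5
```

A production check would also handle paraphrase and entailment, which is why UpTrain delegates the grading to an LLM rather than relying on lexical overlap.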
| repo | techs | stars | weekly | forks | weekly |
| darrenjw/fp-ssc-course | Scala, Haskell, Makefile | 59 | 0 | 4 | 0 |
| schnommus/eurorack-pmod | SystemVerilog, Python, OpenSCAD | 123 | 0 | 4 | 0 |
| optimass/continual_learning_papers | TeX, Python | 678 | 0 | 81 | 0 |
| stefounet/vimfiles | Vim Script, JavaScript, Python | 1 | 0 | 0 | 0 |
| es0j/CVE-2023-0045 | C, Shell, Python | 13 | 0 | 1 | 0 |
| linux-test-project/ltp | C, Shell, HTML | 2.1k | +3 | 991 | +2 |
| TheD1rkMtr/NTDLLReflection | C++, Python | 278 | 0 | 42 | 0 |
| g3gg0/flipper-swd_probe | C, Pawn, Python | 45 | 0 | 3 | 0 |
| acikkaynak/deprem-yardim-backend | Python, Dockerfile, Shell | 391 | 0 | 82 | 0 |
| TEXTurePaper/TEXTurePaper | Python | 578 | 0 | 50 | 0 |