Arize-ai/phoenix

AI Observability & Evaluation - Evaluate, troubleshoot, and fine-tune your LLM, CV, and NLP models in a notebook.

Languages: Python, TypeScript, Jupyter Notebook, Other
Topics: clustering, hacktoberfest, umap, mlops, model-monitoring, ml-monitoring, ai-monitoring, ml-observability, ai-observability, ai-roi, model-observability, llmops, llm-eval
Stars and forks stats for the Arize-ai/phoenix repository. As of 27 April 2024, this repository has 1,525 stars and 95 forks.

Phoenix provides MLOps and LLMOps insights at lightning speed with zero-config observability. It offers a notebook-first experience for monitoring your models and LLM applications by providing:

- LLM Traces - Trace the execution of your LLM application to understand its internals and to troubleshoot problems related to retrieval and tool execution.
- LLM Evals - Leverage the power of large language models to evaluate your generative model or application's...