laiyer-ai/llm-guard

The Security Toolkit for LLM Interactions

PythonMakefilepromptsecurity-toolsadversarial-machine-learningadversarial-attackslarge-language-modelsllmprompt-engineeringchatgptllmopsprompt-injection
This is stars and forks stats for /laiyer-ai/llm-guard repository. As of 29 Apr, 2024 this repository has 263 stars and 27 forks.

LLM Guard - The Security Toolkit for LLM Interactions LLM Guard by Laiyer.ai is a comprehensive tool designed to fortify the security of Large Language Models (LLMs). Documentation | Demo Production Support / Help for companies We're eager to provide personalized assistance when deploying your LLM Guard to a production environment. Send Email ✉️ What is LLM Guard? By offering sanitization, detection of harmful language, prevention of data leakage, and resistance against prompt injection attacks,...
Read on GithubGithub Stats Page
repotechsstarsweeklyforksweekly
posit-dev/r-shinyliveRPython61+740
leonow32/verilog-fpgaVerilogPythonBatchfile17010
grafana/beylaCGoRuby6540350
samtools/htslibCPerlMakefile7200481+3
S3cur3Th1sSh1t/Caro-KannCNimC++1950210
YOLOP0wn/POSTDumpC#Python1820220
punica-ai/punicaCudaPythonC++46+26+1
ashleykleynhans/stable-diffusion-dockerDockerfileShellHTML960290
sigma/codexEmacs LispMakefile10010
salaboy/platforms-on-k8sHTMLJavaScriptGo480190