Trusted-AI/adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Languages: Python, Other
Topics: python, machine-learning, privacy, ai, attack, extraction, inference, artificial-intelligence, evasion, red-team, poisoning, adversarial-machine-learning, blue-team, adversarial-examples, adversarial-attacks, trusted-ai, trustworthy-ai
Stars and forks stats for the Trusted-AI/adversarial-robustness-toolbox repository: as of 29 Apr 2024, this repository has 4,015 stars and 1,068 forks.

Adversarial Robustness Toolbox (ART) v1.16 (click here for the Chinese README)

Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data). ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow,...