haotian-liu/LLaVA

Visual Instruction Tuning: Large Language-and-Vision Assistant built towards multimodal GPT-4 level capabilities.

Languages: Python, Shell, JavaScript, HTML, CSS
Topics: chatbot, llama, multimodal, multi-modality, gpt-4, foundation-models, visual-language-learning, chatgpt, instruction-tuning, vision-language-model, llava, llama2, llama-2
This page shows star and fork stats for the haotian-liu/LLaVA repository. As of 25 Apr 2024, it has 7,599 stars and 651 forks.

🌋 LLaVA: Large Language and Vision Assistant

Visual instruction tuning towards large language and vision models with GPT-4 level capabilities.

[Project Page] [Demo] [Data] [Model Zoo]

Improved Baselines with Visual Instruction Tuning [Paper]
Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee

Visual Instruction Tuning (NeurIPS 2023, Oral) [Paper]
Haotian Liu*, Chunyuan Li*, Qingyang Wu, Yong Jae Lee (*Equal Contribution)

Generated by GLIGEN via "a cute lava llama with glasses" and box prompt

Release [10/11]...
| Repo | Techs | Stars | Stars weekly | Forks | Forks weekly |
| --- | --- | --- | --- | --- | --- |
| thomas-yanxin/LangChain-ChatGLM-Webui | Python, Dockerfile | 2.3k | 0 | 355 | 0 |
| rhohndorf/Auto-Llama-cpp | Python, Dockerfile | 333 | 0 | 58 | 0 |
| Doriandarko/BabyAGIChatGPT | Python | 358 | 0 | 56 | 0 |
| nucliweb/webperf-snippets | MDX, JavaScript | 1.2k | 0 | 68 | 0 |
| EzpieCo/ezpie | Astro, TypeScript, JavaScript | 10 | 0 | 15 | 0 |
| tudat-team/sofa-cmake-feedstock | Batchfile, Shell | 1 | 0 | 0 | 0 |
| tudat-team/tudat-resources-feedstock | Batchfile, Shell, Python | 3 | 0 | 1 | 0 |
| noQ-sweden/noq | Java, TypeScript, Bicep | 8 | 0 | 0 | 0 |
| Azure/aca-dotnet-workshop | Bicep, C#, HTML | 57 | 0 | 30 | 0 |
| rvs/planDC | Dockerfile, Shell | 70 | 0 | 1 | 0 |