NExT-GPT/NExT-GPT

Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model

Topics: Python, Shell, multimodal, gpt-4, foundation-models, visual-language-learning, large-language-models, llm, chatgpt, instruction-tuning, multi-modal-chatgpt
These are the stars and forks stats for the NExT-GPT/NExT-GPT repository. As of 29 Apr 2024, this repository has 1,999 stars and 206 forks.

NExT-GPT: Any-to-Any Multimodal LLM

Shengqiong Wu, Hao Fei*, Leigang Qu, Wei Ji, and Tat-Seng Chua. (*Correspondence)
NExT++, School of Computing, National University of Singapore

This repository hosts the code, data, and model weights of NExT-GPT, the first end-to-end MM-LLM that perceives input and generates output in arbitrary combinations (any-to-any) of text, image, video, audio, and beyond.

πŸŽ‰ News

- [2023.09.15] πŸš€πŸš€ Released the code of NExT-GPT in version 7b_tiva_v0.
- [2023.09.27] πŸ”¨πŸ§© Added...