ZrrSkywalker/LLaMA-Adapter

Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters

As of 27 Apr 2024, this repository has 52 stars and 3 forks.

LLaMA-Adapter: Efficient Fine-tuning of LLaMA 🚀

The official codebase has been transferred to OpenGVLab/LLaMA-Adapter for better follow-up maintenance!

Citation

If you find our LLaMA-Adapter code and paper useful, please kindly cite:

@article{zhang2023llamaadapter,
  title  = {LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention},
  author = {Zhang, Renrui and Han, Jiaming and Zhou, Aojun and Hu, Xiangfei and Yan, Shilin and Lu, Pan and Li, Hongsheng and Gao, Peng and Qiao, ...}
}
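The "Zero-init Attention" in the paper title refers to gating the adapter's contribution with a learnable factor initialized to zero, so training starts from the frozen pretrained model's unchanged behavior. A minimal, dependency-free sketch of that gating idea (the function name and the tanh squashing here are illustrative assumptions, not the repository's actual API):

```python
import math

def zero_init_gated_add(frozen_out, adapter_out, gate=0.0):
    """Combine a frozen branch with an adapter branch.

    `gate` is a learnable scalar that starts at 0.0, so at
    initialization the output equals the frozen model's output
    and the adapter cannot disturb early training.
    """
    scale = math.tanh(gate)  # squash the gate into (-1, 1)
    return [f + scale * a for f, a in zip(frozen_out, adapter_out)]

# At gate=0 the adapter branch is fully suppressed:
print(zero_init_gated_add([1.0, 2.0], [9.0, 9.0]))  # [1.0, 2.0]
```

As the gate is trained away from zero, the adapter's attention output is gradually blended in, which is what lets the method fine-tune with only ~1.2M extra parameters.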