Vision-Language Adapters: Parameter-Efficient Multimodal Fine-tuning
Master LoRA, bottleneck adapters, and prefix tuning for parameter-efficient fine-tuning of vision-language models like LLaVA with minimal compute and memory.
8 min read · Concept
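To make the parameter-efficiency idea concrete, here is a minimal, illustrative LoRA sketch in PyTorch (the class name and hyperparameters are assumptions for this example, not LLaVA's actual implementation): the pretrained linear layer is frozen, and only a low-rank update `(alpha/r) * B @ A` is trained.

```python
# Illustrative LoRA sketch: freeze a pretrained linear layer and learn
# only a low-rank additive update. Names/values here are hypothetical.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # A is small random, B is zero, so the update starts as a no-op
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus scaled low-rank correction x A^T B^T
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(64, 64), r=4)
out = layer(torch.randn(2, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

With `r=4` on a 64x64 layer, only 512 parameters train (A and B) instead of the layer's 4,160, which is where the memory and compute savings come from.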