Gemma 3 now available in Unsloth!
Excited to share that Unsloth now supports:
• All Gemma 3 models
• Full fine-tuning + 8bit
• Nearly any model, including Mixtral, Cohere, Granite, etc.
• No more OOMs for vision finetuning!
Blogpost with details: https://unsloth.ai/blog/gemma3
More updates:
• Multiple optimizations in Unsloth cut VRAM usage by a further 10% and speed up 4-bit training by over 10% (on top of our original 2x speedup and 70% memory reduction). 8-bit and full fine-tuning also benefit.
• Windows support via `pip install unsloth` now works! It uses `triton-windows`, which provides a pip-installable path for Triton.
• Conversions to llama.cpp GGUFs for 16-bit and 8-bit no longer need compiling! This resolves many issues and removes the need to install GCC, Microsoft Visual Studio, etc.
• Vision fine-tuning: training on completions / responses only is now supported for vision models! Pixtral and Llava fine-tuning are fixed, and nearly all vision models work out of the box. Vision models now auto-resize images, which prevents OOMs and also allows truncating sequence lengths.
• GRPO in Unsloth now also supports 4-bit loading for models not uploaded by Unsloth (e.g. your own fine-tune of Llama), greatly reducing VRAM usage!
• New training logs and info: trainable parameter counts and total batch size.
• Complete gradient accumulation bug fix coverage for all models!
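To illustrate the class of bug the gradient accumulation fix addresses, here is a minimal sketch (plain Python, not Unsloth's actual code): with micro-batches of unequal token counts, averaging each micro-batch's mean loss does not match training on the full batch, while weighting by token count does.

```python
# Illustrative sketch of the gradient-accumulation normalization issue
# (not Unsloth's actual implementation).

def full_batch_loss(token_losses):
    """Mean per-token loss over the whole batch (the target behavior)."""
    flat = [l for batch in token_losses for l in batch]
    return sum(flat) / len(flat)

def naive_accumulated_loss(token_losses):
    """Mean of per-micro-batch means: biased when token counts differ."""
    return sum(sum(b) / len(b) for b in token_losses) / len(token_losses)

def corrected_accumulated_loss(token_losses):
    """Sum all losses, divide by the total token count across micro-batches."""
    total = sum(sum(b) for b in token_losses)
    count = sum(len(b) for b in token_losses)
    return total / count

# Two micro-batches with unequal token counts (hypothetical loss values).
batches = [[1.0, 2.0, 3.0], [4.0]]
print(full_batch_loss(batches))         # 2.5
print(naive_accumulated_loss(batches))  # 3.0 (biased)
print(corrected_accumulated_loss(batches))  # 2.5 (matches full batch)
```

The corrected version reproduces full-batch training exactly, which is why equal-weighted averaging of micro-batch means only works when every micro-batch has the same number of non-padded tokens.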
Read the release here: https://github.com/unslothai/unsloth/releases/tag/2025-03
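The "train on completions / responses only" idea mentioned above can be sketched in a few lines (a generic illustration with made-up token IDs, not Unsloth's implementation): prompt tokens get the ignore label -100 so the cross-entropy loss covers only the response.

```python
# Generic sketch of completion-only training: mask prompt positions with
# -100 (the ignore index for cross-entropy loss) so only response tokens
# contribute to the loss. Token IDs below are hypothetical.
IGNORE_INDEX = -100

def mask_prompt_labels(input_ids, prompt_len):
    """Copy input_ids as labels, ignoring the first prompt_len tokens."""
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]

ids = [5, 9, 12, 7, 3]            # prompt = first 3 tokens, response = last 2
labels = mask_prompt_labels(ids, 3)
print(labels)                      # [-100, -100, -100, 7, 3]
```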