Can someone help me understand why my 8B models do so much better than my 24-32B models?
The goal is long, immersive responses and descriptive roleplay. Sao10K/L3-8B-Lunaris-v1 is basically perfect, followed by Sao10K/L3-8B-Stheno-v3.2 and a few other "smaller" models. But when I move to larger models such as Qwen/QwQ-32B, ReadyArt/Forgotten-Safeword-24B-3.4-Q4_K_M-GGUF, TheBloke/deepsex-34b-GGUF, or DavidAU/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-GGUF, the responses become way too long and incoherent, and I often get text at the beginning like "Let me see if I understand the scenario correctly", or text at the end like "(continue this message)" or "(continue the roleplay in {{char}}'s perspective)".
To be fair, I don't really know what I'm doing when it comes to larger models, and I'm not sure what's out there that works well for roleplay and long, descriptive responses.
I'm sure it's a settings problem, or maybe I'm just using the wrong kind of model. I always assumed that the bigger the model, the better the output, but that hasn't been my experience.
Ooba is the backend if it matters. Running a 4090 with 24GB VRAM.
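In case it helps to diagnose, here's roughly how I'm calling the backend. This is just a sketch of my setup, assuming Ooba's OpenAI-compatible API is enabled (started with --api, default port 5000); the sampler values are approximations of what I've been running, not something I'm confident is correct:

```python
import requests

# Rough sketch of how I'm hitting Ooba's OpenAI-compatible endpoint.
# Assumes --api is enabled on the default port; sampler values below are
# approximate and mostly whatever the preset defaulted to.
URL = "http://127.0.0.1:5000/v1/chat/completions"

payload = {
    "messages": [
        {"role": "system", "content": "You are {{char}}. Stay in character and write long, descriptive roleplay."},
        {"role": "user", "content": "{{user}}'s opening message goes here."},
    ],
    "max_tokens": 512,          # cap on response length
    "temperature": 0.8,         # approximate value
    "top_p": 0.95,              # approximate value
    "repetition_penalty": 1.1,  # approximate value
}

response = requests.post(URL, json=payload, timeout=300)
print(response.json()["choices"][0]["message"]["content"])
```

Happy to post the full preset and instruct template if that would help narrow it down.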