The performance of small dense models is gaining renewed attention. Could GLM's 32B/40B variants get their chance to shine?
#62
by Eilian - opened
I'm really looking forward to seeing the 32B dense model make a comeback.
Could you name models other than Qwen3.5 27B? Which ones are worth testing?
In my personal experiments, a dense model of the same size (or even 2x smaller than the MoE) always outperforms MoEs. But maybe a 32B dense model costs much more to train than a 32B MoE? For example, I can train at most a 1B dense model on a single A100-40G, but I can train a 4B MoE on the same device.
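The cost gap comes down to active vs. total parameters: a MoE only routes each token through a few experts, so per-token compute and activation memory scale with the active parameter count rather than the total. A rough sketch of that ratio (the 8-expert/top-2 config and 30% shared-weight fraction below are hypothetical, not any GLM or Qwen config):

```python
# Rough illustration: why a 4B-total MoE can be cheaper per step than a
# comparable dense model. All numbers are made up for illustration.

def moe_active_params(total_params: float, shared_frac: float,
                      num_experts: int, top_k: int) -> float:
    """Parameters actually used per token in a top-k-routed MoE."""
    shared = total_params * shared_frac      # attention, embeddings, norms
    expert_pool = total_params - shared      # split evenly across experts
    active_experts = expert_pool * top_k / num_experts
    return shared + active_experts

# 4B total params, ~30% shared weights, 8 experts with top-2 routing:
active = moe_active_params(4e9, 0.3, 8, 2)
print(f"active params per token: {active / 1e9:.2f}B")  # ~1.9B of 4B total
```

Note the caveat: weights and optimizer states still scale with total parameters, so a MoE mostly saves on compute and activation memory, not on parameter storage.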