There has been some excitement over the last week or two around the new model in the Qwen series by Alibaba. Qwen Image is a 20B-parameter MMDiT (Multimodal Diffusion Transformer) model (3 billion more parameters than HiDream), open-sourced under the Apache 2.0 license.
Alongside the core model, it uses the Qwen2.5-VL LLM for text encoding and a specialised VAE (Variational Autoencoder). It can supposedly render readable, multilingual text in much longer passages than previous models, and the VAE is trained to preserve small fonts, text edges and layout. Using Qwen2.5-VL as the text encoder should also mean better language, vision and context understanding.
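For anyone who wants to try the released weights, a minimal text-to-image sketch with Hugging Face diffusers might look roughly like this, assuming a recent diffusers release that includes the Qwen-Image pipeline and a GPU with enough VRAM for the BF16 weights (the prompt and output filename are just placeholders):

```python
import torch
from diffusers import DiffusionPipeline

# Loads the full pipeline: MMDiT, Qwen2.5-VL text encoder and VAE.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
# pipe.enable_model_cpu_offload()  # optional: trade speed for lower VRAM use

image = pipe(
    prompt="A storefront sign that reads 'Grand Opening', photographed at dusk",
    width=1024,
    height=1024,
    num_inference_steps=50,
).images[0]
image.save("qwen_image_demo.png")
```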
… These improvements come at a cost: size. The full BF16 model is 40GB, and the FP16 version of the text encoder adds another 16GB. FP8 versions are more reasonable at 20GB for the model and 9GB for the text encoder. If those sizes are still too large for your setup, there are distilled versions available via links in the ComfyUI guide. City96 has also created various GGUF versions, available for download from Hugging Face. — Read More