Qwen3-VL-4B-Thinking-Unredacted-MAX-GGUF

Qwen3-VL-4B-Thinking-Unredacted-MAX is an abliterated (refusal-reduced) fine-tune of Qwen3-VL-4B-Thinking. The abliteration training is designed to minimize the internal refusal mechanisms that constrain standard vision-language models while preserving the base model's multimodal reasoning. The 4-billion-parameter model processes complex visual inputs and generates detailed, nuanced, contextually rich captions, descriptions, and analyses across artistic, technical, forensic, scientific, and abstract domains. Its unrestricted, high-fidelity outputs suit tasks such as in-depth data annotation, accessibility enhancement, creative storytelling, historical or medical dataset curation, and red-teaming research, while remaining efficient and interpretable enough for researchers, developers, and professionals who need unfiltered reasoning and descriptive capability from a compact vision-language system.

Qwen3-VL-4B-Thinking-Unredacted-MAX [GGUF]

| File Name | Quant Type | File Size |
|---|---|---|
| Qwen3-VL-4B-Thinking-Unredacted-MAX.BF16.gguf | BF16 | 8.05 GB |
| Qwen3-VL-4B-Thinking-Unredacted-MAX.F16.gguf | F16 | 8.05 GB |
| Qwen3-VL-4B-Thinking-Unredacted-MAX.Q8_0.gguf | Q8_0 | 4.28 GB |
| Qwen3-VL-4B-Thinking-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 839 MB |
| Qwen3-VL-4B-Thinking-Unredacted-MAX.mmproj-f16.gguf | mmproj-f16 | 839 MB |
| Qwen3-VL-4B-Thinking-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 454 MB |
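To run a vision-language GGUF, you need both a model file and its matching mmproj (vision projector) file from the table above. A minimal sketch using `huggingface-cli` and llama.cpp's multimodal CLI; the local paths and the image file name are placeholders, adjust them to your setup:

```shell
# Fetch one quant plus its matching mmproj from the Hub.
huggingface-cli download prithivMLmods/Qwen3-VL-4B-Thinking-Unredacted-MAX-GGUF \
  Qwen3-VL-4B-Thinking-Unredacted-MAX.Q8_0.gguf \
  Qwen3-VL-4B-Thinking-Unredacted-MAX.mmproj-q8_0.gguf \
  --local-dir ./models

# Run an image plus a text prompt through llama.cpp's multimodal CLI.
llama-mtmd-cli \
  -m ./models/Qwen3-VL-4B-Thinking-Unredacted-MAX.Q8_0.gguf \
  --mmproj ./models/Qwen3-VL-4B-Thinking-Unredacted-MAX.mmproj-q8_0.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

Note that the mmproj quant does not have to match the model quant exactly, but pairing like with like (e.g. Q8_0 with mmproj-q8_0) keeps memory use predictable.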

Quants Usage

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
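As a rough illustration of the size/quality trade-off, a small helper (hypothetical, not part of any library) can pick the highest-quality quant from the table above that fits a given memory budget, counting both the model file and its matching mmproj:

```python
# Hypothetical helper: pick the largest (highest-quality) quant that fits
# a memory budget. Sizes in GB are taken from the file table above.
QUANTS = [
    # (quant type, model size GB, mmproj size GB), ordered best-first
    ("BF16", 8.05, 0.839),
    ("F16", 8.05, 0.839),
    ("Q8_0", 4.28, 0.454),
]

def pick_quant(budget_gb: float):
    """Return the first quant whose combined model + mmproj size fits."""
    for name, model_gb, mmproj_gb in QUANTS:
        if model_gb + mmproj_gb <= budget_gb:
            return name
    return None  # nothing fits; consider a smaller model

print(pick_quant(6.0))   # -> Q8_0 (4.28 + 0.454 GB fits in 6 GB)
print(pick_quant(16.0))  # -> BF16
```

This is only a sketch: actual memory use at inference time also depends on context length and KV-cache settings, so leave headroom beyond the raw file sizes.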

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Format: GGUF
Model size: 4B params
Architecture: qwen3vl

Model tree for prithivMLmods/Qwen3-VL-4B-Thinking-Unredacted-MAX-GGUF
