Active filters: quant
AngelSlim/Hy-MT1.5-1.8B-1.25bit • Translation • Updated • 17.3k downloads • 166 likes
AngelSlim/Hy-MT1.5-1.8B-1.25bit-GGUF • Translation • 2B params • Updated • 6.6k downloads • 36 likes
AngelSlim/Hy-MT1.5-1.8B-2bit-GGUF • Translation • 2B params • Updated • 4.34k downloads • 18 likes
tencent/Hy-MT1.5-1.8B-2bit-GGUF • Translation • 2B params • Updated • 6.13k downloads • 25 likes
tencent/Hy-MT1.5-1.8B-2bit • Translation • 2B params • Updated • 46.8k downloads • 33 likes
tencent/Hy-MT1.5-1.8B-1.25bit-GGUF • Translation • 2B params • Updated • 5.29k downloads • 15 likes
tencent/Hy-MT1.5-1.8B-1.25bit • Translation • Updated • 257 downloads • 26 likes
AngelSlim/Hy-MT1.5-1.8B-2bit • Translation • 2B params • Updated • 924 downloads • 10 likes
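The 1.25-bit and 2-bit variants above trade accuracy for a much smaller weight footprint. As a back-of-the-envelope sketch (a hypothetical helper; it ignores embedding tables, quantization scales/zero-points, and runtime overhead), the approximate on-disk size of a 1.8B-parameter model at these bit-widths works out as follows:

```python
def quantized_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-storage size in GiB for a model quantized to an
    average bits-per-weight. Ignores metadata and per-group scales."""
    return n_params * bits_per_weight / 8 / 2**30

# For a 1.8B-parameter model like the Hy-MT1.5-1.8B variants above:
print(round(quantized_size_gib(1.8e9, 1.25), 2))  # 0.26 (GiB at 1.25-bit)
print(round(quantized_size_gib(1.8e9, 2.0), 2))   # 0.42 (GiB at 2-bit)
print(round(quantized_size_gib(1.8e9, 16.0), 2))  # 3.35 (GiB at FP16)
```

Real GGUF files run somewhat larger than these figures because of per-block scale factors and unquantized embedding/output layers.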
eaddario/Qwen3.6-35B-A3B-GGUF • Image-Text-to-Text • 35B params • Updated • 1.15k downloads • 2 likes
digitous/13B-HyperMantis_GPTQ_4bit-128g • Text Generation • Updated • 9 downloads • 12 likes
pszemraj/nougat-small-onnx-quant_avx2 • Image-Text-to-Text • Updated • 5 downloads
pszemraj/nougat-base-onnx-quant_avx2 • Image-Text-to-Text • Updated • 6 downloads
fhai50032/RolePlayLake-7B-GGUF • 7B params • Updated • 32 downloads • 3 likes
oldbridge/latxa-7b-instruct-q8 • Text Generation • 7B params • Updated • 15 downloads
pszemraj/nougat-small-onnx-quant_avx512_vnni • Image-Text-to-Text • Updated • 5 downloads
RDson/Llama-3-Magenta-Instruct-4x8B-MoE-GGUF • 25B params • Updated • 190 downloads • 1 like
TroyDoesAI/Codestral-21B-Pruned • Text Generation • 21B params • Updated • 11 downloads • 2 likes
mradermacher/Codestral-21B-Pruned-GGUF • 21B params • Updated • 344 downloads
mradermacher/Codestral-21B-Pruned-i1-GGUF • 21B params • Updated • 542 downloads
pszemraj/candle-flanUL2-quantized • Text Generation • 19B params • Updated • 33 downloads
byroneverson/gemma-2-27b-it-abliterated-gguf • Text Generation • 27B params • Updated • 265 downloads • 12 likes
QuantFactory/gemma-2-27b-it-abliterated-GGUF • Text Generation • 27B params • Updated • 744 downloads • 7 likes
EmperorKronos/gemma-2-27b-it-abliterated-exl2 • Text Generation • Updated • 2 downloads
byroneverson/LongWriter-glm4-9b-abliterated-gguf • Text Generation • 9B params • Updated • 13 downloads • 3 likes
(model name missing) • Question Answering • 8B params • Updated • 8 downloads • 4 likes
mradermacher/FinShibainu-GGUF • 8B params • Updated • 131 downloads • 1 like
eaddario/Hammer2.1-7b-GGUF • Text Generation • 8B params • Updated • 5.77k downloads • 2 likes
eaddario/DeepSeek-R1-Distill-Qwen-7B-GGUF • Text Generation • 8B params • Updated • 1.17k downloads • 3 likes