This is openchat's openchat_3.5, converted to GGUF without quantization. No other changes were made.
The model was converted using convert.py from Georgi Gerganov's llama.cpp repository, as of commit e86fc56.
All credit belongs to openchat for fine-tuning and releasing this model. Thank you!