miscii-14b-1225

Image source: Rrhar'il | Phigros

Prompting & Usage

See miscii-14b-1028 for more details.
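As a minimal sketch, the model can be loaded and prompted through Hugging Face transformers as below. The system prompt, user message, and generation length are illustrative placeholders, not the recommended configuration from miscii-14b-1028.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sthenno-com/miscii-14b-1225"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint weights are stored in BF16
    device_map="auto",
)

# Placeholder chat; see miscii-14b-1028 for the recommended system prompt.
messages = [
    {"role": "system", "content": "You are miscii, a helpful assistant."},
    {"role": "user", "content": "Briefly introduce yourself."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```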

Training Details

Coming soon

Merge Details

This is a merge of pre-trained language models created using mergekit.

Open LLM Leaderboard Evaluation Results

Congratulations to the miscii series for surpassing an average of 40 points for the first time! As of December 25, 2024, this should be the best-performing 14B model on these benchmarks.

| Metric | Value |
|---|---|
| Avg. | 40.08 |
| IFEval (0-Shot) | 78.78 |
| BBH (3-Shot) | 50.91 |
| MATH Lvl 5 (4-Shot) | 31.57 |
| GPQA (0-shot) | 17.00 |
| MuSR (0-shot) | 14.77 |
| MMLU-PRO (5-shot) | 47.46 |

Refined results (as of 2025-02-15):

| Metric | Value |
|---|---|
| Avg. | 42.35 |
| IFEval (0-Shot) | 78.78 |
| BBH (3-Shot) | 50.91 |
| MATH Lvl 5 (4-Shot) | 45.17 |
| GPQA (0-shot) | 17.00 |
| MuSR (0-shot) | 14.77 |
| MMLU-PRO (5-shot) | 47.46 |
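For reference, the reported averages are consistent with taking the plain arithmetic mean of the six benchmark scores (an assumption about how Avg. is computed, but it matches the figures above):

```python
# Assuming "Avg." is the plain mean of the six benchmark scores.
scores_2024_12_25 = [78.78, 50.91, 31.57, 17.00, 14.77, 47.46]
scores_2025_02_15 = [78.78, 50.91, 45.17, 17.00, 14.77, 47.46]

print(round(sum(scores_2024_12_25) / 6, 2))  # 40.08
print(round(sum(scores_2025_02_15) / 6, 2))  # 42.35
```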
