Files and versions (latest commit 014792b, verified: "Update README.md"):

- README.md, 10.9 kB ("Update README.md"), plus a 1.52 kB file from the initial commit
- colab-demo.ipynb, 167 kB ("Upload colab-demo.ipynb")
- thumbnail.png, 510 kB ("Upload thumbnail.png")
- Model weights sharded in 2 GB chunks to run on Colab: two sets of eight shards each (seven shards of 1.89 GB to 1.98 GB and one of 816 MB), a 24 kB shard index per set, and a 628 Byte file from the same commit; see the loading sketch after this listing
- Chat template files, 168 Bytes and 1.43 kB ("Add chat template (#2)")
- Additional files from "Add HuggingFaceH4/mistral-7b-dpo-v0.4 checkpoint": 42 Bytes, 111 Bytes, 195 Bytes, 557 Bytes, 732 Bytes, 104 kB, 493 kB, and 1.8 MB
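The sharded weights exist so the 7B checkpoint fits in the memory of a free Colab instance, and the repo's colab-demo.ipynb presumably walks through the same flow. Below is a minimal loading sketch, not the repo's own demo code: it assumes the repo id HuggingFaceH4/mistral-7b-dpo-v0.4 is accessible and uses only standard transformers APIs (from_pretrained streams the ~2 GB shards one at a time, apply_chat_template uses the template added in #2, and the prompt text is illustrative).

```python
# Minimal sketch: load the sharded checkpoint on a Colab GPU and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/mistral-7b-dpo-v0.4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the 7B weights within a 16 GB Colab GPU
    device_map="auto",          # loads the ~2 GB shards one by one instead of all at once
)

# The chat template added in #2 turns a message list into the model's prompt format.
messages = [{"role": "user", "content": "What is Direct Preference Optimization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The 2 GB shard size matches the max_shard_size argument of save_pretrained in transformers, which is presumably how the checkpoint was re-saved for Colab; the exact command used is not shown in the repo.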
training_args.bin, 5.44 kB ("Add HuggingFaceH4/mistral-7b-dpo-v0.4 checkpoint"). Detected pickle imports (11):

- transformers.trainer_utils.IntervalStrategy
- transformers.trainer_utils.HubStrategy
- transformers.trainer_utils.SchedulerType
- accelerate.state.PartialState
- accelerate.utils.dataclasses.DeepSpeedPlugin
- transformers.integrations.deepspeed.HfDeepSpeedConfig
- torch.device
- accelerate.utils.dataclasses.DistributedType
- h4.training.config.DPOTrainingArguments
- transformers.training_args.OptimizerNames
- accelerate.utils.deepspeed.HfDeepSpeedConfig
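The scanner flags these imports because training_args.bin is a pickled training-arguments object saved with torch.save, and unpickling it imports code from every module listed above. A sketch of how to reproduce the import list without unpickling the file follows; it assumes the file uses torch's zip-based serialization (the default since PyTorch 1.6), where the pickle stream lives in an archive member ending in data.pkl, and that the pickle was written with a protocol that encodes imports as GLOBAL opcodes (torch's default protocol 2 does).

```python
# Sketch: list the modules a torch-saved pickle would import, without unpickling it.
import pickletools
import zipfile

with zipfile.ZipFile("training_args.bin") as archive:
    # torch's zip format stores the pickled object in "<archive name>/data.pkl"
    pkl_name = next(n for n in archive.namelist() if n.endswith("data.pkl"))
    data = archive.read(pkl_name)

imports = set()
for opcode, arg, _pos in pickletools.genops(data):
    if opcode.name == "GLOBAL":            # GLOBAL args look like "module name"
        imports.add(arg.replace(" ", "."))

print(sorted(imports))  # should correspond to the 11 entries the scanner reports
```

Actually loading the file (for example with torch.load) executes that pickle, so it should only be done when the upload source is trusted.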