Built using the method described at https://arca.live/b/alpaca/118261066?p=2.

  • lcw99/wikipedia-korean-20240501-1million-qna
  • MarkrAI/KOpen-HQ-Hermes-2.5-60K
  • garage-bAInd/Open-Platypus
  • rwkv-x-dev/openorca-gpt4
  • gbharti/finance-alpaca
  • Data I created myself

The training set was built by sampling the datasets above in suitable proportions.
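The sampling step above can be sketched as follows. The per-source sample sizes, the seed, and the `my-own-data` name are illustrative assumptions; the card does not state the actual proportions or recipe.

```python
import random

# Hypothetical per-source sample sizes (illustrative only; the card
# does not state the actual proportions used).
SAMPLE_SIZES = {
    "lcw99/wikipedia-korean-20240501-1million-qna": 40_000,
    "MarkrAI/KOpen-HQ-Hermes-2.5-60K": 30_000,
    "garage-bAInd/Open-Platypus": 20_000,
    "rwkv-x-dev/openorca-gpt4": 20_000,
    "gbharti/finance-alpaca": 10_000,
    "my-own-data": 10_000,  # placeholder name for the author's own data
}

def sample_rows(rows, k, seed=42):
    """Deterministically sample up to k rows from one source dataset."""
    if len(rows) <= k:
        return list(rows)
    return random.Random(seed).sample(rows, k)

def build_mixture(sources, sizes=SAMPLE_SIZES, seed=42):
    """Concatenate samples from each source into one training set.

    `sources` maps dataset name -> list of example dicts.
    """
    mixture = []
    for name, rows in sources.items():
        mixture.extend(sample_rows(rows, sizes.get(name, len(rows)), seed))
    return mixture
```

A fixed seed keeps the mixture reproducible across runs; sources smaller than their target size are taken whole.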

It achieves the highest LogicKor score.

This model was not trained with DPO.
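A minimal inference sketch for this model, assuming the standard `transformers` chat-template workflow for Llama-3.1 instruct models; the system prompt and generation settings are placeholders, not values from the card.

```python
def build_messages(user_prompt, system_prompt="You are a helpful assistant."):
    """Chat messages in the layout expected by tokenizer.apply_chat_template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate(user_prompt, max_new_tokens=256):
    """Load the model and generate a reply.

    Requires `transformers` and `torch`, plus enough GPU memory
    for an 8B-parameter model.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "lIlBrother/Llama-3.1-8B-Instruct-KoEn-FFT-Merge"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```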

Model size: 8.03B params (Safetensors, tensor type F32)
