NextGenC committed
Commit 9d85150 · verified · 1 Parent(s): 69251cc

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ Using advanced 4-bit quantization techniques, Erynn delivers impressive performance
  > 📄 **License Note**: This model is governed by a customized MIT-based license. Please refer to the included `LICENSE.md` file for details. Usage of this model implies acceptance of the license terms.
 
 
- > 📄 **Use Note**: Use Note: Erynn 1 is a LoRA adapter trained on top of the openai-community/gpt2-large base model, which is required to run it. To get started with Erynn 1, you first need to obtain the base model weights from the openai-community/gpt2-large repository; it is recommended to download the pytorch_model.bin file along with other necessary configuration and tokenizer files. Alternatively, you can simply let the Hugging Face transformers library download and cache the base model automatically when you load it (recommended approach). Once the base model is ready, download the LoRA adapter files for Erynn 1 from the 'Files' tab in this repository. To load the gpt2-large base model and apply the downloaded Erynn 1 adapter correctly, please refer to the example Python code provided within this repository, which typically illustrates how to use the Hugging Face PEFT library for this purpose. Thank you for your support, from 7enn Labs!
+ > 📄 **Use Note**: Erynn 1 is a LoRA adapter trained on top of the openai-community/gpt2-large base model, which is required to run it. To get started with Erynn 1, you first need to obtain the base model weights from the openai-community/gpt2-large repository; it is recommended to download the pytorch_model.bin file along with other necessary configuration and tokenizer files. Alternatively, you can simply let the Hugging Face transformers library download and cache the base model automatically when you load it (recommended approach). Once the base model is ready, download the LoRA adapter files for Erynn 1 from the 'Files' tab in this repository. To load the gpt2-large base model and apply the downloaded Erynn 1 adapter correctly, please refer to the example Python code provided within this repository, which typically illustrates how to use the Hugging Face PEFT library for this purpose. Thank you for your support, from 7enn Labs!
 
  ## 🌟 Key Capabilities
 
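
For reference, the workflow described in the Use Note above (load the openai-community/gpt2-large base model, then apply the Erynn 1 LoRA adapter via the Hugging Face PEFT library) might look roughly like the sketch below. The adapter repository id `NextGenC/Erynn-1` and the variable names are illustrative assumptions, not confirmed by this commit; the example Python code shipped in the repository remains the authoritative loading procedure.

```python
# Minimal sketch of loading gpt2-large and applying a LoRA adapter with PEFT.
# NOTE: "NextGenC/Erynn-1" is a placeholder adapter repo id -- replace it with the
# actual path of the Erynn 1 adapter files from this repository's 'Files' tab.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openai-community/gpt2-large"   # base weights; transformers downloads and caches them
adapter_id = "NextGenC/Erynn-1"           # hypothetical adapter location

# Load tokenizer and base causal LM.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Wrap the base model with the LoRA adapter weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

# Quick generation check.
inputs = tokenizer("Hello, Erynn.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```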