---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- stableLM
- sharded
- 8-bit
- quantized
- tuned
inference: false
---
# stablelm-tuned-alpha-7b-sharded-8bit
This is a sharded checkpoint (with ~4GB shards) of the `stabilityai/stablelm-tuned-alpha-7b` model **in `8bit` precision** using `bitsandbytes`.
Refer to the [original model](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b) for all details about the model. For more info on loading 8-bit models, refer to the [example repo](https://huggingface.co/ybelkada/bloom-1b7-8bit) and/or the `4.28.0` [release info](https://github.com/huggingface/transformers/releases/tag/v4.28.0).
- total model size is only ~7 GB!
- this enables low-RAM loading, e.g. on Colab :)
## Basic Usage
<a href="https://colab.research.google.com/gist/pszemraj/4bd75aa3744f2a02a5c0ee499932b7eb/sharded-stablelm-testing-notebook.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
You can use this model as a drop-in replacement for the standard sharded models in the notebook.
### Python
Install/upgrade `transformers`, `accelerate`, and `bitsandbytes`. For this to work **you must have** `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
```bash
pip install -U -q transformers bitsandbytes accelerate
```
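If you want to double-check that the installed versions meet those minimums, here is a quick sanity check using only the standard library:

```python
from importlib.metadata import version

# both minimums come from the requirement noted above
print("transformers:", version("transformers"))  # needs >= 4.28.0
print("bitsandbytes:", version("bitsandbytes"))  # needs >  0.37.2
```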
Load the model. As it is serialized in 8-bit, you don't need to do anything special:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "ethzanalytics/stablelm-tuned-alpha-7b-sharded-8bit"

# the checkpoint is already quantized, so a plain from_pretrained call works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
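Once loaded, generation works through the standard `generate` API. A minimal sketch; the `<|USER|>`/`<|ASSISTANT|>` prompt format and the sampling parameters below follow the original `stablelm-tuned-alpha-7b` model card, so check there before relying on them:

```python
# minimal generation sketch -- the <|USER|>/<|ASSISTANT|> prompt format
# follows the original stablelm-tuned-alpha-7b model card
prompt = "<|USER|>Write a haiku about sharded checkpoints.<|ASSISTANT|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

You can also confirm the quantized footprint with `model.get_memory_footprint()`, which reports the model's size in bytes.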