# πŸš€ TinyBERT Encoder Model

This is a fine-tuned **TinyBERT Encoder** model, optimized for lightweight NLP tasks.

## πŸ”Ή Use This Model

To use this model with the `transformers` library, run:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "hjsgfd/my_tinybert_encoder"  # Replace with your actual repo name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode text
text = "TinyBERT is small but powerful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

print(outputs.last_hidden_state)  # Per-token hidden states, shape [batch, seq_len, hidden]
```
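
If you need one fixed-size vector per sentence (for example, for search or clustering), a common pattern is to mean-pool `last_hidden_state` over the attention mask. The sketch below illustrates that convention, continuing from the snippet above; the pooling choice is an assumption of this example, not something prescribed by the checkpoint:

```python
import torch

# Continues from the snippet above: `model` and `inputs` are already defined.
with torch.no_grad():
    outputs = model(**inputs)

# Mean pooling: average token embeddings, ignoring padding positions.
mask = inputs["attention_mask"].unsqueeze(-1).float()          # [batch, seq_len, 1]
summed = (outputs.last_hidden_state * mask).sum(dim=1)         # [batch, hidden]
sentence_embedding = summed / mask.sum(dim=1).clamp(min=1e-9)  # [batch, hidden]

print(sentence_embedding.shape)
```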


Alternatively, you can load the model with the `sentence-transformers` library to get sentence-level embeddings directly:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("hjsgfd/my_tinybert_encoder")
embeddings = model.encode("This is an example sentence.")
print(embeddings)
```
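
Embeddings produced this way can be compared with cosine similarity, for example for semantic search. Below is a minimal sketch using `sentence_transformers.util.cos_sim`; the example sentences are made up for illustration:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical example sentences; replace with your own data.
sentences = [
    "TinyBERT is small but powerful.",
    "This encoder is lightweight.",
    "I had pasta for dinner.",
]

model = SentenceTransformer("hjsgfd/my_tinybert_encoder")
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities between all sentences.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```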
## πŸ”Ή Model Details

- **Format:** Safetensors
- **Parameters:** ~14.4M
- **Tensor type:** F32
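
To confirm the parameter count listed above locally, a quick sketch (assuming the model loads as in the examples earlier):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("hjsgfd/my_tinybert_encoder")

# Count parameters of the loaded encoder; the card lists ~14.4M.
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```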