Sinisa Stanivuk

Stopwolf

AI & ML interests

Multilingual LLMs, STT and TTS models

Recent Activity

liked a model 29 days ago
capleaf/viXTTS
liked a model about 2 months ago
EuroBERT/EuroBERT-610m

Organizations

Intellya Data Science Team · Data Is Better Together Contributor

Stopwolf's activity

upvoted an article about 1 month ago

Training and Finetuning Reranker Models with Sentence Transformers v4

reacted to onekq's post with 🔥 3 months ago
πŸ‹DeepSeek πŸ‹ is the real OpenAI 😯
upvoted an article 4 months ago

Train 400x faster Static Embedding Models with Sentence Transformers

reacted to nataliaElv's post with 👀 5 months ago
Would you like to get a high-quality dataset to pre-train LLMs in your language? 🌏

At Hugging Face we're preparing a collaborative annotation effort to build an open-source multilingual dataset as part of the Data is Better Together initiative.

Follow the link below, check if your language is listed and sign up to be a Language Lead!

https://forms.gle/s9nGajBh6Pb9G72J6
reacted to prithivMLmods's post with 🔥🚀 7 months ago
I've recently been experimenting with the Flux-Ultra Realism and Real Anime LoRA models, using Flux.1-dev as the base model. The models and their demo examples are provided in the Flux LoRA DLC collections. 📃

🥳 Demo: 🔗 prithivMLmods/FLUX-LoRA-DLC

🥳 Models:
- prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0
- prithivMLmods/Flux-Dev-Real-Anime-LoRA

πŸ₯³For more details, please visit the README.md of the Flux LoRA DLC Space & prithivMLmods/lora-space-collections-6714b72e0d49e1c97fbd6a32
reacted to tomaarsen's post with 🔥 7 months ago
📣 Sentence Transformers v3.2.0 is out, marking the biggest release for inference in 2 years! It adds 2 new backends for embedding models, ONNX (+ optimization & quantization) and OpenVINO, allowing for speedups of up to 2x-3x, AND Static Embeddings for 500x speedups at a 10-20% accuracy cost.

1️⃣ ONNX Backend: This backend uses the ONNX Runtime to accelerate model inference on both CPU and GPU, reaching up to a 1.4x-3x speedup depending on the precision. We also introduce 2 helper methods for optimizing and quantizing models for (much) faster inference.
2️⃣ OpenVINO Backend: This backend uses Intel's OpenVINO instead, outperforming ONNX in some situations on CPU.

Usage is as simple as SentenceTransformer("all-MiniLM-L6-v2", backend="onnx"). Does your model not have an ONNX or OpenVINO file yet? No worries - it'll be autoexported for you. Thank me later πŸ˜‰

🔒 Another major new feature is Static Embeddings: think word embeddings like GloVe and word2vec, but modernized. Static Embeddings are bags of token embeddings that are summed together to create text embeddings, allowing for lightning-fast embeddings that don't require any neural networks. They're initialized in one of 2 ways:

1️⃣ via Model2Vec, a new technique for distilling any Sentence Transformer model into static embeddings. Either via a pre-distilled model with from_model2vec or with from_distillation where you do the distillation yourself. It'll only take 5 seconds on GPU & 2 minutes on CPU, no dataset needed.
2️⃣ Random initialization. This requires finetuning, but finetuning is extremely quick (e.g. I trained with 3 million pairs in 7 minutes). My final model was 6.6% worse than bge-base-en-v1.5, but 500x faster on CPU.
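The "bag of token embeddings summed together" idea can be sketched in a few lines of plain NumPy. This is a toy illustration with a made-up 5-word vocabulary and random vectors, not the actual sentence-transformers implementation; in a real model the table would come from Model2Vec distillation or finetuning:

```python
import numpy as np

# Toy vocabulary and a random static embedding table (vocab_size x dim).
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
rng = np.random.default_rng(0)
embedding_table = rng.standard_normal((len(vocab), 64))

def encode(text: str) -> np.ndarray:
    """Static embedding: look up each token's vector and sum them.
    No neural network runs at inference time, just table lookups."""
    token_ids = [vocab[tok] for tok in text.lower().split() if tok in vocab]
    return embedding_table[token_ids].sum(axis=0)

sentence_vec = encode("the cat sat on the mat")
print(sentence_vec.shape)  # (64,)
```

Because encoding is only indexing and a sum, throughput is bounded by memory lookups rather than matrix multiplies, which is where the 500x CPU speedup comes from.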

Full release notes: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.2.0
Documentation on Speeding up Inference: https://sbert.net/docs/sentence_transformer/usage/efficiency.html