Open-Source AI Meetup

community

AI & ML interests

Open science and open source

SFEvent's activity

BrigitteTousi posted an update 26 days ago
AI agents are transforming how we interact with technology, but how sustainable are they? 🌍

Design choices, like model size and structure, can massively impact energy use and cost. ⚡💰 The key takeaway: smaller, task-specific models can be far more efficient than large, general-purpose ones.

🔑 Open-source models offer greater transparency, allowing us to track energy consumption and make more informed decisions on deployment. 🌱 Open source = more efficient, eco-friendly, and accountable AI.

Read our latest, led by @sasha with assists from me + @yjernite 🤗
https://huggingface.co/blog/sasha/ai-agent-sustainability
jeffboudier posted an update 27 days ago
Llama 4 is out, and Scout is already on the Dell Enterprise Hub to deploy on Dell systems 👉 dell.huggingface.co
zamal posted an update 30 days ago
🚀 DeepGit Lite is live! 🔍✨

Hey folks!
Just launched DeepGit Lite, a lighter version of DeepGit with fewer components under the hood.
It won't perform quite like the full powerhouse, but it's great for a quick peek and a first-hand feel! ⚙️👀

Give it a spin and tell us what you think!
👉 Try it here: zamal/DeepGit-lite
#opensource #DeepGit #gradio #githubresearch
jeffboudier posted an update about 1 month ago
Enterprise orgs can now enable serverless Inference Providers for all members:
- includes $2 of free usage per org member (e.g. an Enterprise org with 1,000 members shares $2,000 in free credit each month)
- admins can set a monthly spend limit for the entire org
- works today with Together, fal, Novita, Cerebras, and HF Inference.

Here's the doc to bill Inference Providers usage to your org: https://huggingface.co/docs/inference-providers/pricing#organization-billing
zamal posted an update about 1 month ago
DeepGit: Your GitHub Gold Digger! 💰🚀
Hey Hugging Face gang! Meet DeepGit, my open-source sidekick that rips through GitHub to snag repos that fit you. Done with dead-end searches? Me too. Built it with LangGraph and some dope tricks:
- Embeddings grab the good stuff (HF magic, baby!)
- Re-ranking nails the best picks
- Snoops docs, code, and buzz in one slick flow
- Drops a clean list of hidden gems 💎

Unearth that sneaky ML lib or Python gem: run python app.py or langgraph dev and boom! Peek it at https://github.com/zamalali/DeepGit. Fork it, tweak it, love it. Docker's in, HF vibes are strong. Drop a 🌟 or a crazy idea, I'm pumped to jam with you all! 🪂
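For the curious, the retrieve-then-rerank flow described above can be sketched in a few lines of plain Python. This is a hypothetical toy, not DeepGit's actual code: real embedding models and signals (docs, code, community buzz) are stood in for by bag-of-words vectors and a star-count bonus.

```python
# Toy sketch of a retrieve-then-rerank repo search (hypothetical, NOT DeepGit's
# actual implementation): embed descriptions, shortlist by cosine similarity,
# then re-rank the shortlist with a popularity signal.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words token counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, repos: dict, stars: dict, k: int = 2) -> list:
    q = embed(query)
    # Stage 1: rank every repo description by embedding similarity.
    scored = sorted(((cosine(q, embed(d)), name) for name, d in repos.items()),
                    reverse=True)[: 2 * k]
    # Stage 2: re-rank the shortlist, blending in a "buzz" (star-count) bonus.
    reranked = sorted(scored, reverse=True,
                      key=lambda sn: sn[0] + 0.01 * math.log1p(stars[sn[1]]))
    return [name for _, name in reranked[:k]]

repos = {
    "tiny-ml": "a tiny machine learning library in python",
    "web-scraper": "scrape websites fast",
    "ml-utils": "machine learning utilities and helpers for python",
}
stars = {"tiny-ml": 50, "web-scraper": 5, "ml-utils": 900}
print(search("machine learning library python", repos, stars))
```

In the real tool the two stages use learned embeddings and a cross-encoder-style re-ranker; the shape of the pipeline is the same.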
BrigitteTousi posted an update about 2 months ago
Whether or not X is down, so glad I can rely on HF Posts for AI news ❤️🤗
zamal posted an update 2 months ago
🚀 ftBoost is LIVE – Stop Struggling with Fine-Tuning Data!

Alright folks, if you're tired of manually crafting fine-tuning datasets, ftBoost is here to do the heavy lifting. One-click, LangChain-Groq-powered data augmentation that scales your training data in OpenAI, Gemini, Mistral, and LLaMA formats, automatically.

🔥 What's inside?
✅ Smart Augmentations – Paraphrasing, back translation, synonym swapping & synthetic noise.
✅ No more JSONL headaches – Auto-formats everything for OpenAI, Gemini, Mistral & LLaMA.
✅ Custom tuning – Adjust similarity, diversity, and fluency in real time.
✅ Upload, generate, download – That's it.
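As a rough illustration of the formatting step (a hypothetical sketch, not ftBoost's actual code), here is what serializing augmented prompt/completion pairs into OpenAI-style chat fine-tuning JSONL looks like; writers for the Gemini, Mistral, and LLaMA formats would follow the same pattern with different record schemas.

```python
# Hypothetical sketch of the JSONL auto-formatting step: each augmented
# prompt/completion pair becomes one OpenAI-style chat fine-tuning record
# per line. The augmentation itself (LangChain-Groq in ftBoost) is stubbed.
import json

def to_openai_jsonl(pairs):
    lines = []
    for prompt, completion in pairs:
        record = {"messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

# Each augmented variant (paraphrase, back translation, ...) is one line.
pairs = [
    ("Translate 'hello' to French.", "Bonjour."),
    ("Translate 'goodbye' to French.", "Au revoir."),
]
print(to_openai_jsonl(pairs))
```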

⚡ If you're fine-tuning LLMs, this will save you hours.

🚀 Try it now 👉 zamal/Finetune-Boost

🌟 Give us a star on GitHub!

Let me know what you think & how it boosts your workflow! 🔥
zamal posted an update 3 months ago
🚀 Try Out the RAG Demo! 🚀

A Hugging Face Space where you can compare DeepSeek-R1 vs. Llama-3 using a "stuff" RAG (Retrieval-Augmented Generation) chain!

πŸ” Upload a PDF, ask questions, and see how both models perform in real-time!

Try it out now:
zamal/Deepseek-R1-vs-LLama3
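For anyone new to the term, a "stuff" chain simply stuffs every retrieved chunk into a single prompt. A minimal sketch, with retrieval reduced to keyword overlap and the DeepSeek-R1/Llama-3 model call stubbed out (this is illustrative, not the demo's code):

```python
# Minimal "stuff" RAG sketch: retrieve the top-k chunks for a question and
# stuff them all into one prompt. Retrieval here is naive keyword overlap;
# a real pipeline would use embeddings and then call an LLM with the prompt.
def retrieve(question: str, chunks: list, k: int = 2) -> list:
    qwords = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(qwords & set(c.lower().split())))[:k]

def stuff_prompt(question: str, chunks: list) -> str:
    context = "\n\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n\n{context}\n\nQuestion: {question}"

chunks = [
    "The mitochondria is the powerhouse of the cell.",
    "Paris is the capital of France.",
    "RAG combines retrieval with generation.",
]
prompt = stuff_prompt("What is the capital of France?", chunks)
print(prompt)
```

The same prompt would then be sent to each model, which is what makes a side-by-side comparison like the Space's fair: both see identical context.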
zamal posted an update 4 months ago
zamal/Multimodal-Chat-PDF

🚀 Introducing Chat PDF Multimodal 💬

Interact with your PDF documents like never before! 🤯
Extract text & images, then ask context-aware questions based on both. Powered by RAG techniques & multimodal LLMs. Perfect for studying, research & more! 📝👀
Try it out now! ✍️

#LlavaNext #MultimodalAI #Transformers
BrigitteTousi posted an update 4 months ago
Community fine-tuned models are more carbon-efficient than the models they are derived from! 🥳🌿

@alozowski @clefourrier @SaylorTwift @albertvillanova evaluated the CO₂ emissions associated with model inference for over 3,000 models on the Open LLM Leaderboard. Interesting trends and new insights emerged... 👀

Blog Post: https://huggingface.co/blog/leaderboard-emissions-analysis

Leaderboard: open-llm-leaderboard/open_llm_leaderboard
jeffboudier posted an update 4 months ago
NVIDIA just announced the Cosmos World Foundation Models, available on the Hub: nvidia/cosmos-6751e884dc10e013a0a0d8e6

Cosmos is a family of pre-trained models purpose-built for generating physics-aware videos and world states to advance physical AI development.
The release also includes tokenizers: nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6

Learn more in this great community article by @mingyuliutw and @PranjaliJoshi https://huggingface.co/blog/mingyuliutw/nvidia-cosmos
jeffboudier posted an update 5 months ago
BrigitteTousi posted an update 5 months ago
zamal posted an update 6 months ago
🚀 An announcement for the lovely community! 🚀

Just launched zamal/DeepSeek-VL-1.3B-Chat on Hugging Face, and it's ready for YOU to explore! 💬🖼️

This full-fledged model is perfect for advanced image and text interactions, with zero GPU required. DeepSeek VL-1.3B Chat typically needs around 8 GB of VRAM and almost 4 GB of storage, but now you can experience it hassle-free right in our Space!

Want something lighter? We've also uploaded a 4-bit quantized version (just around 1 GB!), available on my profile. Perfect for those with limited hardware. 🌍🔍

Come try it now and see what this model can do! 🚀✨

zamal posted an update 7 months ago
Hello, lovely community! 🌟

zamal/Molmo-4bit

Thrilled to announce that the Molmo 7B 4-bit Space is now live! 🚀 The model size has been reduced six-fold with almost no performance loss, and the results will leave you amazed!

It runs on zero GPU, making it incredibly accessible for everyone!

Check it out here and start exploring today!

Happy experimenting! πŸŽ‰
jeffboudier posted an update 7 months ago
zamal posted an update 7 months ago
🚀 New Model Release: zamal/Molmo-7B-GPTQ-4bit 🚀

Hello lovely community,

The zamal/Molmo-7B-GPTQ-4bit model is now available for all! It has been heavily quantized, reducing its size by almost six times. It now occupies significantly less space and VRAM, making it perfect for deployment on resource-constrained devices without compromising performance.

Now we get:
- Efficient performance: maintains high accuracy while heavily quantized.
- Reduced size: the model is nearly six times smaller, optimizing storage and memory usage.
- Versatile application: ideal for integrating a powerful visual language model into various projects, particularly multimodal RAG chains.
Check it out!
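As a back-of-the-envelope illustration of where a roughly six-fold shrink can come from, here is a toy absmax 4-bit scheme (not the actual GPTQ algorithm, which calibrates quantization against layer activations): map fp32 weights to 4-bit integers with a per-group scale and pack two codes per byte.

```python
# Toy absmax 4-bit weight quantization (illustrative only, NOT GPTQ):
# each group of weights shares one fp32 scale; codes are packed 2 per byte.
def quantize_4bit(weights, group=32):
    packed, scales = bytearray(), []
    for i in range(0, len(weights), group):
        block = weights[i:i + group]
        scale = max(abs(w) for w in block) / 7 or 1.0  # symmetric int4 range -7..7
        scales.append(scale)
        q = [max(-7, min(7, round(w / scale))) for w in block]
        for j in range(0, len(q), 2):
            lo = q[j] & 0x0F                            # low nibble (two's complement)
            hi = (q[j + 1] & 0x0F) if j + 1 < len(q) else 0
            packed.append(lo | (hi << 4))               # two 4-bit codes per byte
    return bytes(packed), scales

weights = [0.31, -0.12, 0.05, 0.44, -0.27, 0.18, -0.05, 0.09] * 4  # 32 fake weights
packed, scales = quantize_4bit(weights)
fp32_bytes = len(weights) * 4                           # 4 bytes per fp32 weight
quant_bytes = len(packed) + len(scales) * 4             # packed codes + fp32 scales
print(f"fp32: {fp32_bytes} B, 4-bit: {quant_bytes} B, "
      f"{fp32_bytes / quant_bytes:.1f}x smaller")
```

The per-group scale is the overhead term: larger groups approach the ideal 8x fp32-to-int4 ratio, which is why real 4-bit checkpoints land around the "almost six times" the post mentions.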
