Andrea Soria PRO

asoria

AI & ML interests

Maintainer of 🤗Datasets: Data processing

Recent Activity

updated a dataset 4 days ago
asoria/dataset-notebook-creator-content
updated a Space 10 days ago
asoria/AlfredAgent
published a Space 10 days ago
asoria/AlfredAgent

Organizations

Hugging Face, BigScience Data, Datasets Maintainers, Blog-explorers, Enterprise Explorers, ZeroGPU Explorers, Datasets examples, Women on Hugging Face, Dev Mode Explorers, Hugging Face Discord Community, AI Developers from Latin America, Datasets Topics, AI Starter Pack

asoria's activity

reacted to cfahlgren1's post with 🔥🚀 3 months ago
We just dropped an LLM inside the SQL Console 🤯

The amazing new Qwen/Qwen2.5-Coder-32B-Instruct model can now write SQL for any Hugging Face dataset ✨

It's 2025, you shouldn't be hand-writing SQL! This is a big step toward letting anyone do in-depth analysis on a dataset. Let us know what you think 🤗
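You can run the same kind of query locally, too. A small sketch (DuckDB can read Hugging Face datasets through hf:// paths; the dataset id and file layout below are placeholders):

import duckdb

result = duckdb.sql(
    "SELECT COUNT(*) AS num_rows "
    "FROM 'hf://datasets/username/my-dataset/data/train-00000-of-00001.parquet'"
).df()
print(result)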
posted an update 5 months ago
🚀 Exploring Topic Modeling with BERTopic 🤖

When you come across an interesting dataset, you often wonder:
Which topics frequently appear in these documents? 🤔
What is this data really about? 📊

Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.

I’ve been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. BERTopic's modularity is what makes it powerful: you can swap any component for your preferred algorithm. It also handles large datasets efficiently by merging models via the BERTopic.merge_models approach. 🔗

🔍 How do we make this work?
Here’s the stack we’re using:

📂 Data Source ➡️ Hugging Face datasets with DuckDB for retrieval
🧠 Text Embeddings ➡️ Sentence Transformers (all-MiniLM-L6-v2)
⚡ Dimensionality Reduction ➡️ RAPIDS cuML UMAP for GPU-accelerated performance
🔍 Clustering ➡️ RAPIDS cuML HDBSCAN for fast clustering
✂️ Tokenization ➡️ CountVectorizer
🔧 Representation Tuning ➡️ KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
🌍 Visualization ➡️ Datamapplot library
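A minimal sketch of how these pieces plug into BERTopic (CPU umap-learn and hdbscan stand in here for the RAPIDS cuML versions; the docs list is a placeholder for a real corpus):

from bertopic import BERTopic
from bertopic.representation import KeyBERTInspired
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from umap import UMAP
from hdbscan import HDBSCAN

docs = ["..."]  # in practice: a text column fetched from a Hugging Face dataset with DuckDB

topic_model = BERTopic(
    embedding_model=SentenceTransformer("all-MiniLM-L6-v2"),
    umap_model=UMAP(n_neighbors=15, n_components=5, metric="cosine"),
    hdbscan_model=HDBSCAN(min_cluster_size=15, prediction_data=True),
    vectorizer_model=CountVectorizer(stop_words="english"),
    representation_model=KeyBERTInspired(),
)
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info())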
Check out the space and see how you can quickly generate topics from your dataset: datasets-topics/topics-generator

Powered by @MaartenGr - BERTopic
reacted to celinah's post with ❤️ 5 months ago
📣 huggingface_hub v0.26.0 is out with some new features and improvements!

✨ Top Highlights:
- 🔐 Multiple access tokens support: Easily manage multiple access tokens with new CLI commands. Perfect for handling multiple tokens with specific permissions in production or when collaborating with external teams.
- 🖼️ Conversational VLM inference is now supported with InferenceClient's chat completion (example at the end of this post)!
- 📄 Daily Papers API: Seamlessly search and retrieve detailed paper information from the Hub!

We’ve also introduced multiple bug fixes and quality-of-life improvements - thanks to the awesome contributions from our community! 🤗

Check out the release notes here: Wauplin/huggingface_hub#9

and you can try it out now 👇
pip install huggingface_hub==0.26.0
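
For the VLM highlight, usage looks roughly like this (a sketch; the model id and image URL are placeholders):

from huggingface_hub import InferenceClient

client = InferenceClient()  # assumes a saved token, e.g. via `huggingface-cli login`
response = client.chat_completion(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # placeholder VLM
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }],
    max_tokens=100,
)
print(response.choices[0].message.content)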

reacted to davidberenstein1957's post with 🔥 6 months ago
reacted to anakin87's post with 👍 6 months ago
🕵🏻 Agentic RAG with 🦙 Llama 3.2

I was excited to explore Llama 3.2, but as a simple 🇪🇺 EU guy, I don't have access to Meta's multimodal models 😿

🤔 So I thought: why not challenge the small 3B text model with Agentic RAG?

🎯 The plan:
- Build a system that tries to answer questions using a knowledge base.
- If the documents don't contain the answer, use Web search for additional context.
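
In sketch form (framework-agnostic pseudocode; the notebook wires this up with Haystack components, and the names here are illustrative):

def answer_with_fallback(question, retrieve, web_search, generate):
    # Try the local knowledge base first
    docs = retrieve(question)
    answer = generate(question, docs)
    # The prompt instructs the model to reply "no_answer" when the docs don't help
    if "no_answer" in answer:
        docs = web_search(question)  # fall back to web context
        answer = generate(question, docs)
    return answer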


Check out my experimental notebook here: 📓 https://colab.research.google.com/github/deepset-ai/haystack-cookbook/blob/main/notebooks/llama32_agentic_rag.ipynb


My stack:
🏗️ haystack (https://haystack.deepset.ai/): open-source LLM orchestration framework
🦙 meta-llama/Llama-3.2-3B-Instruct
🦆🌐 free DuckDuckGo API, integrated with Haystack

✨ The results? Encouraging - a few months ago, this level of performance from a small model would've been unthinkable!
This probably reflects the impressive IFEval score of the model (comparable to Llama 3.1 8B).
posted an update 6 months ago
🚀 Excited to share the latest update to the Notebook Creator Tool!

Now with basic fine-tuning support through SFT (Supervised Fine-Tuning)! 🎯

How it works:
1️⃣ Choose your Hugging Face dataset and notebook type (SFT)
2️⃣ Automatically generate your training notebook
3️⃣ Start fine-tuning with your data!
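
The generated SFT notebook boils down to something like this (a sketch assuming TRL; the dataset and model ids below are placeholders):

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("username/my-dataset", split="train")  # placeholder repo id
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # placeholder base model; any small causal LM works
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output", max_steps=100),
)
trainer.train()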

Link to the app 👉 https://lnkd.in/e_3nmWrB
💡 Want to contribute with new notebooks? 👉https://lnkd.in/eWcZ92dS
reacted to m-ric's post with 👀 6 months ago
Are Agents capable enough for Data Science? ⇒ Measure their performance with DSBench 📊

A team from Tencent AI wanted to evaluate agentic systems on data science (DS) tasks, but they noticed that existing agentic benchmarks were severely limited in several ways: they covered only text without tables or images, were specific to certain packages, and performed only exact-match evaluation…

➡️ So they set out to build a much more exhaustive approach, to finally make the definitive DS agent benchmark.

The DSBench dataset
▪️ DSBench has 466 data analysis tasks and 74 data modelling tasks
▪️The tasks are sourced from ModelOff and Kaggle, the platforms hosting the most popular data science competitions
▪️Difference with previous DS benchmarks:
❶ This benchmark leverages various modalities on top of text: images, Excel files, tables
❷ Complex tables: sometimes several tables should be leveraged to answer one question
❸ The context is richer, with longer descriptions.
▪️ Evaluation metrics: the benchmark is scored with an LLM as a judge, using a specific prompt.
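
Generically, LLM-as-a-judge scoring has this shape (a sketch; not the paper's actual prompt or rubric, and the judge model is a placeholder):

from huggingface_hub import InferenceClient

def judge(question, reference, prediction):
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {prediction}\n"
        "Does the candidate answer match the reference? Answer YES or NO."
    )
    client = InferenceClient()
    reply = client.text_generation(
        prompt, model="meta-llama/Meta-Llama-3-8B-Instruct", max_new_tokens=4
    )
    return reply.strip().upper().startswith("YES")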

Insights from evaluating agents
▪️ Their evaluation confirms that using LLMs in an agent setup, for instance by allowing them to run a single step of code execution, is more costly (especially with multi-turn frameworks like autogen) but also much more performant than the vanilla LLM.
▪️ The sets of tasks solved by different models (like GPT-3.5 vs Llama-3-8B) have quite low overlap, which suggests that different models tend to try very different approaches.

This new benchmark is really welcome; I can't wait to try transformers agents on it! 🤗

Read their full paper 👉 DSBench: How Far Are Data Science Agents to Becoming Data Science Experts? (2409.07703)
posted an update 6 months ago
I've been working on a Space to make it super easy to create notebooks and help users quickly understand and manipulate their data!
With just a few clicks, automatically generate notebooks for:

📊 Exploratory Data Analysis
🧠 Text Embeddings
🤖 Retrieval-Augmented Generation (RAG)

✨ Automatic training is coming soon!
Check it out here asoria/auto-notebook-creator
Appreciate any feedback to improve this tool 🤗
reacted to davanstrien's post with 🚀 7 months ago
🚀 Introducing Hugging Face Similar: a Chrome extension to find relevant datasets!

✨ Adds a "Similar Datasets" section to Hugging Face dataset pages
🔍 Recommendations based on dataset READMEs
🏗️ Powered by https://huggingface.co/chromadb and https://huggingface.co/Snowflake embeddings.

You can try it here: https://chromewebstore.google.com/detail/hugging-face-similar/aijelnjllajooinkcpkpbhckbghghpnl?authuser=0&hl=en.

I am very happy to get feedback on whether this could be useful or not 🤗
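
The underlying similarity search is easy to reproduce in a few lines (an illustrative sketch, not the extension's actual code; Chroma's default embedder stands in for the Snowflake models):

import chromadb

client = chromadb.Client()
readmes = client.create_collection("dataset-readmes")
readmes.add(
    ids=["squad", "imdb"],
    documents=[
        "SQuAD is a reading comprehension dataset built from Wikipedia articles.",
        "Large Movie Review Dataset for binary sentiment classification.",
    ],
)
print(readmes.query(query_texts=["question answering benchmark"], n_results=1)["ids"])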
reacted to m-ric's post with 🚀 8 months ago
Agentic Data analyst: drop your data file, let the LLM do the analysis 📊⚙️

Need to make quick exploratory data analysis? ➡️ Get help from an agent.

I was impressed by Llama-3.1's capacity to derive insights from data. Given a CSV file, it makes quick work of exploratory data analysis and can surface interesting insights.

On the data from the Kaggle Titanic challenge, which records which passengers survived the Titanic's sinking, it was able by itself to derive interesting trends like "passengers that paid higher fares were more likely to survive" or "survival rate was much higher for women than men".

The cookbook even lets the agent build its own submission to the challenge, and it ranks in the top 3,000 out of 17,000 submissions: 👏 not bad at all!

Try it for yourself in this Space demo 👉 m-ric/agent-data-analyst
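
Trends like those are easy to check by hand (a sketch assuming Kaggle's train.csv with its Survived, Fare, and Sex columns):

import pandas as pd

df = pd.read_csv("train.csv")  # Kaggle Titanic training data
print(df.groupby(pd.qcut(df["Fare"], 4))["Survived"].mean())  # survival rate by fare quartile
print(df.groupby("Sex")["Survived"].mean())                   # survival rate by sex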
reacted to albertvillanova's post with 🔥 10 months ago
Easily convert your script-based datasets to Parquet and explore them in the dataset viewer. 🌟

🛠️ Use @huggingface Datasets CLI:
$ datasets-cli convert_to_parquet USERNAME/DATASET_NAME

Learn more: https://huggingface.co/docs/datasets/main/en/cli#convert-to-parquet
#Data #AI
reacted to davanstrien's post with 🔥 10 months ago
In my ongoing quest to learn more about building synthetic datasets, I've created an "Awesome Synthetic Datasets" list.

The aim is to lightly curate a collection of resources, tutorials, and tools for generating synthetic datasets using large language models.

I plan to add some "key techniques" to the repo, but for now, it focuses on important datasets, papers, and tools.

🔗 https://github.com/davanstrien/awesome-synthetic-datasets
reacted to tomaarsen's post with ❤️ about 1 year ago
🤗 Sentence Transformers v2.4.0 for embedding models is now out! It introduces a lot of powerful features, such as:

1. Matryoshka Loss function - you can now train & perform inference on 🪆 Matryoshka Embedding models. See also our blogpost: https://huggingface.co/blog/matryoshka

2. CoSENTLoss & AnglELoss: State of the art loss functions. These are quite interesting, they outperform CosineSimilarityLoss on nearly all benchmarks as a drop-in replacement! See also the docs: https://sbert.net/docs/package_reference/losses.html#cosentloss

3. Prompt templates: Many popular models such as intfloat/multilingual-e5-large and BAAI/bge-large-en-v1.5 prefix their texts with prompts, so this release adds configuration options to include them automatically: model.encode(..., prompt_name="query") applies the prompt registered under the name "query". More info in the docs: https://sbert.net/examples/applications/computing-embeddings/README.html#prompt-templates (see the short sketch at the end of this post)

4. Instructor support: Support for the INSTRUCTOR line of models, such as hkunlp/instructor-large. Learn how to use them here: https://sbert.net/docs/pretrained_models.html#instructor-models

5. Removed NLTK & sentencepiece dependencies: Should allow for a smaller installation & a slightly faster import!

6. Updated documentation: a new Loss Overview section: https://sbert.net/docs/training/loss_overview.html and more detailed loss functions: https://sbert.net/docs/package_reference/losses.html

And much more! See the full release notes here: https://github.com/UKPLab/sentence-transformers/releases/tag/v2.4.0

Some more very exciting updates are still on the horizon!
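
For example, the prompt templates from item 3 work like this in practice (a minimal sketch; it assumes the model's Hub config registers a prompt named "query", as the e5 models do):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large")
# Prepends the prompt registered under "query" (e.g. "query: ") before encoding
embedding = model.encode("How do Matryoshka embeddings work?", prompt_name="query")
print(embedding.shape)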