Ritvik Gaur PRO
ritvik77
AI & ML interests
Trying new things
Recent Activity
published a Space about 23 hours ago
ritvik77/CarMaa
reacted to their post with 🔥 27 days ago
Hi 🤗HF Community,
I would be incredibly grateful for an opportunity to contribute, in any capacity, and learn alongside researchers here. Is there any possibility I could collaborate on or assist with any of your research work?
I’m happy to support ongoing projects, contribute to data analysis, code, documentation, or anything that adds value.
Thank you for your time and consideration!
Warm regards,
Ritvik Gaur
Organizations
None yet
ritvik77's activity

posted an update 4 days ago
Hi 🤗HF Community,
I would be incredibly grateful for an opportunity to contribute, in any capacity, and learn alongside researchers here. Is there any possibility I could collaborate on or assist with any of your research work?
I’m happy to support ongoing projects, contribute to data analysis, code, documentation, or anything that adds value.
Thank you for your time and consideration!
Warm regards,
Ritvik Gaur

posted an update 29 days ago
Hi 🤗HF Community,
I would be incredibly grateful for an opportunity to contribute, in any capacity, and learn alongside researchers here. Is there any possibility I could collaborate on or assist with any of your research work?
I’m happy to support ongoing projects, contribute to data analysis, code, documentation, or anything that adds value.
Thank you for your time and consideration!
Warm regards,
Ritvik Gaur
ritvik77/ContributionChartHuggingFace
It's Ready!
One feature Hugging Face could really benefit from is a contribution heatmap: a visual dashboard that tracks user engagement and contributions across models, datasets, and Spaces over the year, similar to GitHub's contribution graph. Guess what: Clem Delangue mentioned the idea of using the HF API for it, and we built it for everyone to use.
If you are a Hugging Face user, add this Space to your collection and it will give you stats about your contributions and commits, nearly the same as on GitHub. It's still a prototype, and I'm still working on it as a product feature.
Hey, there's one problem I was facing: instead of having permission for restricted Spaces and models, I was bypassing them, as you can see in the code. We can figure out together whether HF can provide an API endpoint specifically for this issue.
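For anyone curious how such a heatmap could be assembled from public data, here is a minimal sketch using the huggingface_hub client; the username and the aggregation logic are illustrative, not the Space's actual implementation.
```python
# Rough sketch of the heatmap idea using the public huggingface_hub API.
from collections import Counter
from huggingface_hub import HfApi

api = HfApi()
username = "ritvik77"  # any Hugging Face username

# Collect all repos (models, datasets, Spaces) owned by the user.
repos = (
    [(m.id, "model") for m in api.list_models(author=username)]
    + [(d.id, "dataset") for d in api.list_datasets(author=username)]
    + [(s.id, "space") for s in api.list_spaces(author=username)]
)

# Count commits per day across all repos, GitHub-heatmap style.
daily_commits = Counter()
for repo_id, repo_type in repos:
    try:
        for commit in api.list_repo_commits(repo_id, repo_type=repo_type):
            daily_commits[commit.created_at.date()] += 1
    except Exception:
        # Gated or restricted repos can fail without the right token.
        continue

for day, count in sorted(daily_commits.items()):
    print(day, "█" * count)
```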

replied to their post about 1 month ago
Thanks for the info, I'll be working on it.

posted an update about 1 month ago
ritvik77/ContributionChartHuggingFace
It's Ready!
One feature Hugging Face could really benefit from is a contribution heatmap: a visual dashboard that tracks user engagement and contributions across models, datasets, and Spaces over the year, similar to GitHub's contribution graph. Guess what: Clem Delangue mentioned the idea of using the HF API for it, and we built it for everyone to use.
If you are a Hugging Face user, add this Space to your collection and it will give you stats about your contributions and commits, nearly the same as on GitHub. It's still a prototype, and I'm still working on it as a product feature.

reacted to burtenshaw's post with ❤️ about 1 month ago
The Hugging Face Agents Course now includes three major agent frameworks!
🔗 agents-course
This includes LlamaIndex, LangChain, and our very own smolagents. We've worked to integrate the three frameworks in distinctive ways so that learners can reflect on when and where to use each.
This also means that you can follow the course if you're already familiar with one of these frameworks, and soak up some of the fundamental knowledge in earlier units.
Hopefully, this makes the agents course open to as many people as possible.

posted an update about 1 month ago
Does anyone remember Wile E. Coyote from the Looney Tunes show? He did it again, but this time by fooling a Tesla! This shows the difference between LiDAR and cameras.
Tesla Autopilot Fails Wile E. Coyote Test, Drives Itself Into Picture of a Road.
For Original Video: https://lnkd.in/g4Qi8fd4

replied to their post about 1 month ago
Big asset firms and tech giants will soon find a way to put a price even on open source.
Big companies are now training huge AI models with tons of data and billions of parameters, and the future seems to be about quantization: making those models smaller by turning big numbers into simpler ones, like going from 32-bit to 8-bit while changing accuracy by no more than +/- 0.01%. There should be some standard unit of measurement for the ratio of model size reduction to accuracy lost.
What do you all think about this?

reacted to nicolay-r's post with 👍 about 2 months ago
📢 With the recent release of Gemma-3: if you're interested in playing with textual chain-of-thought, the notebook below is a wrapper over the model (native transformers inference API) for passing a predefined schema of prompts in batching mode.
https://github.com/nicolay-r/nlp-thirdgate/blob/master/tutorials/llm_gemma_3.ipynb
Limitation: the schema supports text only (for now), while Gemma-3 is text+image to text.
Model: google/gemma-3-1b-it
Provider: https://github.com/nicolay-r/nlp-thirdgate/blob/master/llm/transformers_gemma3.py
Model: google/gemma-3-1b-it
Provider: https://github.com/nicolay-r/nlp-thirdgate/blob/master/llm/transformers_gemma3.py
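Not the notebook's code, but a minimal sketch of what batched, schema-driven prompting over google/gemma-3-1b-it with the native transformers API can look like (assumes a recent transformers release with Gemma 3 support and access to the gated checkpoint; the prompt schema here is just an example):
```python
import torch
from transformers import pipeline

# Text-only Gemma 3 1B instruct; the gated repo may require `huggingface-cli login`.
pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A batch of chat-formatted prompts that all follow one prompt schema.
questions = ["What is 17 * 24?", "Is 97 a prime number?"]
chats = [
    [{"role": "user", "content": f"Think step by step, then answer: {q}"}]
    for q in questions
]

outputs = pipe(chats, max_new_tokens=256, batch_size=2)
for out in outputs:
    # The last message in the returned chat is the model's reply.
    print(out[0]["generated_text"][-1]["content"])
```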

posted an update about 2 months ago
Big companies are now training huge AI models with tons of data and billions of parameters, and the future seems to be about quantization: making those models smaller by turning big numbers into simpler ones, like going from 32-bit to 8-bit while changing accuracy by no more than +/- 0.01%. There should be some standard unit of measurement for the ratio of model size reduction to accuracy lost.
What do you all think about this?
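As a strawman for such a unit, here is a tiny sketch (names and numbers are entirely hypothetical) that expresses a quantization run as "size-reduction factor per percentage point of accuracy lost":
```python
def compression_efficiency(orig_size_gb: float, quant_size_gb: float,
                           orig_acc: float, quant_acc: float) -> float:
    """Hypothetical metric: size-reduction factor per percentage point of accuracy lost."""
    size_ratio = orig_size_gb / quant_size_gb    # e.g. ~4x for FP32 -> INT8
    acc_drop = max(orig_acc - quant_acc, 1e-6)   # avoid division by zero
    return size_ratio / acc_drop

# Example with made-up numbers: a 28 GB FP32 model quantized to 7 GB INT8,
# benchmark accuracy dropping from 71.2% to 70.9%.
print(compression_efficiency(28.0, 7.0, 71.2, 70.9))  # ~13.3x per accuracy point
```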

reacted to burtenshaw's post with 👍 about 2 months ago
Here’s a notebook to make Gemma reason with GRPO & TRL. I made this whilst prepping the next unit of the reasoning course:
In this notebook I combine Google's model with some community tooling:
- First, I load the model from the Hugging Face Hub with transformers' latest release for Gemma 3
- I use PEFT and bitsandbytes to get it running on Colab
- Then, I took Will Brown's processing and reward functions to make reasoning chains from GSM8k
- Finally, I used TRL's GRPOTrainer to train the model
The next step is to bring Unsloth AI in, then ship it in the reasoning course. Link to the notebook below.
https://colab.research.google.com/drive/1Vkl69ytCS3bvOtV9_stRETMthlQXR4wX?usp=sharing
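Not the notebook itself, but a compact sketch of the GRPOTrainer setup it describes. The reward function below is a placeholder (the notebook uses Will Brown's GSM8K processing and reward functions), and the exact checkpoint is an assumption.
```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# GSM8K questions as prompts; the notebook reformats these into reasoning-chain prompts.
dataset = load_dataset("openai/gsm8k", "main", split="train")
dataset = dataset.rename_column("question", "prompt")

def reward_concise(completions, **kwargs):
    # Placeholder reward preferring ~200-character completions; the notebook instead
    # scores reasoning format and answer correctness against the GSM8K solutions.
    return [-abs(200 - len(c)) / 200 for c in completions]

trainer = GRPOTrainer(
    model="google/gemma-3-1b-it",
    reward_funcs=reward_concise,
    args=GRPOConfig(output_dir="gemma3-grpo", per_device_train_batch_size=2),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```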

replied to their post about 2 months ago
Hey @nicolay-r, this is still in the dev phase. I am also trying to heavily quantize a 70B+ parameter LLM with active layering, then fine-tune it again on medical data and benchmarks and get it approved by some doctors and organizations. This way low-end GPUs can also handle it, making it accessible to everyone.