Social Post Explorers

Recent Activity

merterbak 
posted an update 1 day ago
Microsoft released its new fine-tuned Phi-4 models with reasoning data yesterday. They outperform or rival much larger models. Check them out if you haven't yet. 🚀

Phi-4 mini reasoning (SFT): microsoft/Phi-4-mini-reasoning
Phi-4 reasoning (SFT): microsoft/Phi-4-reasoning
Phi-4 reasoning plus (SFT + RL): microsoft/Phi-4-reasoning-plus
Demo: https://github.com/marketplace/models/azureml/Phi-4-reasoning/playground
Papers: https://arxiv.org/pdf/2504.21318
https://arxiv.org/pdf/2504.21233
Blog: https://azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/
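
A quick way to poke at one of these checkpoints, as a minimal sketch with 🤗 Transformers (the prompt and generation settings are my own illustration, not from Microsoft's release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Reasoning models emit their chain of thought, so leave room for it
messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```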

MikeDoes 
posted an update 3 days ago
PII-Masking-1M Final Day (7/7)! 🚀 Today, we unveil 5 NEW Enterprise PII (E-PII) Dataset PREVIEWS!

Standard PII tools often miss sensitive *business* data. That's why we built E-PII previews for the data that powers your operations and compliance needs.

Get a first look into datasets designed for real-world enterprise security (each preview represents 100,000 samples!) across these categories:

🏥 **PHI Preview:** For Healthcare Data
💳 **PFI Preview:** For Financial Data
🏢 **PWI Preview:** For Workplace Data
💻 **PDI Preview:** For Digital Activity Data
📍 **PLI Preview:** For Location Data


That wraps up our 7-day #PIIMasking1M announcement series! HUGE thanks for following along and for your engagement.
Explore ALL our releases, including these E-PII previews, in the Ai4Privacy Hugging Face Collection, and show some love ❤️ if you find them useful!
🔗 Visit the Collection: https://huggingface.co/ai4privacy

Let's keep building safer AI, together!
merterbak 
posted an update 4 days ago
Qwen3 models released 🔥
The release offers 2 MoE and 6 dense models with the following parameter sizes: 0.6B, 1.7B, 4B, 8B, 14B, 30B (MoE), 32B, and 235B (MoE).
Models: Qwen/qwen3-67dd247413f0e2e4f653967f
Blog: https://qwenlm.github.io/blog/qwen3/
Demo: Qwen/Qwen3-Demo
GitHub: https://github.com/QwenLM/Qwen3

✅ Pre-trained on 36 trillion tokens across 119 languages and dialects, with strong translation and instruction-following abilities. (Qwen2.5 was pre-trained on 18 trillion tokens.)
✅ Qwen3 dense models match the performance of larger Qwen2.5 models. For example, Qwen3-1.7B/4B/8B/14B/32B perform on par with Qwen2.5-3B/7B/14B/32B/72B.
✅ Pre-training was done in three stages:
• Stage 1: General language learning and knowledge building.
• Stage 2: Reasoning boost with STEM, coding, and logic skills.
• Stage 3: Long-context training.
✅ Built-in support for MCP (Model Context Protocol).
✅ Strong agent skills.
✅ Supports seamless switching between thinking mode (for hard tasks like math and coding) and non-thinking mode (for fast chatting) inside the chat template; see the sketch below.
✅ Better human alignment for creative writing, roleplay, multi-turn conversations, and following detailed instructions.
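
A minimal sketch of that thinking-mode toggle (the enable_thinking flag follows the Qwen3 model cards; the checkpoint choice and prompt are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"  # any Qwen3 checkpoint; chosen here for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Is 9.11 larger than 9.9?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # set False for fast, non-thinking chat
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```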
merterbak 
posted an update 8 days ago
FlowReasoner is a new system that builds a custom set of small AI agents for every user query. Unlike search-based methods, it uses reasoning-driven optimization with external execution feedback.

✅ First, it distills reasoning data from DeepSeek-R1-671B for building multi-agent systems. 🤖
✅ Then, that reasoning data is used to fine-tune DeepSeek-R1-Distill-Qwen-7B via supervised fine-tuning for basic reasoning skills. 💡
✅ Finally, RL with GRPO (which optimizes by comparing groups of responses sampled for the same query/task) further improves reasoning; see the sketch below.

FlowReasoner: Reinforcing Query-Level Meta-Agents (2504.15257)
Code: https://github.com/sail-sg/flowreasoner
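
To make the GRPO step concrete, here is a minimal sketch of the group-relative advantage it optimizes; this is my own illustration of the general technique, not FlowReasoner's code:

```python
import numpy as np

def grpo_advantages(group_rewards, eps=1e-8):
    """Group-relative advantages: each response is scored against the
    mean/std of the group sampled for the same query (no learned critic)."""
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# e.g. execution-feedback rewards for 4 multi-agent systems built for one query
print(grpo_advantages([1.0, 0.0, 0.5, 1.0]))  # positive = better than group average
```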
davidberenstein1957 
posted an update 10 days ago
🔥 Announcing FLUX-Juiced: The Fastest Image Generation Endpoint (2.6x faster)!

Optimisations are widely applied and can reduce inference time, but their impact on quality often remains unclear. So we decided to challenge the status quo and create our own optimised version of FLUX.1 [dev], called FLUX-juiced.

Blog: https://huggingface.co/blog/PrunaAI/flux-fastest-image-generation-endpoint
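
For reference, a minimal sketch of the stock FLUX.1 [dev] baseline in diffusers that an optimised endpoint like this gets compared against (FLUX-juiced itself is served as an inference endpoint, so this is just the unoptimised starting point; prompt and settings are illustrative):

```python
import torch
from diffusers import FluxPipeline

# Stock FLUX.1 [dev]; FLUX-juiced layers inference optimisations on top of this
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

image = pipe(
    "a misty forest at dawn",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("baseline.png")
```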
merterbak 
posted an update 14 days ago
Here's a cool paper I found: "Massive Image Embedding Benchmark (MIEB)." It's a new benchmark for testing how good image embedding models are, with 130 different tasks grouped into 8 categories: image search, classification, clustering similar images, answering questions based on images, understanding documents, and more. It even covers 38 different languages.

The authors tested 50 models and found that no single model was best at everything. Some models were great at recognizing text inside images but struggled with more complicated tasks, like jointly matching images and the text that appears with them.

Paper: https://arxiv.org/pdf/2504.10471v1
Code: https://github.com/embeddings-benchmark/mteb
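
A minimal sketch of the mteb evaluation loop the benchmark plugs into (the model and task names here are illustrative stand-ins; MIEB's image tasks need an embedding model that can also encode images):

```python
import mteb
from sentence_transformers import SentenceTransformer

# Illustrative model; MIEB's image tasks need a model that can embed images too
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

tasks = mteb.get_tasks(tasks=["STS22"])  # swap in MIEB task names here
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```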
davidberenstein1957 
posted an update 17 days ago
RealHarm: A Collection of Real-World Language Model Application Failures

I'm David from Giskard, where we work on securing your agents.
Today, we are launching RealHarm: a dataset of real-world problematic interactions with AI agents, drawn from publicly reported incidents.

Check out the dataset and paper: https://realharm.giskard.ai/
tomaarsen 
posted an update 18 days ago
I just released Sentence Transformers v4.1, featuring ONNX and OpenVINO backends for rerankers (offering 2-3x speedups) and improved hard negatives mining, which helps prepare stronger training datasets. Details:

🏎️ ONNX, OpenVINO, Optimization, Quantization
- I've added ONNX and OpenVINO support with just one extra argument: "backend" when loading the CrossEncoder reranker, e.g.: CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2", backend="onnx")
- The export_optimized_onnx_model, export_dynamic_quantized_onnx_model, and export_static_quantized_openvino_model functions now work with CrossEncoder rerankers, allowing you to optimize (e.g. fusions, gelu approximations, etc.) or quantize (int8 weights) rerankers.
- I've uploaded ~340 ONNX & OpenVINO models for all existing models under the cross-encoder Hugging Face organization. You can use these without having to export when loading.
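
Putting that one-argument backend switch together, a minimal sketch (requires the ONNX extras, e.g. pip install sentence-transformers[onnx]; the query and passages are illustrative):

```python
from sentence_transformers import CrossEncoder

# One extra argument switches the inference backend
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2", backend="onnx")

query = "How many people live in Berlin?"
passages = [
    "Berlin has a population of roughly 3.7 million people.",
    "Berlin is well known for its museums and nightlife.",
]
scores = model.predict([(query, p) for p in passages])
print(scores)  # higher score = more relevant to the query
```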

⛏ Improved Hard Negatives Mining
- Added 'absolute_margin' and 'relative_margin' arguments to mine_hard_negatives.
- absolute_margin ensures that sim(query, negative) < sim(query, positive) - absolute_margin, i.e. an absolute margin between the negative & positive similarities.
- relative_margin ensures that sim(query, negative) < sim(query, positive) * (1 - relative_margin), i.e. a relative margin between the negative & positive similarities.
- Inspired by the excellent NV-Retriever paper from NVIDIA.
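
And a minimal sketch of the new margin arguments in action (toy data; the column names and num_negatives value are illustrative):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import mine_hard_negatives

# Toy (query, answer) pairs; real mining uses thousands of pairs
dataset = Dataset.from_dict({
    "query": [
        "What is the capital of France?",
        "Who wrote Hamlet?",
        "How tall is Mount Everest?",
    ],
    "answer": [
        "Paris is the capital of France.",
        "Hamlet was written by William Shakespeare.",
        "Mount Everest is 8,849 metres tall.",
    ],
})
model = SentenceTransformer("all-MiniLM-L6-v2")

mined = mine_hard_negatives(
    dataset,
    model,
    anchor_column_name="query",
    positive_column_name="answer",
    absolute_margin=0.1,  # keep sim(q, neg) < sim(q, pos) - 0.1
    num_negatives=1,
)
```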

And several other small improvements. Check out the full release notes here: https://github.com/UKPLab/sentence-transformers/releases/tag/v4.1.0

With this release, I introduce near-feature parity between the SentenceTransformer embedding & CrossEncoder reranker models, which I've wanted to do for quite some time! With rerankers very strongly supported now, it's time to look forward to other useful architectures!

merterbak 
posted an update 19 days ago
OpenAI published 2 benchmark datasets on Hugging Face 🔥
openai/mrcr
openai/graphwalks
MRCR tests how well a model can find the right answer when many similar questions are spread out in a long context. Graphwalks checks whether a model can follow steps in a big graph and find the correct nodes by reasoning through the structure.
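
Both ship as standard Hugging Face datasets, so a first look is one call away (a minimal sketch; exact configs and splits follow the dataset cards):

```python
from datasets import load_dataset

# Inspect what's inside; follow the dataset cards for configs/splits
mrcr = load_dataset("openai/mrcr")
graphwalks = load_dataset("openai/graphwalks")
print(mrcr)
print(graphwalks)
```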
leonardlin 
posted an update 19 days ago
Happy to announce the release of Shisa V2, the latest generation of our bilingual Japanese-English language models. After hundreds of ablations and months of work, we're releasing some of the strongest open Japanese models at 7B, 8B, 12B, 14B, 32B, and 70B! Full announcement here: https://shisa.ai/posts/shisa-v2/ or visit the Shisa V2 HF collection: shisa-ai/shisa-v2-67fc98ecaf940ad6c49f5689
merterbak 
posted an update 22 days ago
OpenAI has released BrowseComp, an open-source benchmark designed to evaluate the web browsing capabilities of AI agents. The dataset comprises 1,266 questions that challenge AI models to navigate the web and uncover complex, obscure information. Crafted by human trainers, the questions are intentionally difficult: unsolvable by another person in under ten minutes, and beyond the reach of existing models like ChatGPT (with and without browsing) and an early version of OpenAI's Deep Research tool.

Blog Post: https://openai.com/index/browsecomp/
Paper: https://cdn.openai.com/pdf/5e10f4ab-d6f7-442e-9508-59515c65e35d/browsecomp.pdf
Code in simple eval repo: https://github.com/openai/simple-evals
takarajordan 
posted an update 24 days ago
🎌 Two months in, https://github.com/takara-ai/go-attention has passed 429 stars on GitHub.

We built this library at takara.ai to bring attention mechanisms and transformer layers to Go — in a form that's lightweight, clean, and dependency-free.

We’re proud to say that every part of this project reflects what we set out to do.

- Pure Go — no external dependencies, built entirely on the Go standard library
- Core support for DotProductAttention and MultiHeadAttention
- Full transformer layers with LayerNorm, feed-forward networks, and residual connections
- Designed for edge, embedded, and real-time environments where simplicity and performance matter

Thank you to everyone who has supported this so far — the stars, forks, and feedback mean a lot.
giux78 
posted an update 25 days ago
The LLaMA 4 release highlights the importance of political and social bias. According to Meta's own evaluation, described in the release blog post:
- Refusals on contentious prompts dropped from 7% (#LLaMA 3.3) to under 2%
- Unequal response refusals are now under 1%
- Political lean bias is said to be halved compared to #LLaMA 3.3 and comparable to Grok

However, @efederici, @mferraretto, @FinancialSupport and I released an independent open-source benchmark some weeks ago, called Propaganda, to measure political bias in LLMs: https://github.com/mii-llm/propaganda

In the chart below, we evaluated multiple leading models on the basis of ratings across a range of prompts designed to expose ideological leanings.

Despite Meta's stated neutrality goals, LLaMA 4 ranks at the very top in total ratings aligned with a clear ideological bias. The models were tested on their ability to respond even-handedly to politically sensitive prompts, and LLaMA 4 scored even higher than models known for strong alignment policies, like GPT-4o.

LLMs may be refusing less, but they still show bias through content framing. This suggests that refusal rates alone are not a sufficient measure of ideological bias. Relying solely on internal evaluations from AI labs also raises concerns about transparency and objectivity.