Article: Fine-tuning LLMs to 1.58bit: extreme quantization made easy • By medmekk and 5 others • Sep 18, 2024
Think Only When You Need with Large Hybrid-Reasoning Models Paper • 2505.14631 • Published 14 days ago
Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation Paper • 2404.15100 • Published Apr 23, 2024
UncertaintyRAG: Span-Level Uncertainty Enhanced Long-Context Modeling for Retrieval-Augmented Generation Paper • 2410.02719 • Published Oct 3, 2024
Mixture_of_LoRA_Experts Collection The official collection of trained LoRA candidates for "Mixture of LoRA Experts" by Xun Wu, Shaohan Huang, and Furu Wei • 30 items • Updated Nov 5, 2024