How Far are LLMs from Being Our Digital Twins? A Benchmark for Persona-Based Behavior Chain Simulation Paper • 2502.14642 • Published Feb 20 • 1
Why Safeguarded Ships Run Aground? Aligned Large Language Models' Safety Mechanisms Tend to Be Anchored in The Template Region Paper • 2502.13946 • Published Feb 19 • 10
Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue Paper • 2402.06967 • Published Feb 10, 2024
TokenSkip: Controllable Chain-of-Thought Compression in LLMs Paper • 2502.12067 • Published Feb 17 • 2
The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters