BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization Paper • 2505.16640 • Published 12 days ago • 2
Automating Safety Enhancement for LLM-based Agents with Synthetic Risk Scenarios Paper • 2505.17735 • Published 11 days ago • 3 • 1
MMMR: Benchmarking Massive Multi-Modal Reasoning Tasks Paper • 2505.16459 • Published 12 days ago • 45
Large Reasoning Models in Agent Scenarios: Exploring the Necessity of Reasoning Capabilities Paper • 2503.11074 • Published Mar 14 • 1
On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective Paper • 2502.14296 • Published Feb 20 • 46
Both Text and Images Leaked! A Systematic Analysis of Multimodal LLM Data Contamination Paper • 2411.03823 • Published Nov 6, 2024 • 50