Architectural Backdoors for Within-Batch Data Stealing and Model Inference Manipulation
Abstract
A novel class of backdoors in neural network architectures exploits batched inference to enable large-scale data manipulation, demonstrating information leakage and control over user inputs and outputs, with a proposed mitigation strategy using Information Flow Control.
For nearly a decade the academic community has investigated backdoors in neural networks, primarily focusing on classification tasks where adversaries manipulate the model prediction. While demonstrably malicious, the immediate real-world impact of such prediction-altering attacks has remained unclear. In this paper we introduce a novel and significantly more potent class of backdoors that builds upon recent advancements in architectural backdoors. We demonstrate how these backdoors can be specifically engineered to exploit batched inference, a common technique for hardware utilization, enabling large-scale user data manipulation and theft. By targeting the batching process, these architectural backdoors facilitate information leakage between concurrent user requests and allow attackers to fully control model responses directed at other users within the same batch. In other words, an attacker who can change the model architecture can set and steal model inputs and outputs of other users within the same batch. We show that such attacks are not only feasible but also alarmingly effective, can be readily injected into prevalent model architectures, and represent a truly malicious threat to user privacy and system integrity. Critically, to counteract this new class of vulnerabilities, we propose a deterministic mitigation strategy that provides formal guarantees against this new attack vector, unlike prior work that relied on Large Language Models to find the backdoors. Our mitigation strategy employs a novel Information Flow Control mechanism that analyzes the model graph and proves non-interference between different user inputs within the same batch. Using our mitigation strategy we perform a large scale analysis of models hosted through Hugging Face and find over 200 models that introduce (unintended) information leakage between batch entries due to the use of dynamic quantization.
Community
This paper introduces a novel class of architectural backdoors specifically designed to exploit batched inference in neural networks, enabling attackers to steal data from or manipulate the outputs of other users processed within the same batch. These backdoors are effective and they can be easily injected into ONNX checkpoints of common architectures like Transformers. To counteract this threat, we propose a deterministic mitigation strategy called the "Batch Isolation Checker," which uses Information Flow to analyze the model graph and certify non-interference between different user inputs within a batch.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models (2025)
- BadMoE: Backdooring Mixture-of-Experts LLMs via Optimizing Routing Triggers and Infecting Dormant Experts (2025)
- Memory Under Siege: A Comprehensive Survey of Side-Channel Attacks on Memory (2025)
- I Know What You Said: Unveiling Hardware Cache Side-Channels in Local Large Language Model Inference (2025)
- Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face checkout this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot
recommend
Models citing this paper 0
No model linking this paper
Datasets citing this paper 0
No dataset linking this paper
Spaces citing this paper 0
No Space linking this paper
Collections including this paper 0
No Collection including this paper