arxiv:2505.10167

QuXAI: Explainers for Hybrid Quantum Machine Learning Models

Published on May 15
· Submitted by AlignAI on May 16
Abstract

The emergence of hybrid quantum-classical machine learning (HQML) models opens new horizons of computational intelligence, but their fundamental complexity frequently leads to black-box behavior that undermines transparency and reliability in their application. Although XAI for quantum systems is still in its infancy, a major research gap is evident in robust global and local explainability approaches designed for HQML architectures that employ quantized feature encoding followed by classical learning. This gap is the focus of this work, which introduces QuXAI, a framework built upon Q-MEDLEY, an explainer for feature importance in these hybrid systems. Our approach entails creating HQML models that incorporate quantum feature maps, applying Q-MEDLEY, which combines feature-based inferences while preserving the quantum transformation stage, and visualizing the resulting attributions. Our results show that Q-MEDLEY delineates influential classical features in HQML models, separates them from noise, and competes well with established XAI techniques in classical validation settings. Ablation studies further expose the strengths of the composite structure used in Q-MEDLEY. The implications of this work are significant: it provides a route to improving the interpretability and reliability of HQML models, promoting greater confidence and enabling safer, more responsible use of quantum-enhanced AI technology.
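To make the hybrid setup concrete, here is a minimal sketch of the kind of pipeline the abstract describes: classical features pass through a quantum feature map before a classical learner sees them. The single-qubit angle encoding and the function names below are illustrative assumptions for this sketch, not the paper's exact encodings or architecture.

```python
import numpy as np

def angle_encode(x):
    """Toy quantum feature map: encode each classical feature as a
    single-qubit RY rotation and return the product-state amplitudes.
    (Illustrative only; the paper's feature maps may differ.)"""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])  # RY(xi)|0>
        state = np.kron(state, qubit)
    return state

def quantum_features(X):
    """Map every sample through the feature map; the measurement
    probabilities then serve as inputs to any classical learner."""
    return np.array([angle_encode(x) ** 2 for x in X])

X = np.array([[0.1, 0.7], [1.2, 0.3]])
Phi = quantum_features(X)  # shape (2, 4): 2 samples, 2 qubits -> 4 basis states
```

Because each encoded sample is a normalized quantum state, every row of `Phi` sums to 1; an explainer for such a pipeline must attribute importance to the original classical features, not to the transformed columns of `Phi`.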

Community

Paper author Paper submitter

🚀 What it's about:

As quantum machine learning (QML) evolves, hybrid quantum-classical models (HQMLs) have become central. But they're hard to interpret: they make decisions via complex transformations that span both classical and quantum domains. This paper introduces QuXAI, a framework designed to make HQMLs more interpretable and trustworthy. At its core is Q-MEDLEY, a novel explainer that attributes global feature importance while respecting the hybrid data flow from classical inputs through quantum encodings to classical learners.

🧠 Key contributions:

✅ Q-MEDLEY: An explainer combining Drop-Column and Permutation Importance, tailored for HQMLs that use quantum feature encoding.

🧪 Full pipeline (QuXAI): Data prep → HQML model training → explanation → visualization, all adapted to quantum settings.

📊 Visual explanations: Clear bar-chart visualizations of feature importance help researchers understand what matters.

๐Ÿ” Evaluated against classical ground truths using interpretable models (e.g., decision trees) to validate explanation fidelity.

🧪 Ablation studies confirm that interaction-aware and adaptive components boost Q-MEDLEY's performance.
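The Drop-Column/Permutation combination behind the first bullet can be sketched in plain Python. This is a hedged illustration of the general idea only: a toy nearest-centroid learner stands in for a real HQML model (whose quantum encoding stage would be kept intact), and `medley_importance` is a hypothetical name, not the paper's actual Q-MEDLEY implementation with its interaction-aware and adaptive components.

```python
import numpy as np

def fit_centroids(X, y):
    """Tiny nearest-centroid classifier standing in for an HQML learner."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def accuracy(centroids, X, y):
    preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in X]
    return np.mean(np.array(preds) == y)

def medley_importance(X, y, n_repeats=10, seed=0):
    """Sketch of a combined score: average Drop-Column importance
    (retrain without the feature) and Permutation importance (shuffle
    the feature, reuse the trained model) for each input feature."""
    rng = np.random.default_rng(seed)
    base = accuracy(fit_centroids(X, y), X, y)
    scores = []
    for j in range(X.shape[1]):
        # Drop-Column: retrain without feature j, measure accuracy loss.
        Xd = np.delete(X, j, axis=1)
        drop = base - accuracy(fit_centroids(Xd, y), Xd, y)
        # Permutation: shuffle feature j in place, reuse trained model.
        model = fit_centroids(X, y)
        perm = 0.0
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])
            perm += base - accuracy(model, Xp, y)
        scores.append((drop + perm / n_repeats) / 2)
    return np.array(scores)

# One informative feature, one pure-noise feature:
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),  # informative
                     rng.standard_normal(100)])           # noise
imp = medley_importance(X, y)  # imp[0] clearly exceeds imp[1]
```

Averaging the two estimators is one simple way to combine them; the appeal, as the bullet notes, is that the informative feature scores high while the noise feature scores near zero, which matches the noise-separation behavior reported for Q-MEDLEY.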

📌 Why it matters:

HQMLs are promising but opaque. QuXAI is a critical step toward trustworthy, interpretable, and safe quantum AI. Understanding which classical features drive decisions after quantum transformation is key for debugging, trust, and scientific insight.

