A Graph Perspective to Probe Structural Patterns of Knowledge in Large Language Models
Abstract
This study explores the structural patterns of knowledge in large language models from a graph perspective, uncovering knowledge homophily and developing graph machine learning models to estimate entity-level knowledge.
Large language models have been extensively studied as neural knowledge bases for their knowledge access, editability, reasoning, and explainability. However, few works focus on the structural patterns of their knowledge. Motivated by this gap, we investigate these structural patterns from a graph perspective. We quantify the knowledge of LLMs at both the triplet and entity levels, and analyze how it relates to graph structural properties such as node degree. Furthermore, we uncover knowledge homophily, where topologically close entities exhibit similar levels of knowledgeability, which motivates us to develop graph machine learning models that estimate an entity's knowledgeability from its local neighbors. This model further enables valuable knowledge checking by selecting triplets less known to LLMs. Empirical results show that fine-tuning on these selected triplets leads to superior performance.
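To make the setup concrete, here is a minimal, hedged sketch (not the authors' code) of the probing pipeline described above: triplet-level knowledgeability scores are aggregated to entity level over a knowledge graph, and homophily is quantified as the correlation between an entity's score and the mean score of its neighbors. The toy graph, the random placeholder scores (standing in for actual LLM probe results), and the neighbor-average proxy are all assumptions for illustration; the paper's approach instead trains graph machine learning models on real probe scores.

```python
# Hedged sketch of knowledge homophily measurement on a toy knowledge graph.
# Triplet scores are random placeholders; in practice they would come from
# querying the LLM about each triplet and scoring its answer.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge graph: entities as nodes, triplets as relation-labeled edges.
triplets = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "located_in", "Europe"),
    ("Einstein", "born_in", "Germany"),
]

G = nx.Graph()
for h, r, t in triplets:
    G.add_edge(h, t, relation=r)

# Placeholder triplet-level knowledgeability scores in [0, 1].
triplet_score = {(h, t): rng.uniform() for h, _, t in triplets}

def entity_score(node):
    """Entity-level knowledgeability: mean score over incident triplets."""
    scores = [triplet_score.get((u, v), triplet_score.get((v, u)))
              for u, v in G.edges(node)]
    return float(np.mean(scores))

k = {n: entity_score(n) for n in G.nodes}

# Knowledge homophily: correlation between an entity's score and the average
# score of its neighbors (one simple way to quantify the claim that
# topologically close entities are similarly well known to the model).
own = np.array([k[n] for n in G.nodes])
neigh = np.array([np.mean([k[m] for m in G.neighbors(n)]) for n in G.nodes])
homophily = np.corrcoef(own, neigh)[0, 1]
print(f"knowledge homophily (toy): {homophily:.3f}")
```

In the full setting, the neighbor-average step would be replaced by a learned graph model (e.g., a GNN regressor over the knowledge graph), and entities predicted to have low knowledgeability would be prioritized for knowledge checking and fine-tuning.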
Community
The following similar papers were recommended by the Semantic Scholar API:
- Injecting Knowledge Graphs into Large Language Models (2025)
- ConceptFormer: Towards Efficient Use of Knowledge-Graph Embeddings in Large Language Models (2025)
- SEMMA: A Semantic Aware Knowledge Graph Foundation Model (2025)
- Towards Explainable Temporal Reasoning in Large Language Models: A Structure-Aware Generative Framework (2025)
- LightPROF: A Lightweight Reasoning Framework for Large Language Model on Knowledge Graph (2025)
- Beyond Completion: A Foundation Model for General Knowledge Graph Reasoning (2025)
- Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation (2025)