Sherirto committed on
Commit 73a2a07 · verified · 1 Parent(s): 7a0f113

Update README.md

Files changed (1)
  1. README.md +214 -29
README.md CHANGED
@@ -8,45 +8,39 @@ tags:
8
  - benchmark
9
  ---
10
 
11
- # MAGB
12
 
13
- This repository contains the Multimodal Attribute Graph Benchmark (MAGB) datasets described in the paper [When Graph meets Multimodal: Benchmarking on Multimodal Attributed Graphs Learning](https://huggingface.co/papers/2410.09132).
14
 
15
- [Github repository](https://github.com/sktsherlock/MAGB)
16
 
17
- MAGB provides 5 datasets from E-Commerce and Social Networks, and evaluates two major learning paradigms: _**GNN-as-Predictor**_ and **_VLM-as-Predictor_**. The datasets are publicly available on Hugging Face: [https://huggingface.co/datasets/Sherirto/MAGB](https://huggingface.co/datasets/Sherirto/MAGB).
18
 
 
 
 
19
 
20
- Each dataset consists of several parts:
21
-
22
- - Graph Data (*.pt): Stores the graph structure, including adjacency information and node labels. Loadable using DGL.
23
- - Node Textual Metadata (*.csv): Contains node textual descriptions, neighborhood relationships, and category labels.
24
- - Text, Image, and Multimodal Features (TextFeature/, ImageFeature/, MMFeature/): Pre-extracted embeddings from the MAGB paper for different modalities.
25
- - Raw Images (*.tar.gz): A compressed folder containing images named by node IDs. Requires extraction before use. The Reddit-M dataset is particularly large and may require special handling (see Github README for details).
26
-
27
 
28
- ## 📖 Table of Contents
29
- - [📖 Introduction](#-introduction)
30
  - [💻 Installation](#-installation)
31
- - [🚀 Usage](#-usage)
32
- - [📊 Results](#-results)
33
- - [🤝 Contributing](#-contributing)
34
- - [❓ FAQ](#-faq)
35
 
36
  ---
37
 
38
- ## 📖 Introduction
 
39
  Multimodal attributed graphs (MAGs) incorporate multiple data types (e.g., text, images, numerical features) into graph structures, enabling more powerful learning and inference capabilities.
40
  This benchmark provides:
41
  ✅ **Standardized datasets** with multimodal attributes.
42
  ✅ **Feature extraction pipelines** for different modalities.
43
  ✅ **Evaluation metrics** to compare different models.
44
- ✅ **Baselines and benchmarks** to accelerate research.
45
 
46
  ---
47
 
48
- ## 💻 Installation
49
- Ensure you have the required dependencies installed before running the benchmark.
 
50
 
51
  ```bash
52
  # Clone the repository
@@ -56,20 +50,211 @@ cd MAGB
56
  # Install dependencies
57
  pip install -r requirements.txt
58
  ```
59
- ## 🚀 Usage
60
 
61
- ### 1. Download the datasets from [MAGB](https://huggingface.co/datasets/Sherirto/MAGB). 👐
 
 
62
 
63
  ```bash
64
  cd Data/
65
  sudo apt-get update && sudo apt-get install git-lfs && git clone https://huggingface.co/datasets/Sherirto/MAGB .
66
  ls
67
  ```
68
- Now, you can see the **Movies**, **Toys**, **Grocery**, **Reddit-S** and **Reddit-M** under the **''Data''** folder.
69
 
70
- <p align="center">
71
- <img src="Figure/Dataset.jpg" width="900"/>
72
- <p>
 
73
 
74
- ### 2. GNN-as-Predictor
75
- ...(rest of the content from Github README can be pasted here)
 
8
  - benchmark
9
  ---
10
 
11
+ # MAGB: A Comprehensive Benchmark for Multimodal Attributed Graphs
12
 
 
13
 
14
+ In many real-world scenarios, graph nodes are associated with multimodal attributes, such as text and images, resulting in **Multimodal Attributed Graphs (MAGs)**.
15
 
16
+ MAGB provides 5 datasets from E-Commerce and Social Networks, and evaluates two major learning paradigms: _**GNN-as-Predictor**_ and **_VLM-as-Predictor_**. The datasets are publicly available:
17
 
18
+ <p>
19
+ 🤗 <a href="https://huggingface.co/datasets/Sherirto/MAGB">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;📑 <a href="https://arxiv.org/abs/2410.09132">Paper</a>
20
+ </p>
21
 
22
+ ## 📖 Table of Contents
 
 
 
 
 
 
23
 
24
+ - [📖 Introduction](#-introduction)
 
25
  - [💻 Installation](#-installation)
26
+ - [🚀 Usage](#-usage)
+ - [🤝 Contributing](#-contributing)
+ - [📚 Citation](#-citation)
 
 
 
27
 
28
  ---
29
 
30
+ ## 📖 Introduction
31
+
32
  Multimodal attributed graphs (MAGs) incorporate multiple data types (e.g., text, images, numerical features) into graph structures, enabling more powerful learning and inference capabilities.
33
  This benchmark provides:
34
  ✅ **Standardized datasets** with multimodal attributes.
35
  ✅ **Feature extraction pipelines** for different modalities.
36
  ✅ **Evaluation metrics** to compare different models.
37
+ ✅ **Baselines and benchmarks** to accelerate research.
38
 
39
  ---
40
 
41
+ ## 💻 Installation
42
+
43
+ Ensure you have the required dependencies installed before running the benchmark.
44
 
45
  ```bash
46
  # Clone the repository
 
50
  # Install dependencies
51
  pip install -r requirements.txt
52
  ```
 
53
 
54
+ # 🚀 Usage
55
+
56
+ ## 1. Download the datasets from [MAGB](https://huggingface.co/datasets/Sherirto/MAGB). 👐
57
 
58
  ```bash
59
  cd Data/
60
  sudo apt-get update && sudo apt-get install git-lfs && git clone https://huggingface.co/datasets/Sherirto/MAGB .
61
  ls
62
  ```
 
63
 
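+ If `git-lfs` is not available, the same files can be fetched with the `huggingface_hub` Python client instead; the sketch below assumes the `huggingface_hub` package is installed, and the git command above remains the reference workflow.
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the MAGB dataset repository into the local Data/ directory
+ snapshot_download(repo_id="Sherirto/MAGB", repo_type="dataset", local_dir="Data")
+ ```
+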
64
+ Now you can see the **Movies**, **Toys**, **Grocery**, **Reddit-S**, and **Reddit-M** datasets under the **Data** folder.
65
+
66
+
67
+ Each dataset consists of several parts, including:
68
+
69
+ - Graph Data (\*.pt): Stores the graph structure, including adjacency information and node labels. It can be loaded using DGL.
70
+ - Node Textual Metadata (\*.csv): Contains node textual descriptions, neighborhood relationships, and category labels.
71
+ - Text, Image, and Multimodal Features (TextFeature/, ImageFeature/, MMFeature/): Pre-extracted embeddings from the MAGB paper for different modalities.
72
+ - Raw Images (\*.tar.gz): A compressed folder containing images named by node IDs. It needs to be extracted before use.
73
+
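+ As a quick sanity check after downloading, the graph file and a pre-extracted feature matrix can be loaded as follows. This is a minimal sketch, assuming DGL and NumPy are installed; the exact node-data field names may vary per dataset.
+
+ ```python
+ import dgl
+ import numpy as np
+
+ # dgl.load_graphs returns (list_of_graphs, label_dict); the benchmark graph is stored first
+ graphs, _ = dgl.load_graphs("Data/Movies/MoviesGraph.pt")
+ g = graphs[0]
+
+ # Load a pre-extracted unimodal feature matrix (one row per node)
+ features = np.load("Data/Movies/TextFeature/Movies_roberta_base_512_mean.npy")
+
+ print(g)               # nodes, edges, and available ndata/edata fields
+ print(features.shape)  # should match the number of nodes in g
+ ```
+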
74
+ Because the Reddit-M image archive is too large to ship as a single file, it is split into several parts; run the commands below to reassemble and extract it.
75
+
76
+ ```bash
77
+ cd MAGB/Data/
78
+ cat RedditMImages_parta RedditMImages_partb RedditMImages_partc > RedditMImages.tar.gz
79
+ tar -xvzf RedditMImages.tar.gz
80
+ ```
81
+
82
+ ## 2. Experiments
83
+
84
+ In this section, we show how to run experiments for both the GNN-as-Predictor and VLM-as-Predictor paradigms.
85
+
86
+ ### GNN-as-Predictor
87
+
88
+ #### 🧩 Node Classification
89
+
90
+ In the `GNN/Library` directory, we provide the code for the models evaluated in the paper, including `GCN`, `GraphSAGE`, `GAT`, `RevGAT`, and `MLP`. Additionally, we have added graph learning models such as `APPNP`, `SGC`, `Node2Vec`, and `DeepWalk` for your use. Below, we show the commands for node classification using `GCN` on the Movies dataset in two scenarios: 3-shot learning and supervised learning.
91
+
92
+ ```bash
93
+ python GNN/Library/GCN.py --graph_path Data/Movies/MoviesGraph.pt --feature Data/Movies/TextFeature/Movies_roberta_base_512_mean.npy --fewshots 3
94
+ ```
95
+
96
+ ```bash
97
+ python GNN/Library/GCN.py --graph_path Data/Movies/MoviesGraph.pt --feature Data/Movies/TextFeature/Movies_roberta_base_512_mean.npy --train_ratio 0.6 --val_ratio 0.2
98
+ ```
99
+
100
+ Note: The file `Movies_roberta_base_512_mean.npy` contains the textual features of the Movies dataset extracted using the RoBERTa-Base model. `512` indicates the maximum text length used, and `mean` indicates that mean pooling was applied to extract the features. You can use the features we provide or extract your own.
101
+
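+ If you prefer to extract your own features, the sketch below shows one way to produce mean-pooled RoBERTa embeddings in the documented format. It is illustrative only (not the exact MAGB extraction script); the CSV path and the `text` column name are hypothetical placeholders.
+
+ ```python
+ import numpy as np
+ import pandas as pd
+ import torch
+ from transformers import AutoTokenizer, AutoModel
+
+ # Hypothetical metadata file and column name; adjust to the dataset's actual CSV schema
+ texts = pd.read_csv("Data/Movies/Movies.csv")["text"].fillna("").tolist()
+
+ tokenizer = AutoTokenizer.from_pretrained("roberta-base")
+ model = AutoModel.from_pretrained("roberta-base").eval()
+
+ embeddings = []
+ with torch.no_grad():
+     for i in range(0, len(texts), 32):  # simple mini-batching
+         batch = tokenizer(texts[i:i + 32], padding=True, truncation=True,
+                           max_length=512, return_tensors="pt")
+         out = model(**batch).last_hidden_state        # (B, L, H)
+         mask = batch["attention_mask"].unsqueeze(-1)  # (B, L, 1)
+         embeddings.append((out * mask).sum(1) / mask.sum(1))  # mean over valid tokens
+
+ np.save("Data/Movies/TextFeature/Movies_roberta_base_512_mean.npy",
+         torch.cat(embeddings).numpy())
+ ```
+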
102
+ Similarly, you can replace `GCN.py` with the corresponding script for other models, such as `GraphSAGE.py`, `GAT.py`, etc. All node classification scripts require the graph data path and the corresponding feature file; the other basic parameters are defined in `GNN/Utils/model_config.py`.
103
+
104
+ Below are the key parameters related to model training, along with their default values and descriptions:
105
+
106
+ | Parameter | Type | Default Value | Description |
107
+ | ------------------- | ------- | ------------- | ----------------------------------------------------------- |
108
+ | `--n-runs` | `int` | `3` | Number of runs for averaging results. |
109
+ | `--lr` | `float` | `0.005` | Learning rate for model optimization. |
110
+ | `--n-epochs` | `int` | `1000` | Total number of training epochs. |
111
+ | `--n-layers` | `int` | `3` | Number of layers in the model. |
112
+ | `--n-hidden` | `int` | `256` | Number of hidden units per layer. |
113
+ | `--dropout` | `float` | `0.5` | Dropout rate to prevent overfitting. |
114
+ | `--label-smoothing` | `float` | `0.1` | Smoothing factor for label smoothing to reduce overfitting. |
115
+ | `--train_ratio` | `float` | `0.6` | Proportion of the dataset used for training. |
116
+ | `--val_ratio` | `float` | `0.2` | Proportion of the dataset used for validation. |
117
+ | `--fewshots` | `int` | `None` | Number of samples for few-shot learning. |
118
+ | `--metric` | `str` | `'accuracy'` | Evaluation metric (e.g., accuracy, precision, recall, f1). |
119
+ | `--average` | `str` | `'macro'` | Averaging method (e.g., weighted, micro, macro). |
120
+ | `--graph_path` | `str` | `None` | Path to the graph dataset file (e.g., `.pt` file). |
121
+ | `--feature` | `str` | `None` | Specifies the unimodal feature embedding to use as input. |
122
+ | `--undirected` | `bool` | `True` | Whether to treat the graph as undirected. |
123
+ | `--selfloop` | `bool` | `True` | Whether to add self-loops to the graph. |
124
+
125
+ Note: Some models may have their own unique parameters, such as 'edge-drop' for `RevGAT` and `GAT`. For these parameters, please refer to the respective code for details.
126
+
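+ For example, a supervised `GraphSAGE` run that overrides several of the defaults above could look like the following sketch (the flags are the shared ones listed in the table; adapt the paths and values to your setup):
+
+ ```bash
+ python GNN/Library/GraphSAGE.py \
+     --graph_path Data/Movies/MoviesGraph.pt \
+     --feature Data/Movies/TextFeature/Movies_roberta_base_512_mean.npy \
+     --train_ratio 0.6 --val_ratio 0.2 \
+     --n-hidden 256 --n-layers 3 --lr 0.005 \
+     --metric f1 --average macro --n-runs 3
+ ```
+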
127
+ #### 🔗 Link Prediction
128
+
129
+ In the `GNN/LinkPrediction` directory, we provide the code for link prediction experiments using three backbone models: `GCN`, `GraphSAGE`, and `MLP`. Below, we demonstrate the code for running link prediction using `GCN` on the `Movies` dataset. The parameters for `GraphSAGE` and `MLP` are similar, and you can replace `GCN.py` with `SAGE.py` or `MLP.py` to run experiments with those models.
130
+
131
+ ```bash
132
+ python GNN/LinkPrediction/GCN.py \
133
+ --n-hidden 256 \
134
+ --n-layers 3 \
135
+ --n-runs 5 \
136
+ --lr 0.001 \
137
+ --neg_len 5000 \
138
+ --dropout 0.2 \
139
+ --batch_size 2048 \
140
+ --graph_path Data/Movies/MoviesGraph.pt \
141
+ --feature Data/Movies/TextFeature/Movies_Llama_3.2_1B_Instruct_512_mean.npy \
142
+ --link_path Data/LinkPrediction/Movies/
143
+ ```
144
+
145
+ Below are the unique parameters specifically used for link prediction tasks:
146
+
147
+ | Parameter | Type | Default Value | Description |
148
+ | -------------- | ----- | ------------- | ------------------------------------------------------------------------------------------ |
149
+ | `--neg_len` | `int` | `5000` | Number of negative samples used for training. |
150
+ | `--batch_size` | `int` | `2048` | Batch size for training. |
151
+ | `--link_path` | `str` | `None` | Path to the directory containing link prediction data (e.g., positive and negative edges). |
152
 
153
+ These parameters are critical for handling the unique requirements of link prediction tasks, such as generating and managing negative samples, processing large datasets efficiently, and specifying the location of link prediction data.
154
+
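+ To illustrate what the negative-sample handling involves (independent of how the MAGB `LinkPrediction` scripts build their data, which may differ), here is a small DGL sketch that draws uniformly random non-edges from a loaded graph:
+
+ ```python
+ import dgl
+
+ # Load the Movies graph and sample negative (non-existent) edges for link prediction
+ graphs, _ = dgl.load_graphs("Data/Movies/MoviesGraph.pt")
+ g = graphs[0]
+
+ neg_len = 5000  # mirrors the --neg_len default above
+ neg_src, neg_dst = dgl.sampling.global_uniform_negative_sampling(g, neg_len)
+ print(neg_src.shape[0], "negative edges sampled")  # may be slightly fewer than neg_len
+ ```
+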
155
+ ### VLM-as-Predictor
156
+
157
+ The `MLLM/Zero-shot.py` script is designed for zero-shot node classification tasks using multimodal large language models (MLLMs). Below are the key command-line arguments for this script:
158
+
159
+ | Parameter | Type | Default Value | Description |
160
+ | -------------------- | ----- | -------------------------------------------- | ------------------------------------------------------------------------- |
161
+ | `--model_name` | `str` | `'meta-llama/Llama-3.2-11B-Vision-Instruct'` | HuggingFace model name or path. |
162
+ | `--dataset_name` | `str` | `'Movies'` | Name of the dataset (corresponds to a subdirectory in the `Data` folder). |
163
+ | `--base_dir` | `str` | `Project root directory` | Path to the root directory of the project. |
164
+ | `--max_new_tokens` | `int` | `15` | Maximum number of tokens to generate. |
165
+ | `--neighbor_mode` | `str` | `'both'` | Mode for using neighbor information (`text`, `image`, or `both`). |
166
+ | `--use_center_text` | `str` | `'True'` | Whether to use the center node's text. |
167
+ | `--use_center_image` | `str` | `'True'` | Whether to use the center node's image. |
168
+ | `--add_CoT` | `str` | `'False'` | Whether to add Chain of Thought (CoT) reasoning. |
169
+ | `--num_samples` | `int` | `5` | Number of test samples to evaluate. |
170
+ | `--num_neighbours` | `int` | `0` | Number of neighbors to consider for each node. |
171
+
172
+ Below, we present the code for performing zero-shot node classification on the `Movies` dataset using the `LLaMA-3.2-11B Vision Instruct` model with different strategies. This is provided to help researchers reproduce the experimental results presented in our paper.
173
+
174
+ 1. $\text{Center-only}$
175
+
176
+ ```bash
177
+ python MLLM/Zero-shot.py --model_name meta-llama/Llama-3.2-11B-Vision-Instruct --num_samples 300 --max_new_tokens 30 --dataset_name Movies
178
+ ```
179
+
180
+ 2. $\text{GRE-T}_{k=1}$
181
+
182
+ ```bash
183
+ python MLLM/Zero-shot.py --model_name meta-llama/Llama-3.2-11B-Vision-Instruct --num_neighbours 1 --neighbor_mode text --num_samples 300 --max_new_tokens 30 --dataset_name Movies
184
+ ```
185
+
186
+ 3. $\text{GRE-V}_{k=1}$
187
+
188
+ ```bash
189
+ python MLLM/Zero-shot.py --model_name meta-llama/Llama-3.2-11B-Vision-Instruct --num_neighbours 1 --neighbor_mode image --num_samples 300 --max_new_tokens 30 --dataset_name Movies
190
+ ```
191
+
192
+ 4. $\text{GRE-M}_{k=1}$
193
+
194
+ ```bash
195
+ python MLLM/Zero-shot.py --model_name meta-llama/Llama-3.2-11B-Vision-Instruct --num_neighbours 1 --neighbor_mode both --num_samples 300 --max_new_tokens 30 --dataset_name Movies
196
+ ```
197
+
198
+ Please note that both the VLMs and GNNs used the same original test set for the node classification task. However, for efficiency during VLM testing, we randomly selected 300 samples from this original test set.
199
+ We observed that the experimental results obtained on this subset did not deviate significantly from those obtained on the complete test set.
200
+
201
+ #### 🔧 Customizing `load_model_and_processor` for Unsupported VLMs
202
+
203
+ The `load_model_and_processor` function in `MLLM/Library.py` is designed to load specific models and their corresponding processors from the Hugging Face library. If you want to use a model that is not currently supported, you can modify this function to include your custom model. Below is an example to guide you through the process.
204
+
205
+ #### Example: Adding Support for a Custom Model
206
+
207
+ Suppose you want to add support for a new model, `custom-org/custom-model-7B`, which uses the `AutoModelForCausalLM` class and `AutoProcessor`. Here's how you can modify the `load_model_and_processor` function:
208
+
209
+ 1. Open the `MLLM/Library.py` file.
210
+ 2. Locate the `model_mapping` dictionary inside the `load_model_and_processor` function.
211
+ 3. Add a new entry for your custom model.
212
+
213
+ Here is the modified code:
214
+
215
+ ```python
216
+ def load_model_and_processor(model_name: str):
217
+     """
218
+     Load the model and processor based on the Hugging Face model name.
219
+     """
220
+     model_mapping = {
221
+         "meta-llama/Llama-3.2-11B-Vision-Instruct": {
222
+             "model_cls": MllamaForConditionalGeneration,
223
+             "processor_cls": AutoProcessor,
224
+         },
225
+         "custom-org/custom-model-7B": {  # Add your custom model here
226
+             "model_cls": AutoModelForCausalLM,  # Replace with the correct model class
227
+             "processor_cls": AutoProcessor,  # Replace with the correct processor class
228
+         },
229
+         # Other existing models...
230
+     }
231
+
232
+     # Other existing code...
233
+
234
+     return model, processor
235
+ ```
236
+
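+ With the new entry in place, the loader is used exactly as before. A minimal usage sketch (the model name is the hypothetical placeholder from the example above):
+
+ ```python
+ # Resolve the model and processor classes registered in model_mapping
+ model, processor = load_model_and_processor("custom-org/custom-model-7B")
+ print(type(model).__name__, type(processor).__name__)
+ ```
+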
237
+ ## 🤝 Contributing
238
+
239
+ We welcome contributions to **MAGB**. To contribute:
240
+
241
+ 1. Fork the repository.
242
+ 2. Create a new branch for your feature or bug fix.
243
+ 3. Submit a pull request with a detailed description of your changes.
244
+
245
+ For major changes, please open an issue first to discuss what you would like to change.
246
+
247
+ ## 📚 Citation
248
+
249
+ If you use MAGB in your research, please cite our paper:
250
+
251
+ ```bibtex
252
+ @misc{yan2025graphmeetsmultimodalbenchmarking,
253
+ title={When Graph meets Multimodal: Benchmarking and Meditating on Multimodal Attributed Graphs Learning},
254
+ author={Hao Yan and Chaozhuo Li and Jun Yin and Zhigang Yu and Weihao Han and Mingzheng Li and Zhengxin Zeng and Hao Sun and Senzhang Wang},
255
+ year={2025},
256
+ eprint={2410.09132},
257
+ archivePrefix={arXiv},
258
+ url={https://arxiv.org/abs/2410.09132},
259
+ }
260
+ ```