---
license: mit
task_categories:
- video-text-to-text
- robotics
---
# Magma: A Foundation Model for Multimodal AI Agents
[Jianwei Yang](https://jwyang.github.io/)*1†
[Reuben Tan](https://cs-people.bu.edu/rxtan/)1†
[Qianhui Wu](https://qianhuiwu.github.io/)1†
[Ruijie Zheng](https://ruijiezheng.com/)2‡
[Baolin Peng](https://scholar.google.com/citations?user=u1CNjgwAAAAJ&hl=en&oi=ao)1‡
[Yongyuan Liang](https://cheryyunl.github.io)2‡
[Yu Gu](http://yu-gu.me/)1
[Mu Cai](https://pages.cs.wisc.edu/~mucai/)3
[Seonghyeon Ye](https://seonghyeonye.github.io/)4
[Joel Jang](https://joeljang.github.io/)5
[Yuquan Deng](https://scholar.google.com/citations?user=LTC0Q6YAAAAJ&hl=en)5
[Lars Liden](https://sites.google.com/site/larsliden)1
[Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/)1▽
1 Microsoft Research; 2 University of Maryland; 3 University of Wisconsin-Madison
4 KAIST; 5 University of Washington
* Project lead † First authors ‡ Second authors ▽ Leadership
\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\] \[[Project Page](https://microsoft.github.io/Magma/)\] \[[Hugging Face Paper](https://huggingface.co/papers/2502.13130)\] \[[Github Repo](https://github.com/microsoft/Magma)\] \[[Video](https://www.youtube.com/watch?v=SbfzvUU5yM8)\]
## Introduction
This dataset contains the robotic manipulation data used in Magma pretraining. For a fair comparison, we followed OpenVLA and used the data mix "siglip-224px+mx-oxe-magic-soup".
The dataset is organized by source dataset, with each source folder containing one or more Arrow shards:
| Folder | Number of Shards |
|:------------------------------------------------------|-------------------:|
| ego4d | 15 |
| sthv2 | 6 |
| instruct_video | 14 |
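To see which Arrow shards make up each folder before streaming, you can list the repository contents with `huggingface_hub`. This is only an illustrative sketch (not part of the original card); it assumes the folder names above double as path prefixes in the repo:

```py
from huggingface_hub import list_repo_files

# List every file in the dataset repo and count the shards under each source folder.
files = list_repo_files("MagmaAI/Magma-Video-ToM", repo_type="dataset")
for folder in ("ego4d", "sthv2", "instruct_video"):
    shards = [f for f in files if f.startswith(folder + "/")]
    print(f"{folder}: {len(shards)} files")
```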
### Features
In addition to the default features, we extracted the visual traces of the next 16 frames for each frame. The dataset contains the following fields (a quick inspection sketch follows the list):
- `dataset_name`: Original source dataset name
- `video_name`: Name of the source video
- `task_string`: Description of the task
- `start_time`: Starting timestamp of the video segment
- `end_time`: Ending timestamp of the video segment
- `frame_index`: Starting index of the frame in the video segment
- `height`: Resized image height used for visual trace extraction
- `width`: Resized image width used for visual trace extraction
- `trace`: Robot trajectory trace (serialized numpy array)
- `trace_visibility`: Visibility mask for the trace (serialized numpy array)
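As a quick sanity check, the sketch below (a hypothetical example, not from the original card) streams one record from the `sthv2` folder and prints each field's type, which makes it easy to see that `trace` and `trace_visibility` arrive as serialized bytes:

```py
from datasets import load_dataset

ds = load_dataset("MagmaAI/Magma-Video-ToM", data_dir="sthv2", streaming=True, split="train")
example = next(iter(ds))
for key, value in example.items():
    # Serialized numpy arrays show up as raw bytes; the rest are plain Python values.
    summary = f"{len(value)} bytes" if isinstance(value, bytes) else value
    print(key, type(value).__name__, summary)
```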
## Dataset Loading
### Full Dataset Load
```py
from datasets import load_dataset
dataset = load_dataset("MagmaAI/Magma-Video-ToM", streaming=True, split="train")
```
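Because `streaming=True` returns an `IterableDataset`, samples are fetched lazily instead of being downloaded up front. A small usage sketch (hypothetical, assuming the field names listed above) for previewing a couple of records:

```py
# Peek at the first two streamed records without materializing the full dataset.
for example in dataset.take(2):
    print(example['dataset_name'], example['video_name'], example['task_string'])
```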
### Individual Dataset Load
Alternatively, load a single source dataset by specifying its folder name:
```py
from datasets import load_dataset
dataset = load_dataset("MagmaAI/Magma-Video-ToM", data_dir="sthv2", streaming=True, split="train")
```
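If you want to mix several source folders in a single stream, one option (not from the original card) is `datasets.interleave_datasets`, which alternates samples from multiple streaming datasets:

```py
from datasets import load_dataset, interleave_datasets

# Hypothetical example: stream two source folders and interleave their samples.
sthv2 = load_dataset("MagmaAI/Magma-Video-ToM", data_dir="sthv2", streaming=True, split="train")
ego4d = load_dataset("MagmaAI/Magma-Video-ToM", data_dir="ego4d", streaming=True, split="train")
mixed = interleave_datasets([sthv2, ego4d])
```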
### Sample Decoding
```py
import io
import pickle

from PIL import Image

# Helper function to deserialize binary fields
def deserialize_array(bytes_data):
    return pickle.loads(bytes_data)

# Helper function to convert binary image data to a PIL Image
def bytes_to_image(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

for i, example in enumerate(dataset):
    # decode trace: 1 x 16 x 256 x 2
    trace = deserialize_array(example['trace'])
    # decode trace visibility: 1 x 16 x 256 x 1
    trace_visibility = deserialize_array(example['trace_visibility'])
```
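If you only need the visible points of a trace, a minimal sketch (assuming the array shapes noted in the comments above) is to apply the visibility mask per frame:

```py
import numpy as np

trace = np.asarray(trace)                     # 1 x 16 x 256 x 2
visibility = np.asarray(trace_visibility)     # 1 x 16 x 256 x 1
mask = visibility[0, ..., 0] > 0              # 16 x 256 boolean mask
visible_points_frame0 = trace[0, 0][mask[0]]  # (num_visible, 2) points in the first frame
```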
**NOTE**: The temporal length of the traces for video data is 16, as we excluded the starting frame. For all robotics data, it is 17, as we did not exclude the starting frame.