TransformerBlock2Vec

This model card provides an overview of the TransformerBlock2Vec model, a transformer-based embedding model designed to create a 144-dimensional embedding space for Minecraft build chunks (up to 16x8x16 blocks). It uses 3D Rotary Positional Embeddings (RoPE) and is trained to predict masked blocks in a sequence, enabling downstream tasks like build vs. terrain segmentation.

Model Details

Model Description

TransformerBlock2Vec is a transformer-based model that maps 3D Minecraft build chunks (up to 16x8x16 blocks) into a 144-dimensional embedding space. It leverages a custom 3D RoPE implementation to encode block positions and is trained using a masked language modeling approach, predicting 30% of masked blocks in a sequence. It uses DeepSpeed and FlashAttention for efficient training on consumer hardware (e.g., RTX 4070).
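
As a rough illustration of the positional scheme, the sketch below shows one common way to extend rotary embeddings to three axes: the per-head dimension (144 / 8 heads = 18) is split into three chunks, and each chunk is rotated by the block's x, y, or z coordinate. The function names and the even three-way split are assumptions for illustration, not necessarily the repository's exact implementation.

```python
import torch

def rope_1d(x, pos, base=10000.0):
    """Standard rotary embedding along one axis.
    x: (..., seq, d) with d even; pos: (seq,) integer positions."""
    d = x.shape[-1]
    inv_freq = 1.0 / (base ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
    angles = pos[:, None].float() * inv_freq[None, :]   # (seq, d/2)
    sin, cos = angles.sin(), angles.cos()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def rope_3d(q, coords):
    """Assumed 3D RoPE: split the head dimension into three equal chunks
    and rotate each by the corresponding block coordinate.
    q: (batch, heads, seq, head_dim); coords: (seq, 3) integer (x, y, z)."""
    d = q.shape[-1] // 3
    parts = []
    for axis in range(3):
        chunk = q[..., axis * d:(axis + 1) * d]
        parts.append(rope_1d(chunk, coords[:, axis]))
    return torch.cat(parts, dim=-1)
```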

  • Model type: Transformer-based embedding model
  • License: MIT

Uses

The model reliably distinguishes user-made builds from terrain in Minecraft worlds, achieving 95% accuracy on unseen data for build vs. terrain segmentation. The embedding space for 3D Minecraft data also enables downstream tasks such as search and retrieval, generative AI, and context understanding (bots).

Direct Use

TransformerBlock2Vec can be used to generate 144-dimensional embeddings for Minecraft build chunks, enabling tasks such as clustering similar builds, visualizing build distributions using t-SNE, or classifying chunks as builds vs. terrain. It is particularly suited for extracting meaningful representations from Minecraft data.

Downstream Use

The model supports downstream tasks like:

  • Build vs. Terrain Segmentation: Identifying user-made structures in raw Minecraft worlds with 95% accuracy.
  • Schematic Search Engine: Enabling nearest-neighbor searches for similar builds based on embedding similarity (see the similarity-search sketch after this list).
  • Generative Model Pretraining: Providing embeddings for text-to-voxel generative models.
  • Duplicate Analysis: Near-duplicate builds sit close together in the embedding space, which provides an effective way to remove duplicates from the corpus.
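
As a concrete example of the schematic-search use case, the sketch below runs a cosine-similarity nearest-neighbor search over precomputed 144-dimensional chunk embeddings. The array names and the offline embedding step are assumptions; the repository may organize this differently.

```python
import numpy as np

def top_k_similar(query_vec, corpus_vecs, k=5):
    """Cosine-similarity nearest-neighbor search over precomputed
    144-dimensional chunk embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    scores = c @ q                      # (num_chunks,)
    idx = np.argsort(-scores)[:k]
    return idx, scores[idx]

# Assumed inputs:
#   corpus_vecs: (num_chunks, 144) array of embeddings built offline
#   query_vec:   (144,) embedding of the chunk being searched for
```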

Out-of-Scope Use

  • The model is not integrated into the Minecraft game itself; real-time in-game use would require a separate application or a mod written in Java.
  • Not suitable for predicting block properties (e.g., stair directions) without additional finetuning.
  • Not intended for non-Minecraft 3D data.

Bias, Risks, and Limitations

  • Bias: The model is trained on a dataset of Minecraft schematics, which may overrepresent certain build styles or block types based on the scraped data sources.
  • Risks: Misclassification of terrain as builds (or vice versa) could lead to incorrect data extraction in downstream tasks.
  • Limitations:
    • Limited to chunks of 16x8x16 or smaller.
    • Excludes block properties (e.g., stair orientations) to reduce vocabulary size.
    • Requires significant compute for training (5 days on an RTX 4070 with DeepSpeed).
    • Performance may degrade on highly unique or novel builds not represented in the training data.

Recommendations

Users should:

  • Validate model outputs for specific use cases, especially with modded or atypical builds.
  • Consider finetuning for tasks requiring block property predictions.
  • Be aware of potential biases in the training data and augment the dataset if targeting underrepresented build styles.

How to Get Started with the Model
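
The repository does not yet ship a packaged inference API, so the snippet below is only a hypothetical sketch of loading a checkpoint and embedding a single chunk. The module name, constructor arguments, checkpoint filename, and mean pooling over the sequence are all assumptions; consult the GitHub repository for the actual entry points.

```python
import torch

# Hypothetical import: the actual module and checkpoint layout live in
# https://github.com/Kingburrito777/TransformerBlock2Vec and may differ.
from model import TransformerBlock2Vec

model = TransformerBlock2Vec(vocab_size=1099, dim=144, layers=6, heads=8)
model.load_state_dict(torch.load("checkpoint.pt", map_location="cpu"))
model.eval()

# A chunk of up to 16x8x16 block IDs flattened into a token sequence and
# padded with the PAD token (1097); shape (1, seq_len).
chunk_tokens = torch.full((1, 16 * 8 * 16), 1097, dtype=torch.long)

with torch.no_grad():
    hidden = model(chunk_tokens)        # assumed output: (1, seq_len, 144)
    embedding = hidden.mean(dim=1)      # pooled 144-dimensional chunk embedding

print(embedding.shape)                  # torch.Size([1, 144])
```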

Training Details

Training Data

The model is trained on a PostgreSQL database of approximately 108 billion tokens, the largest known dataset of its kind. Schematics are converted to the Litematica format and augmented with metadata (e.g., block counts, dimensions). Chunks of up to 16x8x16 are extracted, and only non-empty chunks are used for training. Schematic files are preprocessed with a "terrain-removal" algorithm to reduce the redundancy of terrain-like builds in the training data, making the model more directly suited to embedding builds rather than overrepresented generated terrain. Curating and refining the data is paramount to the model's success in downstream tasks and is currently the most difficult part of this endeavor.

Training Procedure

Preprocessing

  • Schematics are loaded using an optimized litematica parser.
  • Chunks are augmented with random rotations (25% probability) and flips (25% probability per axis).
  • Sequences are padded with a PAD token (1097), separated by a SEP token (1096), and masked with a MASK token (1098) for 30% of tokens (see the sketch after this list).
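
A minimal sketch of the augmentation and masking steps described above, using the token IDs and probabilities from this card (PAD 1097, SEP 1096, MASK 1098, 25% rotation/flips, 30% masking). The helper names and the -100 ignore-label convention are illustrative assumptions.

```python
import numpy as np

PAD, SEP, MASK = 1097, 1096, 1098
MASK_RATE = 0.30

def augment(chunk):
    """Random horizontal rotation (25% chance) and per-axis flips (25% each)
    on an (x, y, z) array of block IDs."""
    if np.random.rand() < 0.25:
        chunk = np.rot90(chunk, k=np.random.randint(1, 4), axes=(0, 2))
    for axis in range(3):
        if np.random.rand() < 0.25:
            chunk = np.flip(chunk, axis=axis)
    return chunk

def mask_sequence(tokens):
    """Mask 30% of non-special tokens for the masked-block objective.
    Returns (inputs, labels); labels are -100 where no prediction is required."""
    tokens = np.asarray(tokens)
    labels = np.full_like(tokens, -100)
    candidates = np.where((tokens != PAD) & (tokens != SEP))[0]
    picked = candidates[np.random.rand(len(candidates)) < MASK_RATE]
    labels[picked] = tokens[picked]
    inputs = tokens.copy()
    inputs[picked] = MASK
    return inputs, labels
```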

Training Hyperparameters

  • Batch size: 86 with 4 gradient accumulation steps (effective batch size of 344 under the DeepSpeed configuration)
  • Learning rate: 2e-4 warmup, 1e-5 thereafter.
  • Epochs: 4 (took 5 days!)
  • Optimizer: DeepSpeed-managed AdamW
  • Dropout: 0.1
  • Training regime: Mixed precision (fp16) with DeepSpeed (see the configuration sketch after this list)
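
The sketch below shows one plausible DeepSpeed configuration consistent with the numbers above (fp16, AdamW, micro-batch 86, 4 accumulation steps). The exact configuration used for training lives in the repository and may differ; the weight decay value here is an assumption.

```python
import deepspeed  # assumed to be installed alongside PyTorch

ds_config = {
    "train_micro_batch_size_per_gpu": 86,
    "gradient_accumulation_steps": 4,
    "fp16": {"enabled": True},
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": 2e-4, "weight_decay": 0.01},  # weight decay is assumed
    },
}

# model is the TransformerBlock2Vec nn.Module; DeepSpeed then manages the
# optimizer, mixed precision, and gradient accumulation.
# engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config)
```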

Sizes, Times

  • Training time: ~5 days on a single RTX 4070 with DeepSpeed and FlashAttention.
  • Checkpoint size: ~30MB

Evaluation

Testing Data, Factors & Metrics

Testing Data

Evaluated on a held-out set of unseen Minecraft schematics and raw world data, including both user-made builds and terrain chunks.

Factors

  • Build type: Various structures (e.g., houses, farms, castles).
  • Chunk size: Up to 16x8x16.

Metrics

  • Accuracy (segmentation): 95%+ for build vs. terrain classification on unseen data.
  • Accuracy (masked-block prediction): 50-90% on the masking task the model was trained on, provided the chunks are not too far from the training distribution.
  • Loss: Cross-entropy loss on masked block prediction.

Results

On the masked-block prediction task the model was trained on (not itself a typical real-world use case), it correctly predicts roughly 40-90% of masked blocks at masking rates below 30%. With a segmentation head trained on its embeddings, the model achieves 95% accuracy in distinguishing builds from terrain, with clear separation in the 144-dimensional embedding space (visualized via t-SNE). It generalizes well to unseen builds but may struggle with highly unique or novel block distributions. See [the GitHub repo](https://github.com/Kingburrito777/TransformerBlock2Vec) for details.
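
For context, the segmentation result above comes from training a small head on top of frozen chunk embeddings. The sketch below shows one minimal way such a build-vs-terrain head could look; the hidden size, loss, and optimizer settings are assumptions, not the repository's actual head.

```python
import torch
import torch.nn as nn

class BuildVsTerrainHead(nn.Module):
    """Small classifier over frozen 144-dimensional chunk embeddings:
    logit > 0 -> build, logit < 0 -> terrain."""
    def __init__(self, dim=144):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, emb):             # emb: (batch, 144)
        return self.net(emb).squeeze(-1)

head = BuildVsTerrainHead()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
# embeddings: (N, 144) from the frozen encoder; labels: (N,) 1 = build, 0 = terrain
```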

Summary

TransformerBlock2Vec provides a robust embedding space for Minecraft builds, enabling accurate segmentation and potential for generative tasks with further development.

Model Examination

t-SNE visualizations of the 144-dimensional embedding space show distinct clusters for different build types, with terrain chunks separable from user-made structures. The model captures 3D spatial relationships effectively due to the 3D RoPE implementation.
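
A t-SNE plot of this kind can be reproduced from precomputed embeddings with scikit-learn, as in the sketch below; the file paths and perplexity are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder paths for precomputed 144-dimensional embeddings and labels.
embeddings = np.load("chunk_embeddings.npy")   # (N, 144)
labels = np.load("chunk_labels.npy")           # (N,) 0 = terrain, 1 = build

coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(embeddings)

plt.scatter(coords[labels == 0, 0], coords[labels == 0, 1], s=2, label="terrain")
plt.scatter(coords[labels == 1, 0], coords[labels == 1, 1], s=2, label="build")
plt.legend()
plt.title("t-SNE of 144-dimensional chunk embeddings")
plt.show()
```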

Technical Specifications

Model Architecture and Objective

  • Architecture: Transformer encoder with 6 layers, 8 attention heads, 144-dimensional embeddings, and SwiGLU feed-forward networks. Uses 3D RoPE for positional encoding and FlashAttention for efficiency (a structural sketch follows this list).
  • Objective: Masked block prediction (30% of tokens masked) to learn a 144-dimensional embedding space for Minecraft chunks.
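
The sketch below is a structural outline of the encoder described above (6 layers of 144-dimensional, 8-head attention with SwiGLU feed-forward blocks). It omits the 3D RoPE and FlashAttention wiring and uses standard PyTorch attention, so it is an approximation rather than the repository's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """SwiGLU feed-forward block: W3(SiLU(W1 x) * W2 x)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden)
        self.w2 = nn.Linear(dim, hidden)
        self.w3 = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.w3(F.silu(self.w1(x)) * self.w2(x))

class EncoderBlock(nn.Module):
    """Pre-norm transformer encoder layer; 3D RoPE and FlashAttention omitted."""
    def __init__(self, dim=144, heads=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = SwiGLU(dim, 4 * dim)  # hidden width is an assumption

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ff(self.norm2(x))

# Six such blocks over a block-ID embedding table (plus PAD/SEP/MASK tokens),
# with a linear head back to the vocabulary for masked-block prediction.
```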

I will release a technical breakdown with proper metrics and further details.

Compute Infrastructure

Hardware

  • Hardware Type: NVIDIA RTX 4070
  • Hours used: ~120 hours (5 days) for training

Software

  • PyTorch
  • DeepSpeed (for distributed training and mixed precision)
  • FlashAttention
  • Python 3.8+
  • PostgreSQL (for data storage)

Citation

BibTeX:

[tba]

Glossary

  • 3D RoPE: 3D Rotary Positional Embeddings, a positional encoding method for 3D voxel data.
  • Litematica: A Minecraft schematic file format for storing 3D build data.
  • Chunk: A 3D block region in Minecraft (up to 16x8x16 in this model).
  • Embedding Space: A 144-dimensional vector space representing Minecraft build chunks.

More Information

NOT AN OFFICIAL MINECRAFT PRODUCT. NOT APPROVED BY OR ASSOCIATED WITH MOJANG OR MICROSOFT. The project is part of the CraftGPT initiative to build generative AI for Minecraft.
