---
license: apache-2.0
language:
- en
datasets:
- bitmind/AFHQ
- ILSVRC/imagenet-1k
pipeline_tag: unconditional-image-generation
---

# Transformer AutoRegressive Flow Model

TarFlow, proposed by [Zhai et al., 2024], stacks autoregressive Transformer blocks (similar to MAF) to build non-volume-preserving affine coupling layers; combined with guidance and denoising, it achieves state-of-the-art results across multiple benchmarks. The associated code can be found at https://github.com/apple/ml-tarflow.
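
Concretely, each flow block applies a MAF-style autoregressive affine transform in which a causal Transformer predicts a shift and log-scale for every token from the tokens before it. The notation below is our own sketch of this idea, not the paper's exact formulation:

$$
z_t = \left(x_t - \mu_\theta(x_{<t})\right)\exp\left(-\alpha_\theta(x_{<t})\right),
\qquad
\log\left|\det\frac{\partial z}{\partial x}\right| = -\sum_t \alpha_\theta(x_{<t}).
$$

The non-zero log-determinant is what makes the map non-volume-preserving.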

Its sampling process is extremely slow, and we accelerate it in [Liu and Qin, 2025]; code is available at https://github.com/encoreus/GS-Jacobi_for_TarFlow. Since the trained model parameters were not released with the original paper, we retrained the TarFlow models and upload them here.
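
Sampling is slow because inverting an autoregressive flow is inherently sequential: token $t$ can only be recovered once all earlier tokens have been. Below is a minimal sketch of the naive inverse pass for one block, where `mu` and `log_scale` are hypothetical placeholders for the causal Transformer heads, not the repository's actual API:

```python
import torch

def naive_inverse(z, mu, log_scale):
    """Invert one autoregressive affine flow block token by token.

    z: (B, T, D) latents. mu/log_scale map the already-recovered
    prefix x[:, :t] to the shift/log-scale of token t, so each of
    the T steps needs a fresh Transformer call; this is the
    sequential bottleneck that GS-Jacobi iteration relaxes.
    """
    x = torch.zeros_like(z)
    for t in range(z.shape[1]):
        prefix = x[:, :t]
        x[:, t] = mu(prefix) + z[:, t] * torch.exp(log_scale(prefix))
    return x
```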

As mentioned in [Zhai et al., 2024], a TarFlow model can be denoted P-Ch-T-K-pε, with patch size P, model channel size Ch, number of autoregressive flow blocks T, number of attention layers in each flow block K, and the input noise variance pε that yields the best sampling quality for generation tasks.
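
The checkpoint filenames below encode these hyperparameters directly; here is a small illustrative parser (ours, not part of either code base):

```python
import re

def parse_tarflow_name(filename):
    """Parse a checkpoint name like 'afhq_model_8_768_8_8_0.07.pth'
    into the P-Ch-T-K-p_eps hyperparameters of the naming scheme."""
    m = re.match(
        r"(?P<dataset>\w+?)_model_(\d+)_(\d+)_(\d+)_(\d+)_([\d.]+)\.pth$",
        filename,
    )
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {filename}")
    dataset = m.group("dataset")
    P, Ch, T, K = (int(m.group(i)) for i in range(2, 6))
    p_eps = float(m.group(6))
    return dataset, P, Ch, T, K, p_eps

# -> ('afhq', 8, 768, 8, 8, 0.07): patch 8, 768 channels, 8 blocks,
#    8 attention layers per block, noise level 0.07
print(parse_tarflow_name("afhq_model_8_768_8_8_0.07.pth"))
```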

We trained five models (see the download sketch after this list):

- AFHQ (256x256) conditional: afhq_model_8_768_8_8_0.07.pth
- ImageNet (128x128) conditional: imagenet_model_4_1024_8_8_0.15.pth
- ImageNet (64x64) unconditional: imagenet64_model_2_768_8_8_0.05.pth
- ImageNet (64x64) conditional: imagenet_model_2_768_8_8_0.05.pth
- ImageNet (64x64) conditional: imagenet_model_4_1024_8_8_0.05.pth
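
The checkpoints can be fetched with `huggingface_hub`; a minimal sketch follows. Loading the weights into a model requires the matching P-Ch-T-K architecture from the TarFlow code base, so the `torch.load` step here only retrieves the state dict:

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch one of the checkpoints listed above from this repository.
ckpt_path = hf_hub_download(
    repo_id="encoreus/Transformer_Autoregressive_Flow",
    filename="afhq_model_8_768_8_8_0.07.pth",
)

# The .pth file is a PyTorch checkpoint; instantiate the matching
# architecture from https://github.com/apple/ml-tarflow and load
# this state dict into it.
state_dict = torch.load(ckpt_path, map_location="cpu")
```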

We also provide the statistics of the true data distributions, which can be used to compute FID:

- AFHQ (256x256) conditional: afhq_256_fid_stats.pth
- ImageNet (128x128) conditional: imagenet_128_fid_stats.pth
- ImageNet (64x64) unconditional: imagenet64_64_fid_stats.pth
- ImageNet (64x64) conditional: imagenet_64_fid_stats.pth
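
FID compares Gaussian fits (Inception feature mean and covariance) of generated samples against these reference stats. Below is a sketch of the final distance computation, assuming the files store a mean `mu` and covariance `sigma`; these key names are our assumption, so inspect the files to confirm their layout:

```python
import numpy as np
import torch
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrt(S1 @ S2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2).real  # drop tiny imaginary parts
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

# 'mu'/'sigma' keys are assumed, not a documented format.
stats = torch.load("afhq_256_fid_stats.pth", map_location="cpu")
mu_ref, sigma_ref = np.asarray(stats["mu"]), np.asarray(stats["sigma"])
# fid = frechet_distance(mu_gen, sigma_gen, mu_ref, sigma_ref)
```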

The sampling traces may look like this:

(Figure: sampling traces.) From top to bottom: Img128cond, Img64cond (patch 4), Img64uncond, AFHQ. From left to right: noise, Blocks 7-0, denoised image.

[1] Zhai, S., Zhang, R., Nakkiran, P., et al. Normalizing Flows are Capable Generative Models. arXiv preprint arXiv:2412.06329, 2024.

[2] Liu, B. and Qin, Z. Accelerate TarFlow Sampling with GS-Jacobi Iteration. arXiv preprint arXiv:2505.12849, 2025.
