xizaoqu committed
Commit 27ca8b3 · 1 Parent(s): dee805b

init

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
- README.md +226 -12
- __pycache__/app.cpython-310.pyc +0 -0
- algorithms/README.md +21 -0
- algorithms/__init__.py +0 -0
- algorithms/__pycache__/__init__.cpython-310.pyc +0 -0
- algorithms/common/README.md +5 -0
- algorithms/common/__init__.py +0 -0
- algorithms/common/__pycache__/__init__.cpython-310.pyc +0 -0
- algorithms/common/__pycache__/base_pytorch_algo.cpython-310.pyc +0 -0
- algorithms/common/base_algo.py +22 -0
- algorithms/common/base_pytorch_algo.py +253 -0
- algorithms/common/metrics/__init__.py +3 -0
- algorithms/common/metrics/__pycache__/__init__.cpython-310.pyc +0 -0
- algorithms/common/metrics/__pycache__/fid.cpython-310.pyc +0 -0
- algorithms/common/metrics/__pycache__/fvd.cpython-310.pyc +0 -0
- algorithms/common/metrics/__pycache__/lpips.cpython-310.pyc +0 -0
- algorithms/common/metrics/fid.py +1 -0
- algorithms/common/metrics/fvd.py +158 -0
- algorithms/common/metrics/lpips.py +1 -0
- algorithms/common/models/__init__.py +0 -0
- algorithms/common/models/cnn.py +141 -0
- algorithms/common/models/mlp.py +22 -0
- algorithms/worldmem/__init__.py +2 -0
- algorithms/worldmem/__pycache__/__init__.cpython-310.pyc +0 -0
- algorithms/worldmem/__pycache__/df_base.cpython-310.pyc +0 -0
- algorithms/worldmem/__pycache__/df_video.cpython-310.pyc +0 -0
- algorithms/worldmem/__pycache__/pose_prediction.cpython-310.pyc +0 -0
- algorithms/worldmem/df_base.py +307 -0
- algorithms/worldmem/df_video.py +908 -0
- algorithms/worldmem/models/__pycache__/attention.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/cameractrl_module.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/diffusion.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/dit.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/my_rotary_embedding_torch.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/pose_prediction.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/rotary_embedding_torch.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/utils.cpython-310.pyc +0 -0
- algorithms/worldmem/models/__pycache__/vae.cpython-310.pyc +0 -0
- algorithms/worldmem/models/attention.py +351 -0
- algorithms/worldmem/models/cameractrl_module.py +12 -0
- algorithms/worldmem/models/diffusion.py +520 -0
- algorithms/worldmem/models/dit.py +577 -0
- algorithms/worldmem/models/pose_prediction.py +42 -0
- algorithms/worldmem/models/rotary_embedding_torch.py +302 -0
- algorithms/worldmem/models/utils.py +163 -0
- algorithms/worldmem/models/vae.py +359 -0
- algorithms/worldmem/pose_prediction.py +374 -0
- app.py +365 -0
- app.sh +50 -0
- configurations/README.md +7 -0
README.md
CHANGED
@@ -1,12 +1,226 @@

# Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion

#### [[Project Website]](https://boyuan.space/diffusion-forcing) [[Paper]](https://arxiv.org/abs/2407.01392)

[Boyuan Chen<sup>1</sup>](https://boyuan.space/), [Diego Martí Monsó<sup>2</sup>](https://www.linkedin.com/in/diego-marti/?originalSubdomain=de), [Yilun Du<sup>1</sup>](https://yilundu.github.io/), [Max Simchowitz<sup>1</sup>](https://msimchowitz.github.io/), [Russ Tedrake<sup>1</sup>](https://groups.csail.mit.edu/locomotion/russt.html), [Vincent Sitzmann<sup>1</sup>](https://www.vincentsitzmann.com/) <br/>
<sup>1</sup>MIT <sup>2</sup>Technical University of Munich </br>

This is the v1.5 code base for our paper [Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion](https://boyuan.space/diffusion-forcing). The **main** branch contains our latest reimplementation with temporal attention (recommended), while the **paper** branch contains the RNN code used by the original paper for reproduction purposes.

Diffusion Forcing v2 is coming very soon! There is a stronger technique to achieve infinite, consistent video generation uniquely enabled by diffusion forcing. We are actively investigating that, so please stay tuned. We will also release latent diffusion code by then that allows you to scale up to higher resolution / longer videos!



```
@misc{chen2024diffusionforcingnexttokenprediction,
      title={Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion},
      author={Boyuan Chen and Diego Marti Monso and Yilun Du and Max Simchowitz and Russ Tedrake and Vincent Sitzmann},
      year={2024},
      eprint={2407.01392},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2407.01392},
}
```

# Project Instructions

## Setup

If you want to use our latest improved implementation for video and planning with temporal attention instead of an RNN, stay on this branch. If you are instead interested in reproducing the claims of the original paper, switch to the branch used by the original paper via `git checkout paper`.

Run `conda create python=3.10 -n diffusion-forcing` to create the environment.
Run `conda activate diffusion-forcing` to activate this environment.

Install dependencies for time series, video and robotics:

```
pip install -r requirements.txt
```

[Sign up](https://wandb.ai/site) for a wandb account for cloud logging and checkpointing. In the command line, run `wandb login` to log in.

Then modify the wandb entity in `configurations/config.yaml` to your wandb account.

Optionally, if you want to do maze planning, install the following extra dependencies (complicated due to d4rl's outdated requirements). This involves first installing mujoco 210 and then running

```
pip install -r extra_requirements.txt
```

## Quick start with pretrained checkpoints

Since the datasets are huge, we provide a mini subset and pre-trained checkpoints for you to quickly test out our model! To do so, download the mini dataset and checkpoints from [here](https://drive.google.com/file/d/1xAOQxWcLzcFyD4zc0_rC9jGXe_uaHb7b/view?usp=sharing) to the project root and extract with `tar -xzvf quickstart_atten.tar.gz`. Files should appear in `data` and `outputs/xxx.ckpt`. Make sure you also `git pull` upstream to use the latest version of the code if you forked before the checkpoint release!

Then run the following commands and go to the wandb panel to see the results.

### Video Prediction:

Our visualization is side by side, with the prediction on the left and the ground truth on the right. However, the ground truth is not expected to align with the prediction since the sequence is highly stochastic; it is provided only to give an idea of quality.

Autoregressively generate a Minecraft video at 1x the length it was trained on:
`python -m main +name=sample_minecraft_pretrained load=outputs/minecraft.ckpt experiment.tasks=[validation]`

To let the model roll out **longer than it was trained on**, simply append `dataset.validation_multiplier=8` to the above command, and it will roll out `8x` longer than the maximum sequence length it was trained on.

The above checkpoint is trained for 100K steps with a small number of frames. We've already verified that diffusion forcing works in the latent diffusion setting and can be extended to many more tokens without sacrificing compositionality (with some additional techniques outside this repo)! Stay tuned for our next project!

### Maze Planning:

The maze planning setting has changed a bit as we gained more insights; please see the corresponding paragraphs in the training section for details. We haven't reimplemented MCTG yet, but you can already see nice visualizations in the wandb logs.

Medium Maze

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_medium dataset.action_mean=[] dataset.action_std=[] dataset.observation_mean=[3.5092521,3.4765592] dataset.observation_std=[1.3371079,1.52102] load=outputs/maze2d_medium_x.ckpt experiment.tasks=[validation] algorithm.guidance_scale=3 +name=maze2d_medium_x_sampling`

Large Maze

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_large dataset.observation_mean=[3.7296331,5.3047247] dataset.observation_std=[1.8070312,2.5687592] dataset.action_mean=[] dataset.action_std=[] load=outputs/maze2d_large_x.ckpt experiment.tasks=[validation] algorithm.guidance_scale=2 +name=maze2d_large_x_sampling`

We also explored a couple more settings but haven't reimplemented everything in the original paper yet. If you are interested in those checkpoints, see the source code of this README file for ckpt loading instructions that are commented out.

<!--
Here is also a position + velocity setting ckpt, but we don't recommend this because diffusing a quantity and its derivative together creates a bad optimization landscape.

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_medium dataset.observation_std=[2.6742158,3.04204,9.3630628,9.4774808] dataset.action_mean=[] dataset.action_std=[] load=outputs/maze2d_medium_xv.ckpt experiment.tasks=[validation] algorithm.guidance_scale=4 +name=maze2d_medium_xv_sampling`

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_large dataset.observation_std=[3.6140624,5.1375184,9.747382,10.5974788] dataset.action_mean=[] dataset.action_std=[] load=outputs/maze2d_large_xv.ckpt experiment.tasks=[validation] algorithm.guidance_scale=4 +name=maze2d_large_xv_sampling`

Here is also a ckpt where we take diffused actions, a challenging setting that's not done in prior papers. We haven't got it working as well as the original RNN version of diffusion forcing, but it does have okay numbers. You can tune up the guidance scale a bit.

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_medium dataset.observation_std=[2.67,3.04,8,8] dataset.action_std=[6,6] load=outputs/maze2d_medium_xva.ckpt experiment.tasks=[validation] algorithm.guidance_scale=2 algorithm.open_loop_horizon=10 +name=maze2d_medium_xva_sampling`

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_large dataset.observation_std=[3.62,5.14,9.76,10.6] dataset.action_std=[3,3] load=outputs/maze2d_large_xva.ckpt experiment.tasks=[validation] algorithm.guidance_scale=2 algorithm.open_loop_horizon=10 +name=maze2d_large_xva_sampling` -->

## Training

### Video

Video prediction requires downloading giant datasets. First, if you downloaded the mini subset following the `Quick start with pretrained checkpoints` section, delete the mini subset folders `data/minecraft` and `data/dmlab`, because we have to download the whole dataset this time. The python code will download the dataset for you if it doesn't already exist. Due to the slowness of the [source](https://github.com/wilson1yan/teco), this may take a couple of days. If you prefer to do it yourself via bash script, please refer to the bash scripts in the original [TECO dataset](https://github.com/wilson1yan/teco), use `dmlab.sh` and `minecraft.sh` from the Dataset section of their README, and maybe split the bash scripts into parallel scripts.

Then just run the corresponding commands:

#### Minecraft

`python -m main +name=your_experiment_name algorithm=df_video dataset=video_minecraft`

#### DMLab

`python -m main +name=your_experiment_name algorithm=df_video dataset=video_dmlab algorithm.weight_decay=1e-3 algorithm.diffusion.architecture.network_size=48 algorithm.diffusion.architecture.attn_dim_head=32 algorithm.diffusion.architecture.attn_resolutions=[8,16,32,64] algorithm.diffusion.beta_schedule=cosine`

#### No causal masking

Simply append `algorithm.causal=False` to your command.

#### Play with sampling

Please take a look at the "Load a checkpoint to eval" paragraph to understand how to load a checkpoint with `load=`. Then, run the exact training command with `experiment.tasks=[validation] load={wandb_run_id}` appended to load a checkpoint and experiment with sampling.

To see how you can roll out longer than the sequence length trained on, see the instructions in the `Quick start with pretrained checkpoints` section. Keep in mind that rolling out infinitely without a sliding window is a property of the original RNN implementation on the `paper` branch; this version has to use a sliding window since it uses temporal attention.

By default, we run autoregressive sampling with stabilization. To sample the next 2 tokens jointly, you can append the following to the above command: `algorithm.scheduling_matrix=full_sequence algorithm.chunk_size=2`.

## Maze Planning

For those who only wish to reproduce the original paper instead of the transformer architecture, please check out the `paper` branch of the code instead.

**Medium Maze**

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_medium dataset.action_mean=[] dataset.action_std=[] dataset.observation_mean=[3.5092521,3.4765592] dataset.observation_std=[1.3371079,1.52102] +name=maze2d_medium_x`

**Large Maze**

`python -m main experiment=exp_planning algorithm=df_planning dataset=maze2d_large dataset.observation_mean=[3.7296331,5.3047247] dataset.observation_std=[1.8070312,2.5687592] dataset.action_mean=[] dataset.action_std=[] +name=maze2d_large_x`

**Run planning after model is trained**

Please take a look at the "Load a checkpoint to eval" paragraph to understand how to load a checkpoint with `load=`. To sample, simply append `load={wandb_id_of_above_runs} experiment.tasks=[validation] algorithm.guidance_scale=2 +name=maze2d_sampling` to the above command after training. Feel free to tune the `guidance_scale` from 1 to 5.

This version of maze planning uses a different version of diffusion forcing from the original paper. While working on the follow-up to diffusion forcing, we realized that training with independent noise also constructs a smooth interpolation between causal and non-causal models, since we can mask out the future with complete noise (fully causal) or partial noise (interpolation). The best part is that you can still account for causal uncertainty via pyramid sampling in this setting, by masking out tokens at different noise levels, and you can still have a flexible horizon because you can tell the model that padded entries are pure noise, a unique ability of diffusion forcing.

We also reflected a bit on the environment and concluded that the original metric isn't necessarily a good one, because maze planning should reward those who can plan the fastest route to the goal, not a slowly walking agent that reaches it at the end of the episode. The dataset never contains data of staying at the goal, so agents are supposed to walk away after reaching the goal. I think [Diffuser](https://arxiv.org/abs/2205.09991) had an unfair advantage of just generating slow plans, which happened to let the agent stay in the neighborhood of the goal for longer and collect very high reward, exploiting flaws in the environment design (a good design would penalize taking longer to reach the goal). So, in this version of the code, we just optimize for flexible-horizon planning that tries to reach the goal as soon as possible, and the planner will automatically come back to the goal if it leaves, since staying is never in the dataset. You can see the new metrics we designed in the wandb logging interface.

## Timeseries and Robotics

Please check out the `paper` branch for the code used by the original paper. If I have time later, I will reimplement these two domains with transformers as well to complete this branch.

# Change Log

| Date      | Notes                                                                                            |
| --------- | :----------------------------------------------------------------------------------------------: |
| Jul/30/24 | Upgrade RNN to temporal attention, move original code to 'paper' branch                           |
| Jul/03/24 | Initial release of the code. Email me if you have questions or find any errors in this version.   |

# Infra instructions

This repo is forked from [Boyuan Chen](https://boyuan.space/)'s research template [repo](https://github.com/buoyancy99/research-template). By its MIT license, you must keep the above sentence in `README.md` and the `LICENSE` file to credit the author.

All experiments can be launched via `python -m main +name=xxxx {options}`; you can find more details later in this article.

The code base will automatically use CUDA or your MacBook M1 GPU when available.

For slurm clusters, e.g. MIT Supercloud, you can run `python -m main cluster=mit_supercloud {options}` on the login node.
It will automatically generate slurm scripts and run them for you on a compute node. Even if compute nodes are offline,
the script will still automatically sync wandb logging to the cloud with <1 min latency. It's also easy to add your own slurm cluster
by following the `Add slurm clusters` section.

## Modify for your own project

First, create a new repository with this template. Make sure the new repository has the name you want to use for wandb
logging.

Add your method and baselines in `algorithms` following the `algorithms/README.md` as well as the example code in
`algorithms/diffusion_forcing/df_video.py`. For pytorch experiments, write your algorithm as a [pytorch lightning](https://github.com/Lightning-AI/lightning)
`pl.LightningModule`, which has extensive
[documentation](https://lightning.ai/docs/pytorch/stable/). For a quick start, read "Define a LightningModule" in this [link](https://lightning.ai/docs/pytorch/stable/starter/introduction.html). Finally, add a yaml config file to `configurations/algorithm` imitating that of `configurations/algorithm/df_video.yaml`, for each algorithm you added.

Add your dataset in `datasets` following the `datasets/README.md` as well as the example code in
`datasets/video`. Finally, add a yaml config file to `configurations/dataset` imitating that of
`configurations/dataset/video_dmlab.yaml`, for each dataset you added.

Add your experiment in `experiments` following the `experiments/README.md` or following the example code in
`experiments/exp_video.py`. Then register your experiment in `experiments/__init__.py`.
Finally, add a yaml config file to `configurations/experiment` imitating that of
`configurations/experiment/exp_video.yaml`, for each experiment you added.

Modify `configurations/config.yaml` to set `algorithm` to the yaml file you want to use in `configurations/algorithm`;
set `experiment` to the yaml file you want to use in `configurations/experiment`; set `dataset` to the yaml file you
want to use in `configurations/dataset`, or to `null` if no dataset is needed. Note that the fields should not contain the
`.yaml` suffix.

You are all set!

`cd` into your project root. Now you can launch your new experiment with `python main.py +name=<name_your_experiment>`. You can run baselines or
different datasets by adding arguments like `algorithm=xxx` or `dataset=xxx`. You can also override any `yaml` configurations by following the next section.

One special note: if you want to define a new task for your experiment (e.g. other than `training` and `test`), you can define it as a method in your experiment class and use `experiment.tasks=[task_name]` to run it. Let's say you have a `generate_dataset` task before the `training` task and you implemented it in the experiment class; you can then run `python -m main +name=xxxx experiment.tasks=[generate_dataset,training]` to execute it before training, as in the sketch below.
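A minimal sketch of what such a task method could look like (the class and method bodies below are illustrative, not actual repo code):

```python
# Illustrative sketch only: adding a custom `generate_dataset` task to an experiment class.
# `MyVideoExperiment` and its method bodies are hypothetical examples.
from omegaconf import DictConfig


class MyVideoExperiment:  # in practice this would subclass BaseLightningExperiment
    def __init__(self, cfg: DictConfig):
        self.cfg = cfg

    def generate_dataset(self):
        # Runs first when you pass `experiment.tasks=[generate_dataset,training]`.
        print("generating dataset ...")

    def training(self):
        # Built-in task name; normally provided by BaseLightningExperiment.
        print("training ...")
```

Each name in `experiment.tasks` is simply looked up as a method on the experiment class and called in order.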

## Pass in arguments

We use [hydra](https://hydra.cc) instead of `argparse` to configure arguments at every code level. You can either write a static config in the `configurations` folder or, at runtime,
[override part of your static config](https://hydra.cc/docs/tutorials/basic/your_first_app/simple_cli/) with command line arguments.

For example, the arguments `algorithm=example_classifier experiment.lr=1e-3` will override the `lr` variable in `configurations/experiment/example_classifier.yaml`. The argument `wandb.mode` will override the `mode` under the `wandb` namespace in the file `configurations/config.yaml`.

All static configs and runtime overrides will be logged to the cloud automatically.
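For intuition, here is a minimal, self-contained Hydra entry point showing how such overrides reach the config object. It is a sketch only, not the repo's actual `main.py`; the config names mirror the layout described above.

```python
# Minimal standalone Hydra sketch (illustrative; not the repo's main.py).
import hydra
from omegaconf import DictConfig, OmegaConf


@hydra.main(version_base=None, config_path="configurations", config_name="config")
def main(cfg: DictConfig) -> None:
    # Running `python sketch.py experiment.lr=1e-3 wandb.mode=offline` overrides
    # cfg.experiment.lr and cfg.wandb.mode before this function is called.
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    main()
```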

## Resume a checkpoint & logging

For machine learning experiments, all checkpoints and logs are logged to the cloud automatically so you can resume them on another server. Simply append `resume={wandb_run_id}` to your command line arguments to resume a run. The run_id can be found in the URL of a wandb run in the wandb dashboard. By default, the latest checkpoint in a run is stored indefinitely, and earlier checkpoints in the run will be deleted after 5 days to save storage.

On the other hand, sometimes you may want to start a new run with a different run id but still load a prior ckpt. This can be done by setting the `load={wandb_run_id / ckpt path}` flag.

## Load a checkpoint to eval

The argument `experiment.tasks=[task_name1,task_name2]` (note the `[]` brackets are needed here) allows you to select a sequence of tasks to execute, such as `training`, `validation` and `test`. Therefore, for testing a machine learning ckpt, you may run `python -m main load={your_wandb_run_id} experiment.tasks=[test]`.

More generally, the task names are the corresponding method names of your experiment class. For `BaseLightningExperiment`, we already defined three methods `training`, `validation` and `test` for you, but you can also define your own tasks by adding methods to your experiment class under the intended task names.

## Debug

We provide a useful debug flag which you can enable with `python main.py debug=True`. This will enable numerical error tracking as well as set `cfg.debug` to `True` for your experiment, algorithm and dataset classes. However, this debug flag will make ML code very slow as it automatically tracks all parameters / gradients!

## Add slurm clusters

It's very easy to add your own slurm clusters by adding a yaml file in `configurations/cluster`. You can take a look
at `configurations/cluster/mit_vision.yaml` for an example.
__pycache__/app.cpython-310.pyc
ADDED
Binary file (11.5 kB).
algorithms/README.md
ADDED
@@ -0,0 +1,21 @@

# algorithms

The `algorithms` folder is designed to contain implementations of algorithms or models.
Content in `algorithms` can be loosely grouped components (e.g. models) or an algorithm that already has all
components chained together (e.g. a Lightning Module, an RL algo).
You should create a folder named after your own algorithm or baselines in it.

Two examples can be found in the `examples` subfolder.

The `common` subfolder is designed to contain general-purpose classes that are useful for many projects, e.g. an MLP.

You should not run any `.py` file from the algorithms folder.
Instead, write unit tests / debug python files in `debug` and launch scripts in `experiments`.

You are discouraged from putting visualization utilities in algorithms, as those should go to `utils` in the project root.

Each algorithm class takes in a DictConfig `cfg` in its `__init__`, which allows you to pass in arguments via a configuration file in `configurations/algorithm` or a [command line override](https://hydra.cc/docs/tutorials/basic/your_first_app/simple_cli/).

---

This repo is forked from [Boyuan Chen](https://boyuan.space/)'s research template [repo](https://github.com/buoyancy99/research-template). By its MIT license, you must keep the above sentence in `README.md` and the `LICENSE` file to credit the author.
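As a rough sketch of the `cfg` pattern described in `algorithms/README.md` above (the class and config keys are illustrative, not part of this commit):

```python
# Illustrative example of an algorithm class consuming its DictConfig.
from omegaconf import DictConfig, OmegaConf


class MyAlgo:
    def __init__(self, cfg: DictConfig):
        self.cfg = cfg
        self.lr = cfg.lr                  # e.g. set in configurations/algorithm/my_algo.yaml
        self.hidden_dim = cfg.hidden_dim  # overridable via `algorithm.hidden_dim=256`


if __name__ == "__main__":
    algo = MyAlgo(OmegaConf.create({"lr": 1e-4, "hidden_dim": 128}))
    print(algo.lr, algo.hidden_dim)
```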
algorithms/__init__.py
ADDED
File without changes
algorithms/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (150 Bytes).
algorithms/common/README.md
ADDED
@@ -0,0 +1,5 @@

This folder contains models / algorithms that are considered general-purpose across many algorithms.

---

This repo is forked from [Boyuan Chen](https://boyuan.space/)'s research template [repo](https://github.com/buoyancy99/research-template). By its MIT license, you must keep the above sentence in `README.md` and the `LICENSE` file to credit the author.
algorithms/common/__init__.py
ADDED
File without changes
algorithms/common/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (157 Bytes).
algorithms/common/__pycache__/base_pytorch_algo.cpython-310.pyc
ADDED
Binary file (9.12 kB).
algorithms/common/base_algo.py
ADDED
@@ -0,0 +1,22 @@

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional, Tuple, Union

from omegaconf import DictConfig


class BaseAlgo(ABC):
    """
    A base class for generic algorithms.
    """

    def __init__(self, cfg: DictConfig):
        super().__init__()
        self.cfg = cfg
        self.debug = self.cfg.debug

    @abstractmethod
    def run(self, *args: Any, **kwargs: Any) -> Any:
        """
        Run the algorithm.
        """
        raise NotImplementedError
```
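A minimal usage sketch (a hypothetical subclass; not part of this commit):

```python
# Hypothetical subclass of BaseAlgo, showing the intended use of cfg/debug and run().
from omegaconf import OmegaConf

from algorithms.common.base_algo import BaseAlgo


class CountingAlgo(BaseAlgo):
    def run(self, n_steps: int = 3):
        # Toy algorithm: just report how many steps it would execute.
        return f"would run {n_steps} steps (debug={self.debug})"


if __name__ == "__main__":
    algo = CountingAlgo(OmegaConf.create({"debug": True}))
    print(algo.run(5))
```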
algorithms/common/base_pytorch_algo.py
ADDED
@@ -0,0 +1,253 @@

```python
from abc import ABC, abstractmethod
import warnings
from typing import Any, Union, Sequence, Optional

from lightning.pytorch.utilities.types import STEP_OUTPUT
from omegaconf import DictConfig
import lightning.pytorch as pl
import torch
import numpy as np
from PIL import Image
import wandb
import einops


class BasePytorchAlgo(pl.LightningModule, ABC):
    """
    A base class for Pytorch algorithms using Pytorch Lightning.
    See https://lightning.ai/docs/pytorch/stable/starter/introduction.html for more details.
    """

    def __init__(self, cfg: DictConfig):
        super().__init__()
        self.cfg = cfg
        self.debug = self.cfg.debug
        self._build_model()

    @abstractmethod
    def _build_model(self):
        """
        Create all pytorch nn.Modules here.
        """
        raise NotImplementedError

    @abstractmethod
    def training_step(self, *args: Any, **kwargs: Any) -> STEP_OUTPUT:
        r"""Here you compute and return the training loss and some additional metrics for e.g. the progress bar or
        logger.

        Args:
            batch: The output of your data iterable, normally a :class:`~torch.utils.data.DataLoader`.
            batch_idx: The index of this batch.
            dataloader_idx: (only if multiple dataloaders used) The index of the dataloader that produced this batch.

        Return:
            Any of these options:
            - :class:`~torch.Tensor` - The loss tensor
            - ``dict`` - A dictionary. Can include any keys, but must include the key ``'loss'``.
            - ``None`` - Skip to the next batch. This is only supported for automatic optimization.
              This is not supported for multi-GPU, TPU, IPU, or DeepSpeed.

        In this step you'd normally do the forward pass and calculate the loss for a batch.
        You can also do fancier things like multiple forward passes or something model specific.

        Example::

            def training_step(self, batch, batch_idx):
                x, y, z = batch
                out = self.encoder(x)
                loss = self.loss(out, x)
                return loss

        To use multiple optimizers, you can switch to 'manual optimization' and control their stepping:

        .. code-block:: python

            def __init__(self):
                super().__init__()
                self.automatic_optimization = False


            # Multiple optimizers (e.g.: GANs)
            def training_step(self, batch, batch_idx):
                opt1, opt2 = self.optimizers()

                # do training_step with encoder
                ...
                opt1.step()
                # do training_step with decoder
                ...
                opt2.step()

        Note:
            When ``accumulate_grad_batches`` > 1, the loss returned here will be automatically
            normalized by ``accumulate_grad_batches`` internally.

        """
        return super().training_step(*args, **kwargs)

    def configure_optimizers(self):
        """
        Return an optimizer. If you need to use more than one optimizer, refer to pytorch lightning documentation:
        https://lightning.ai/docs/pytorch/stable/common/optimization.html
        """
        parameters = self.parameters()
        return torch.optim.Adam(parameters, lr=self.cfg.lr)

    def log_video(
        self,
        key: str,
        video: Union[np.ndarray, torch.Tensor],
        mean: Union[np.ndarray, torch.Tensor, Sequence, float] = None,
        std: Union[np.ndarray, torch.Tensor, Sequence, float] = None,
        fps: int = 5,
        format: str = "mp4",
    ):
        """
        Log video to wandb. WandbLogger in pytorch lightning does not support video logging yet, so we call wandb directly.

        Args:
            video: a numpy array or tensor, either in form (time, channel, height, width) or in the form
                (batch, time, channel, height, width). The content must be be in 0-255 if under dtype uint8
                or [0, 1] otherwise.
            mean: optional, the mean to unnormalize video tensor, assuming unnormalized data is in [0, 1].
            std: optional, the std to unnormalize video tensor, assuming unnormalized data is in [0, 1].
            key: the name of the video.
            fps: the frame rate of the video.
            format: the format of the video. Can be either "mp4" or "gif".
        """

        if isinstance(video, torch.Tensor):
            video = video.detach().cpu().numpy()

        expand_shape = [1] * (len(video.shape) - 2) + [3, 1, 1]
        if std is not None:
            if isinstance(std, (float, int)):
                std = [std] * 3
            if isinstance(std, torch.Tensor):
                std = std.detach().cpu().numpy()
            std = np.array(std).reshape(*expand_shape)
            video = video * std
        if mean is not None:
            if isinstance(mean, (float, int)):
                mean = [mean] * 3
            if isinstance(mean, torch.Tensor):
                mean = mean.detach().cpu().numpy()
            mean = np.array(mean).reshape(*expand_shape)
            video = video + mean

        if video.dtype != np.uint8:
            video = np.clip(video, a_min=0, a_max=1) * 255
            video = video.astype(np.uint8)

        self.logger.experiment.log(
            {
                key: wandb.Video(video, fps=fps, format=format),
            },
            step=self.global_step,
        )

    def log_image(
        self,
        key: str,
        image: Union[np.ndarray, torch.Tensor, Image.Image, Sequence[Image.Image]],
        mean: Union[np.ndarray, torch.Tensor, Sequence, float] = None,
        std: Union[np.ndarray, torch.Tensor, Sequence, float] = None,
        **kwargs: Any,
    ):
        """
        Log image(s) using WandbLogger.
        Args:
            key: the name of the video.
            image: a single image or a batch of images. If a batch of images, the shape should be (batch, channel, height, width).
            mean: optional, the mean to unnormalize image tensor, assuming unnormalized data is in [0, 1].
            std: optional, the std to unnormalize tensor, assuming unnormalized data is in [0, 1].
            kwargs: optional, WandbLogger log_image kwargs, such as captions=xxx.
        """
        if isinstance(image, Image.Image):
            image = [image]
        elif len(image) and not isinstance(image[0], Image.Image):
            if isinstance(image, torch.Tensor):
                image = image.detach().cpu().numpy()

            if len(image.shape) == 3:
                image = image[None]

            if image.shape[1] == 3:
                if image.shape[-1] == 3:
                    warnings.warn(f"Two channels in shape {image.shape} have size 3, assuming channel first.")
                image = einops.rearrange(image, "b c h w -> b h w c")

            if std is not None:
                if isinstance(std, (float, int)):
                    std = [std] * 3
                if isinstance(std, torch.Tensor):
                    std = std.detach().cpu().numpy()
                std = np.array(std)[None, None, None]
                image = image * std
            if mean is not None:
                if isinstance(mean, (float, int)):
                    mean = [mean] * 3
                if isinstance(mean, torch.Tensor):
                    mean = mean.detach().cpu().numpy()
                mean = np.array(mean)[None, None, None]
                image = image + mean

            if image.dtype != np.uint8:
                image = np.clip(image, a_min=0.0, a_max=1.0) * 255
                image = image.astype(np.uint8)
            image = [img for img in image]

        self.logger.log_image(key=key, images=image, **kwargs)

    def log_gradient_stats(self):
        """Log gradient statistics such as the mean or std of norm."""

        with torch.no_grad():
            grad_norms = []
            gpr = []  # gradient-to-parameter ratio
            for param in self.parameters():
                if param.grad is not None:
                    grad_norms.append(torch.norm(param.grad).item())
                    gpr.append(torch.norm(param.grad) / torch.norm(param))
            if len(grad_norms) == 0:
                return
            grad_norms = torch.tensor(grad_norms)
            gpr = torch.tensor(gpr)
            self.log_dict(
                {
                    "train/grad_norm/min": grad_norms.min(),
                    "train/grad_norm/max": grad_norms.max(),
                    "train/grad_norm/std": grad_norms.std(),
                    "train/grad_norm/mean": grad_norms.mean(),
                    "train/grad_norm/median": torch.median(grad_norms),
                    "train/gpr/min": gpr.min(),
                    "train/gpr/max": gpr.max(),
                    "train/gpr/std": gpr.std(),
                    "train/gpr/mean": gpr.mean(),
                    "train/gpr/median": torch.median(gpr),
                }
            )

    def register_data_mean_std(
        self, mean: Union[str, float, Sequence], std: Union[str, float, Sequence], namespace: str = "data"
    ):
        """
        Register mean and std of data as tensor buffer.

        Args:
            mean: the mean of data.
            std: the std of data.
            namespace: the namespace of the registered buffer.
        """
        for k, v in [("mean", mean), ("std", std)]:
            if isinstance(v, str):
                if v.endswith(".npy"):
                    v = torch.from_numpy(np.load(v))
                elif v.endswith(".pt"):
                    v = torch.load(v)
                else:
                    raise ValueError(f"Unsupported file type {v.split('.')[-1]}.")
            else:
                v = torch.tensor(v)
            self.register_buffer(f"{namespace}_{k}", v.float().to(self.device))
```
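As a rough illustration of how this base class is meant to be subclassed (a minimal sketch; `ToyAlgo` and the cfg keys used here are illustrative assumptions, not part of this commit):

```python
# Hypothetical subclass of BasePytorchAlgo: implement _build_model and training_step.
import torch.nn as nn
import torch.nn.functional as F
from omegaconf import OmegaConf

from algorithms.common.base_pytorch_algo import BasePytorchAlgo


class ToyAlgo(BasePytorchAlgo):
    def _build_model(self):
        # Required by the abstract method: create all nn.Modules here.
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self.net(x), y)
        self.log("training/loss", loss)
        return loss


if __name__ == "__main__":
    cfg = OmegaConf.create({"debug": False, "lr": 1e-3})
    algo = ToyAlgo(cfg)  # the default configure_optimizers will use Adam with cfg.lr
```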
algorithms/common/metrics/__init__.py
ADDED
@@ -0,0 +1,3 @@

```python
from .fid import FrechetInceptionDistance
from .lpips import LearnedPerceptualImagePatchSimilarity
from .fvd import FrechetVideoDistance
```
algorithms/common/metrics/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (332 Bytes).
algorithms/common/metrics/__pycache__/fid.cpython-310.pyc
ADDED
Binary file (231 Bytes).
algorithms/common/metrics/__pycache__/fvd.cpython-310.pyc
ADDED
Binary file (4.56 kB).
algorithms/common/metrics/__pycache__/lpips.cpython-310.pyc
ADDED
Binary file (247 Bytes).
algorithms/common/metrics/fid.py
ADDED
@@ -0,0 +1 @@

```python
from torchmetrics.image.fid import FrechetInceptionDistance
```
algorithms/common/metrics/fvd.py
ADDED
@@ -0,0 +1,158 @@

```python
"""
Adopted from https://github.com/cvpr2022-stylegan-v/stylegan-v
Verified to be the same as tf version by https://github.com/universome/fvd-comparison
"""

import io
import re
import requests
import html
import hashlib
import urllib
import urllib.request
from typing import Any, List, Tuple, Union, Dict
import scipy

import torch
import torch.nn as nn
import numpy as np


def open_url(
    url: str,
    num_attempts: int = 10,
    verbose: bool = True,
    return_filename: bool = False,
) -> Any:
    """Download the given URL and return a binary-mode file object to access the data."""
    assert num_attempts >= 1

    # Doesn't look like an URL scheme so interpret it as a local filename.
    if not re.match("^[a-z]+://", url):
        return url if return_filename else open(url, "rb")

    # Handle file URLs. This code handles unusual file:// patterns that
    # arise on Windows:
    #
    # file:///c:/foo.txt
    #
    # which would translate to a local '/c:/foo.txt' filename that's
    # invalid. Drop the forward slash for such pathnames.
    #
    # If you touch this code path, you should test it on both Linux and
    # Windows.
    #
    # Some internet resources suggest using urllib.request.url2pathname() but
    # but that converts forward slashes to backslashes and this causes
    # its own set of problems.
    if url.startswith("file://"):
        filename = urllib.parse.urlparse(url).path
        if re.match(r"^/[a-zA-Z]:", filename):
            filename = filename[1:]
        return filename if return_filename else open(filename, "rb")

    url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest()

    # Download.
    url_name = None
    url_data = None
    with requests.Session() as session:
        if verbose:
            print("Downloading %s ..." % url, end="", flush=True)
        for attempts_left in reversed(range(num_attempts)):
            try:
                with session.get(url) as res:
                    res.raise_for_status()
                    if len(res.content) == 0:
                        raise IOError("No data received")

                    if len(res.content) < 8192:
                        content_str = res.content.decode("utf-8")
                        if "download_warning" in res.headers.get("Set-Cookie", ""):
                            links = [
                                html.unescape(link)
                                for link in content_str.split('"')
                                if "export=download" in link
                            ]
                            if len(links) == 1:
                                url = requests.compat.urljoin(url, links[0])
                                raise IOError("Google Drive virus checker nag")
                        if "Google Drive - Quota exceeded" in content_str:
                            raise IOError(
                                "Google Drive download quota exceeded -- please try again later"
                            )

                    match = re.search(
                        r'filename="([^"]*)"',
                        res.headers.get("Content-Disposition", ""),
                    )
                    url_name = match[1] if match else url
                    url_data = res.content
                    if verbose:
                        print(" done")
                    break
            except KeyboardInterrupt:
                raise
            except:
                if not attempts_left:
                    if verbose:
                        print(" failed")
                    raise
                if verbose:
                    print(".", end="", flush=True)

    # Return data as file object.
    assert not return_filename
    return io.BytesIO(url_data)


def compute_fvd(feats_fake: np.ndarray, feats_real: np.ndarray) -> float:
    mu_gen, sigma_gen = compute_stats(feats_fake)
    mu_real, sigma_real = compute_stats(feats_real)

    m = np.square(mu_gen - mu_real).sum()
    s, _ = scipy.linalg.sqrtm(
        np.dot(sigma_gen, sigma_real), disp=False
    )  # pylint: disable=no-member
    fid = np.real(m + np.trace(sigma_gen + sigma_real - s * 2))

    return float(fid)


def compute_stats(feats: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    mu = feats.mean(axis=0)  # [d]
    sigma = np.cov(feats, rowvar=False)  # [d, d]

    return mu, sigma


class FrechetVideoDistance(nn.Module):
    def __init__(self):
        super().__init__()
        detector_url = (
            "https://www.dropbox.com/s/ge9e5ujwgetktms/i3d_torchscript.pt?dl=1"
        )
        # Return raw features before the softmax layer.
        self.detector_kwargs = dict(rescale=False, resize=True, return_features=True)
        with open_url(detector_url, verbose=False) as f:
            self.detector = torch.jit.load(f).eval()

    @torch.no_grad()
    def compute(self, videos_fake: torch.Tensor, videos_real: torch.Tensor):
        """
        :param videos_fake: predicted video tensor of shape (frame, batch, channel, height, width)
        :param videos_real: ground-truth observation tensor of shape (frame, batch, channel, height, width)
        :return:
        """
        n_frames, batch_size, c, h, w = videos_fake.shape
        if n_frames < 2:
            raise ValueError("Video must have more than 1 frame for FVD")

        videos_fake = videos_fake.permute(1, 2, 0, 3, 4).contiguous()
        videos_real = videos_real.permute(1, 2, 0, 3, 4).contiguous()

        # detector takes in tensors of shape [batch_size, c, video_len, h, w] with range -1 to 1
        feats_fake = self.detector(videos_fake, **self.detector_kwargs).cpu().numpy()
        feats_real = self.detector(videos_real, **self.detector_kwargs).cpu().numpy()

        return compute_fvd(feats_fake, feats_real)
```
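A hedged usage sketch: the shapes and [-1, 1] value range follow the docstring above, the random tensors are purely illustrative, and constructing the metric downloads the I3D torchscript detector on first use.

```python
# Illustrative FVD usage: (frame, batch, channel, height, width) tensors in [-1, 1].
import torch

from algorithms.common.metrics.fvd import FrechetVideoDistance

fvd = FrechetVideoDistance()                     # downloads the I3D detector
videos_fake = torch.rand(16, 8, 3, 64, 64) * 2 - 1
videos_real = torch.rand(16, 8, 3, 64, 64) * 2 - 1
print(fvd.compute(videos_fake, videos_real))     # higher = further from the real videos
```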
algorithms/common/metrics/lpips.py
ADDED
@@ -0,0 +1 @@

```python
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
```
algorithms/common/models/__init__.py
ADDED
File without changes
algorithms/common/models/cnn.py
ADDED
@@ -0,0 +1,141 @@

```python
import math
import torch.nn as nn
from torch.nn import functional as F


def is_square_of_two(num):
    if num <= 0:
        return False
    return num & (num - 1) == 0


class CnnEncoder(nn.Module):
    """
    Simple cnn encoder that encodes a 64x64 image to embeddings
    """
    def __init__(self, embedding_size, activation_function='relu'):
        super().__init__()
        self.act_fn = getattr(F, activation_function)
        self.embedding_size = embedding_size
        self.fc = nn.Linear(1024, self.embedding_size)
        self.conv1 = nn.Conv2d(3, 32, 4, stride=2)
        self.conv2 = nn.Conv2d(32, 64, 4, stride=2)
        self.conv3 = nn.Conv2d(64, 128, 4, stride=2)
        self.conv4 = nn.Conv2d(128, 256, 4, stride=2)
        self.modules = [self.conv1, self.conv2, self.conv3, self.conv4]

    def forward(self, observation):
        batch_size = observation.shape[0]
        hidden = self.act_fn(self.conv1(observation))
        hidden = self.act_fn(self.conv2(hidden))
        hidden = self.act_fn(self.conv3(hidden))
        hidden = self.act_fn(self.conv4(hidden))
        hidden = self.fc(hidden.view(batch_size, 1024))
        return hidden


class CnnDecoder(nn.Module):
    """
    Simple Cnn decoder that decodes an embedding to 64x64 images
    """
    def __init__(self, embedding_size, activation_function='relu'):
        super().__init__()
        self.act_fn = getattr(F, activation_function)
        self.embedding_size = embedding_size
        self.fc = nn.Linear(embedding_size, 128)
        self.conv1 = nn.ConvTranspose2d(128, 128, 5, stride=2)
        self.conv2 = nn.ConvTranspose2d(128, 64, 5, stride=2)
        self.conv3 = nn.ConvTranspose2d(64, 32, 6, stride=2)
        self.conv4 = nn.ConvTranspose2d(32, 3, 6, stride=2)
        self.modules = [self.conv1, self.conv2, self.conv3, self.conv4]

    def forward(self, embedding):
        batch_size = embedding.shape[0]
        hidden = self.fc(embedding)
        hidden = hidden.view(batch_size, 128, 1, 1)
        hidden = self.act_fn(self.conv1(hidden))
        hidden = self.act_fn(self.conv2(hidden))
        hidden = self.act_fn(self.conv3(hidden))
        observation = self.conv4(hidden)
        return observation


class FullyConvEncoder(nn.Module):
    """
    Simple fully convolutional encoder, with 2D input and 2D output
    """
    def __init__(self,
                 input_shape=(3, 64, 64),
                 embedding_shape=(8, 16, 16),
                 activation_function='relu',
                 init_channels=16,
                 ):
        super().__init__()

        assert len(input_shape) == 3, "input_shape must be a tuple of length 3"
        assert len(embedding_shape) == 3, "embedding_shape must be a tuple of length 3"
        assert input_shape[1] == input_shape[2] and is_square_of_two(input_shape[1]), "input_shape must be square"
        assert embedding_shape[1] == embedding_shape[2], "embedding_shape must be square"
        assert input_shape[1] % embedding_shape[1] == 0, "input_shape must be divisible by embedding_shape"
        assert is_square_of_two(init_channels), "init_channels must be a square of 2"

        depth = int(math.sqrt(input_shape[1] / embedding_shape[1])) + 1
        channels_per_layer = [init_channels * (2 ** i) for i in range(depth)]
        self.act_fn = getattr(F, activation_function)

        self.downs = nn.ModuleList([])
        self.downs.append(nn.Conv2d(input_shape[0], channels_per_layer[0], kernel_size=3, stride=1, padding=1))

        for i in range(1, depth):
            self.downs.append(nn.Conv2d(channels_per_layer[i-1], channels_per_layer[i],
                                        kernel_size=3, stride=2, padding=1))

        # Bottleneck layer
        self.downs.append(nn.Conv2d(channels_per_layer[-1], embedding_shape[0], kernel_size=1, stride=1, padding=0))

    def forward(self, observation):
        hidden = observation
        for layer in self.downs:
            hidden = self.act_fn(layer(hidden))
        return hidden


class FullyConvDecoder(nn.Module):
    """
    Simple fully convolutional decoder, with 2D input and 2D output
    """
    def __init__(self,
                 embedding_shape=(8, 16, 16),
                 output_shape=(3, 64, 64),
                 activation_function='relu',
                 init_channels=16,
                 ):
        super().__init__()

        assert len(embedding_shape) == 3, "embedding_shape must be a tuple of length 3"
        assert len(output_shape) == 3, "output_shape must be a tuple of length 3"
        assert output_shape[1] == output_shape[2] and is_square_of_two(output_shape[1]), "output_shape must be square"
        assert embedding_shape[1] == embedding_shape[2], "input_shape must be square"
        assert output_shape[1] % embedding_shape[1] == 0, "output_shape must be divisible by input_shape"
        assert is_square_of_two(init_channels), "init_channels must be a square of 2"

        depth = int(math.sqrt(output_shape[1] / embedding_shape[1])) + 1
        channels_per_layer = [init_channels * (2 ** i) for i in range(depth)]
        self.act_fn = getattr(F, activation_function)

        self.ups = nn.ModuleList([])
        self.ups.append(nn.ConvTranspose2d(embedding_shape[0], channels_per_layer[-1],
                                           kernel_size=1, stride=1, padding=0))

        for i in range(1, depth):
            self.ups.append(nn.ConvTranspose2d(channels_per_layer[-i], channels_per_layer[-i-1],
                                               kernel_size=3, stride=2, padding=1, output_padding=1))

        self.output_layer = nn.ConvTranspose2d(channels_per_layer[0], output_shape[0],
                                               kernel_size=3, stride=1, padding=1)

    def forward(self, embedding):
        hidden = embedding
        for layer in self.ups:
            hidden = self.act_fn(layer(hidden))

        return self.output_layer(hidden)
```
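A quick, illustrative shape check for the 64x64 encoder/decoder pair above (assuming the repo is on the Python path):

```python
# Illustrative round trip: 64x64 RGB images -> 256-d embedding -> 64x64 reconstruction.
import torch

from algorithms.common.models.cnn import CnnEncoder, CnnDecoder

encoder = CnnEncoder(embedding_size=256)
decoder = CnnDecoder(embedding_size=256)
images = torch.randn(4, 3, 64, 64)
embedding = encoder(images)          # -> (4, 256)
reconstruction = decoder(embedding)  # -> (4, 3, 64, 64)
print(embedding.shape, reconstruction.shape)
```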
algorithms/common/models/mlp.py
ADDED
@@ -0,0 +1,22 @@

```python
from typing import Type, Optional

import torch
from torch import nn as nn


class SimpleMlp(nn.Module):
    """
    A class for very simple multi layer perceptron
    """
    def __init__(self, in_dim=2, out_dim=1, hidden_dim=64, n_layers=2,
                 activation: Type[nn.Module] = nn.ReLU, output_activation: Optional[Type[nn.Module]] = None):
        super(SimpleMlp, self).__init__()
        layers = [nn.Linear(in_dim, hidden_dim), activation()]
        layers.extend([nn.Linear(hidden_dim, hidden_dim), activation()] * (n_layers - 2))
        layers.append(nn.Linear(hidden_dim, out_dim))
        if output_activation:
            layers.append(output_activation())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```
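An illustrative usage example of the class above:

```python
# A small ReLU MLP mapping 2-D inputs to 1-D outputs.
import torch

from algorithms.common.models.mlp import SimpleMlp

mlp = SimpleMlp(in_dim=2, out_dim=1, hidden_dim=64, n_layers=2)
x = torch.randn(5, 2)
print(mlp(x).shape)  # torch.Size([5, 1])
```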
algorithms/worldmem/__init__.py
ADDED
@@ -0,0 +1,2 @@

```python
from .df_video import WorldMemMinecraft
from .pose_prediction import PosePrediction
```
algorithms/worldmem/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (263 Bytes).
algorithms/worldmem/__pycache__/df_base.cpython-310.pyc
ADDED
Binary file (9.66 kB).
algorithms/worldmem/__pycache__/df_video.cpython-310.pyc
ADDED
Binary file (23.5 kB).
algorithms/worldmem/__pycache__/pose_prediction.cpython-310.pyc
ADDED
Binary file (9.49 kB).
algorithms/worldmem/df_base.py
ADDED
@@ -0,0 +1,307 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
"""
|
2 |
+
This repo is forked from [Boyuan Chen](https://boyuan.space/)'s research
|
3 |
+
template [repo](https://github.com/buoyancy99/research-template).
|
4 |
+
By its MIT license, you must keep the above sentence in `README.md`
|
5 |
+
and the `LICENSE` file to credit the author.
|
6 |
+
"""
|
7 |
+
|
8 |
+
from typing import Optional
|
9 |
+
from tqdm import tqdm
|
10 |
+
from omegaconf import DictConfig
|
11 |
+
import numpy as np
|
12 |
+
import torch
|
13 |
+
import torch.nn.functional as F
|
14 |
+
from typing import Any
|
15 |
+
from einops import rearrange
|
16 |
+
|
17 |
+
from lightning.pytorch.utilities.types import STEP_OUTPUT
|
18 |
+
|
19 |
+
from algorithms.common.base_pytorch_algo import BasePytorchAlgo
|
20 |
+
from .models.diffusion import Diffusion
|
21 |
+
|
22 |
+
|
23 |
+
class DiffusionForcingBase(BasePytorchAlgo):
|
24 |
+
def __init__(self, cfg: DictConfig):
|
25 |
+
self.cfg = cfg
|
26 |
+
self.x_shape = cfg.x_shape
|
27 |
+
self.frame_stack = cfg.frame_stack
|
28 |
+
self.x_stacked_shape = list(self.x_shape)
|
29 |
+
self.x_stacked_shape[0] *= cfg.frame_stack
|
30 |
+
self.guidance_scale = cfg.guidance_scale
|
31 |
+
self.context_frames = cfg.context_frames
|
32 |
+
self.chunk_size = cfg.chunk_size
|
33 |
+
self.action_cond_dim = cfg.action_cond_dim
|
34 |
+
self.causal = cfg.causal
|
35 |
+
|
36 |
+
self.uncertainty_scale = cfg.uncertainty_scale
|
37 |
+
self.timesteps = cfg.diffusion.timesteps
|
38 |
+
self.sampling_timesteps = cfg.diffusion.sampling_timesteps
|
39 |
+
self.clip_noise = cfg.diffusion.clip_noise
|
40 |
+
|
41 |
+
self.cfg.diffusion.cum_snr_decay = self.cfg.diffusion.cum_snr_decay ** (self.frame_stack * cfg.frame_skip)
|
42 |
+
|
43 |
+
self.condition_similar_length = getattr(cfg, "condition_similar_length", 0)  # number of retrieved reference frames; defaults to 0 when not configured
self.validation_step_outputs = []
|
44 |
+
super().__init__(cfg)
|
45 |
+
|
46 |
+
def _build_model(self):
|
47 |
+
self.diffusion_model = Diffusion(
|
48 |
+
x_shape=self.x_stacked_shape,
|
49 |
+
action_cond_dim=self.action_cond_dim,
|
50 |
+
is_causal=self.causal,
|
51 |
+
cfg=self.cfg.diffusion,
|
52 |
+
)
|
53 |
+
self.register_data_mean_std(self.cfg.data_mean, self.cfg.data_std)
|
54 |
+
|
55 |
+
def configure_optimizers(self):
|
56 |
+
params = tuple(self.diffusion_model.parameters())
|
57 |
+
optimizer_dynamics = torch.optim.AdamW(
|
58 |
+
params, lr=self.cfg.lr, weight_decay=self.cfg.weight_decay, betas=self.cfg.optimizer_beta
|
59 |
+
)
|
60 |
+
return optimizer_dynamics
|
61 |
+
|
62 |
+
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure):
|
63 |
+
# update params
|
64 |
+
optimizer.step(closure=optimizer_closure)
|
65 |
+
|
66 |
+
# manually warm up lr without a scheduler
|
67 |
+
if self.trainer.global_step < self.cfg.warmup_steps:
|
68 |
+
lr_scale = min(1.0, float(self.trainer.global_step + 1) / self.cfg.warmup_steps)
|
69 |
+
for pg in optimizer.param_groups:
|
70 |
+
pg["lr"] = lr_scale * self.cfg.lr
|
71 |
+
|
72 |
+
def training_step(self, batch, batch_idx) -> STEP_OUTPUT:
|
73 |
+
xs, conditions, masks = self._preprocess_batch(batch)
|
74 |
+
|
75 |
+
# Randomly choose a clip length and re-append two earlier frames at the end of the sequence.
rand_length = torch.randint(3, xs.shape[0] - 2, (1,))[0].item()
|
76 |
+
xs = torch.cat([xs[:rand_length], xs[rand_length-3:rand_length-1]])
|
77 |
+
conditions = torch.cat([conditions[:rand_length], conditions[rand_length-3:rand_length-1]])
|
78 |
+
masks = torch.cat([masks[:rand_length], masks[rand_length-3:rand_length-1]])
|
79 |
+
noise_levels=self._generate_noise_levels(xs)
|
80 |
+
noise_levels[:rand_length] = 15 # stable_noise_levels
|
81 |
+
noise_levels[rand_length+1:] = 15 # stable_noise_levels
|
82 |
+
|
83 |
+
xs_pred, loss = self.diffusion_model(xs, conditions, noise_levels=noise_levels)
|
84 |
+
loss = self.reweight_loss(loss, masks)
|
85 |
+
|
86 |
+
# log the loss
|
87 |
+
if batch_idx % 20 == 0:
|
88 |
+
self.log("training/loss", loss)
|
89 |
+
|
90 |
+
xs = self._unstack_and_unnormalize(xs)
|
91 |
+
xs_pred = self._unstack_and_unnormalize(xs_pred)
|
92 |
+
|
93 |
+
output_dict = {
|
94 |
+
"loss": loss,
|
95 |
+
"xs_pred": xs_pred,
|
96 |
+
"xs": xs,
|
97 |
+
}
|
98 |
+
|
99 |
+
return output_dict
|
100 |
+
|
101 |
+
@torch.no_grad()
|
102 |
+
def validation_step(self, batch, batch_idx, namespace="validation") -> STEP_OUTPUT:
|
103 |
+
xs, conditions, masks = self._preprocess_batch(batch)
|
104 |
+
n_frames, batch_size, *_ = xs.shape
|
105 |
+
xs_pred = []
|
106 |
+
curr_frame = 0
|
107 |
+
|
108 |
+
# context
|
109 |
+
n_context_frames = self.context_frames // self.frame_stack
|
110 |
+
xs_pred = xs[:n_context_frames].clone()
|
111 |
+
curr_frame += n_context_frames
|
112 |
+
|
113 |
+
if self.condition_similar_length:
|
114 |
+
n_frames -= self.condition_similar_length
|
115 |
+
|
116 |
+
pbar = tqdm(total=n_frames, initial=curr_frame, desc="Sampling")
|
117 |
+
while curr_frame < n_frames:
|
118 |
+
if self.chunk_size > 0:
|
119 |
+
horizon = min(n_frames - curr_frame, self.chunk_size)
|
120 |
+
else:
|
121 |
+
horizon = n_frames - curr_frame
|
122 |
+
assert horizon <= self.n_tokens, "horizon exceeds the number of tokens."
|
123 |
+
scheduling_matrix = self._generate_scheduling_matrix(horizon)
|
124 |
+
|
125 |
+
chunk = torch.randn((horizon, batch_size, *self.x_stacked_shape), device=self.device)
|
126 |
+
chunk = torch.clamp(chunk, -self.clip_noise, self.clip_noise)
|
127 |
+
xs_pred = torch.cat([xs_pred, chunk], 0)
|
128 |
+
|
129 |
+
# sliding window: only input the last n_tokens frames
|
130 |
+
start_frame = max(0, curr_frame + horizon - self.n_tokens)
|
131 |
+
|
132 |
+
pbar.set_postfix(
|
133 |
+
{
|
134 |
+
"start": start_frame,
|
135 |
+
"end": curr_frame + horizon,
|
136 |
+
}
|
137 |
+
)
|
138 |
+
|
139 |
+
if self.condition_similar_length:
|
140 |
+
xs_pred = torch.cat([xs_pred, xs[curr_frame-self.condition_similar_length:curr_frame].clone()], 0)
|
141 |
+
|
142 |
+
for m in range(scheduling_matrix.shape[0] - 1):
|
143 |
+
|
144 |
+
from_noise_levels = np.concatenate((np.zeros((curr_frame,), dtype=np.int64), scheduling_matrix[m]))[
|
145 |
+
:, None
|
146 |
+
].repeat(batch_size, axis=1)
|
147 |
+
to_noise_levels = np.concatenate(
|
148 |
+
(
|
149 |
+
np.zeros((curr_frame,), dtype=np.int64),
|
150 |
+
scheduling_matrix[m + 1],
|
151 |
+
)
|
152 |
+
)[
|
153 |
+
:, None
|
154 |
+
].repeat(batch_size, axis=1)
|
155 |
+
|
156 |
+
if self.condition_similar_length:
|
157 |
+
from_noise_levels = np.concatenate([from_noise_levels, np.array([[0,0,0,0]*self.condition_similar_length])], axis=0)
|
158 |
+
to_noise_levels = np.concatenate([to_noise_levels, np.array([[0,0,0,0]*self.condition_similar_length])], axis=0)
|
159 |
+
|
160 |
+
from_noise_levels = torch.from_numpy(from_noise_levels).to(self.device)
|
161 |
+
to_noise_levels = torch.from_numpy(to_noise_levels).to(self.device)
|
162 |
+
|
163 |
+
# update xs_pred by DDIM or DDPM sampling
|
164 |
+
# input frames within the sliding window
|
165 |
+
|
166 |
+
input_condition = conditions[start_frame : curr_frame + horizon].clone()
|
170 |
+
if self.condition_similar_length:
|
171 |
+
input_condition = torch.cat([conditions[start_frame : curr_frame + horizon], conditions[-self.condition_similar_length:]], dim=0)
|
172 |
+
xs_pred[start_frame:] = self.diffusion_model.sample_step(
|
173 |
+
xs_pred[start_frame:],
|
174 |
+
input_condition,
|
175 |
+
from_noise_levels[start_frame:],
|
176 |
+
to_noise_levels[start_frame:],
|
177 |
+
)
|
178 |
+
|
179 |
+
if self.condition_similar_length:
|
180 |
+
xs_pred = xs_pred[:-self.condition_similar_length]
|
181 |
+
|
182 |
+
curr_frame += horizon
|
183 |
+
pbar.update(horizon)
|
184 |
+
|
185 |
+
if self.condition_similar_length:
|
186 |
+
xs = xs[:-self.condition_similar_length]
|
187 |
+
# FIXME: loss
|
188 |
+
loss = F.mse_loss(xs_pred, xs, reduction="none")
|
189 |
+
loss = self.reweight_loss(loss, masks)
|
190 |
+
self.validation_step_outputs.append((xs_pred.detach().cpu(), xs.detach().cpu()))
|
191 |
+
|
192 |
+
return loss
|
193 |
+
|
194 |
+
def test_step(self, *args: Any, **kwargs: Any) -> STEP_OUTPUT:
|
195 |
+
return self.validation_step(*args, **kwargs, namespace="test")
|
196 |
+
|
197 |
+
def test_epoch_end(self) -> None:
|
198 |
+
self.on_validation_epoch_end(namespace="test")
|
199 |
+
|
200 |
+
def _generate_noise_levels(self, xs: torch.Tensor, masks: Optional[torch.Tensor] = None) -> torch.Tensor:
|
201 |
+
"""
|
202 |
+
Generate noise levels for training.
|
203 |
+
"""
|
204 |
+
num_frames, batch_size, *_ = xs.shape
|
205 |
+
match self.cfg.noise_level:
|
206 |
+
case "random_all": # entirely random noise levels
|
207 |
+
noise_levels = torch.randint(0, self.timesteps, (num_frames, batch_size), device=xs.device)
|
208 |
+
case "same":
|
209 |
+
noise_levels = torch.randint(0, self.timesteps, (num_frames, batch_size), device=xs.device)
|
210 |
+
noise_levels[1:] = noise_levels[0]
|
211 |
+
|
212 |
+
if masks is not None:
|
213 |
+
# for frames that are not available, treat as full noise
|
214 |
+
discard = torch.all(~rearrange(masks.bool(), "(t fs) b -> t b fs", fs=self.frame_stack), -1)
|
215 |
+
noise_levels = torch.where(discard, torch.full_like(noise_levels, self.timesteps - 1), noise_levels)
|
216 |
+
|
217 |
+
return noise_levels
|
218 |
+
|
219 |
+
def _generate_scheduling_matrix(self, horizon: int):
|
220 |
+
match self.cfg.scheduling_matrix:
|
221 |
+
case "pyramid":
|
222 |
+
return self._generate_pyramid_scheduling_matrix(horizon, self.uncertainty_scale)
|
223 |
+
case "full_sequence":
|
224 |
+
return np.arange(self.sampling_timesteps, -1, -1)[:, None].repeat(horizon, axis=1)
|
225 |
+
case "autoregressive":
|
226 |
+
return self._generate_pyramid_scheduling_matrix(horizon, self.sampling_timesteps)
|
227 |
+
case "trapezoid":
|
228 |
+
return self._generate_trapezoid_scheduling_matrix(horizon, self.uncertainty_scale)
|
229 |
+
|
230 |
+
def _generate_pyramid_scheduling_matrix(self, horizon: int, uncertainty_scale: float):
|
231 |
+
height = self.sampling_timesteps + int((horizon - 1) * uncertainty_scale) + 1
|
232 |
+
scheduling_matrix = np.zeros((height, horizon), dtype=np.int64)
|
233 |
+
for m in range(height):
|
234 |
+
for t in range(horizon):
|
235 |
+
scheduling_matrix[m, t] = self.sampling_timesteps + int(t * uncertainty_scale) - m
|
236 |
+
|
237 |
+
return np.clip(scheduling_matrix, 0, self.sampling_timesteps)
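# Standalone sketch of the pyramid schedule above with small assumed values
# (sampling_timesteps=4, horizon=3, uncertainty_scale=1): each row is one sampling
# step, and each later frame lags the previous one by `uncertainty_scale` noise levels.
def _sketch_pyramid_scheduling_matrix(horizon=3, sampling_timesteps=4, uncertainty_scale=1.0):
    height = sampling_timesteps + int((horizon - 1) * uncertainty_scale) + 1
    m_idx = np.arange(height)[:, None]
    t_idx = np.arange(horizon)[None, :]
    matrix = sampling_timesteps + (t_idx * uncertainty_scale).astype(np.int64) - m_idx
    return np.clip(matrix, 0, sampling_timesteps)
    # -> [[4 4 4], [3 4 4], [2 3 4], [1 2 3], [0 1 2], [0 0 1], [0 0 0]]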
|
238 |
+
|
239 |
+
def _generate_trapezoid_scheduling_matrix(self, horizon: int, uncertainty_scale: float):
|
240 |
+
height = self.sampling_timesteps + int((horizon + 1) // 2 * uncertainty_scale)
|
241 |
+
scheduling_matrix = np.zeros((height, horizon), dtype=np.int64)
|
242 |
+
for m in range(height):
|
243 |
+
for t in range((horizon + 1) // 2):
|
244 |
+
scheduling_matrix[m, t] = self.sampling_timesteps + int(t * uncertainty_scale) - m
|
245 |
+
scheduling_matrix[m, -t - 1] = self.sampling_timesteps + int(t * uncertainty_scale) - m  # mirror onto the right half; with -t the last column would never be filled
|
246 |
+
|
247 |
+
return np.clip(scheduling_matrix, 0, self.sampling_timesteps)
|
248 |
+
|
249 |
+
def reweight_loss(self, loss, weight=None):
|
250 |
+
# Note there is another part of loss reweighting (fused_snr) inside the Diffusion class!
|
251 |
+
loss = rearrange(loss, "t b (fs c) ... -> t b fs c ...", fs=self.frame_stack)
|
252 |
+
if weight is not None:
|
253 |
+
expand_dim = len(loss.shape) - len(weight.shape) - 1
|
254 |
+
weight = rearrange(
|
255 |
+
weight,
|
256 |
+
"(t fs) b ... -> t b fs ..." + " 1" * expand_dim,
|
257 |
+
fs=self.frame_stack,
|
258 |
+
)
|
259 |
+
loss = loss * weight
|
260 |
+
|
261 |
+
return loss.mean()
|
262 |
+
|
263 |
+
def _preprocess_batch(self, batch):
|
264 |
+
xs = batch[0]
|
265 |
+
batch_size, n_frames = xs.shape[:2]
|
266 |
+
|
267 |
+
if n_frames % self.frame_stack != 0:
|
268 |
+
raise ValueError("Number of frames must be divisible by frame stack size")
|
269 |
+
if self.context_frames % self.frame_stack != 0:
|
270 |
+
raise ValueError("Number of context frames must be divisible by frame stack size")
|
271 |
+
|
272 |
+
masks = torch.ones(n_frames, batch_size).to(xs.device)
|
273 |
+
n_frames = n_frames // self.frame_stack
|
274 |
+
|
275 |
+
if self.action_cond_dim:
|
276 |
+
conditions = batch[1]
|
277 |
+
conditions = torch.cat([torch.zeros_like(conditions[:, :1]), conditions[:, 1:]], 1)
|
278 |
+
conditions = rearrange(conditions, "b (t fs) d -> t b (fs d)", fs=self.frame_stack).contiguous()
|
279 |
+
|
280 |
+
# f, _, _ = conditions.shape
|
281 |
+
# predefined_1 = torch.tensor([0,0,0,1]).to(conditions.device)
|
282 |
+
# predefined_2 = torch.tensor([0,0,1,0]).to(conditions.device)
|
283 |
+
# conditions[:f//2] = predefined_1
|
284 |
+
# conditions[f//2:] = predefined_2
|
285 |
+
else:
|
286 |
+
conditions = [None for _ in range(n_frames)]
|
287 |
+
|
288 |
+
xs = self._normalize_x(xs)
|
289 |
+
xs = rearrange(xs, "b (t fs) c ... -> t b (fs c) ...", fs=self.frame_stack).contiguous()
|
290 |
+
|
291 |
+
return xs, conditions, masks
|
292 |
+
|
293 |
+
def _normalize_x(self, xs):
|
294 |
+
shape = [1] * (xs.ndim - self.data_mean.ndim) + list(self.data_mean.shape)
|
295 |
+
mean = self.data_mean.reshape(shape)
|
296 |
+
std = self.data_std.reshape(shape)
|
297 |
+
return (xs - mean) / std
|
298 |
+
|
299 |
+
def _unnormalize_x(self, xs):
|
300 |
+
shape = [1] * (xs.ndim - self.data_mean.ndim) + list(self.data_mean.shape)
|
301 |
+
mean = self.data_mean.reshape(shape)
|
302 |
+
std = self.data_std.reshape(shape)
|
303 |
+
return xs * std + mean
|
304 |
+
|
305 |
+
def _unstack_and_unnormalize(self, xs):
|
306 |
+
xs = rearrange(xs, "t b (fs c) ... -> (t fs) b c ...", fs=self.frame_stack)
|
307 |
+
return self._unnormalize_x(xs)
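# Standalone sketch of the frame-stacking layout used by _preprocess_batch and
# _unstack_and_unnormalize (assumed shapes; frame_stack=2):
def _sketch_frame_stacking(frame_stack=2):
    xs = torch.randn(2, 8, 3, 16, 16)  # (b, t, c, h, w), t divisible by frame_stack
    stacked = rearrange(xs, "b (t fs) c ... -> t b (fs c) ...", fs=frame_stack)
    # stacked.shape == (4, 2, 6, 16, 16): tokens of frame_stack frames, channel-concatenated
    unstacked = rearrange(stacked, "t b (fs c) ... -> b (t fs) c ...", fs=frame_stack)
    return torch.equal(unstacked, xs)  # True: the rearrange is lossless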
|
algorithms/worldmem/df_video.py
ADDED
@@ -0,0 +1,908 @@
1 |
+
import random
import wandb  # used by wandb.plot.line_series in on_validation_epoch_end
|
2 |
+
import math
|
3 |
+
import numpy as np
|
4 |
+
import torch
|
5 |
+
import torch.nn.functional as F
|
6 |
+
import torchvision.transforms.functional as TF
|
7 |
+
from torchvision.transforms import InterpolationMode
|
8 |
+
from PIL import Image
|
9 |
+
from packaging import version as pver
|
10 |
+
from einops import rearrange
|
11 |
+
from tqdm import tqdm
|
12 |
+
from omegaconf import DictConfig
|
13 |
+
from lightning.pytorch.utilities.types import STEP_OUTPUT
|
14 |
+
from algorithms.common.metrics import (
|
15 |
+
LearnedPerceptualImagePatchSimilarity,
|
16 |
+
)
|
17 |
+
from utils.logging_utils import log_video, get_validation_metrics_for_videos
|
18 |
+
from .df_base import DiffusionForcingBase
|
19 |
+
from .models.vae import VAE_models
|
20 |
+
from .models.diffusion import Diffusion
|
21 |
+
from .models.pose_prediction import PosePredictionNet
|
22 |
+
|
23 |
+
|
24 |
+
# Utility Functions
|
25 |
+
def euler_to_rotation_matrix(pitch, yaw):
|
26 |
+
"""
|
27 |
+
Convert pitch and yaw angles (in radians) to a 3x3 rotation matrix.
|
28 |
+
Supports batch input.
|
29 |
+
|
30 |
+
Args:
|
31 |
+
pitch (torch.Tensor): Pitch angles in radians.
|
32 |
+
yaw (torch.Tensor): Yaw angles in radians.
|
33 |
+
|
34 |
+
Returns:
|
35 |
+
torch.Tensor: Rotation matrix of shape (batch_size, 3, 3).
|
36 |
+
"""
|
37 |
+
cos_pitch, sin_pitch = torch.cos(pitch), torch.sin(pitch)
|
38 |
+
cos_yaw, sin_yaw = torch.cos(yaw), torch.sin(yaw)
|
39 |
+
|
40 |
+
R_pitch = torch.stack([
|
41 |
+
torch.ones_like(pitch), torch.zeros_like(pitch), torch.zeros_like(pitch),
|
42 |
+
torch.zeros_like(pitch), cos_pitch, -sin_pitch,
|
43 |
+
torch.zeros_like(pitch), sin_pitch, cos_pitch
|
44 |
+
], dim=-1).reshape(-1, 3, 3)
|
45 |
+
|
46 |
+
R_yaw = torch.stack([
|
47 |
+
cos_yaw, torch.zeros_like(yaw), sin_yaw,
|
48 |
+
torch.zeros_like(yaw), torch.ones_like(yaw), torch.zeros_like(yaw),
|
49 |
+
-sin_yaw, torch.zeros_like(yaw), cos_yaw
|
50 |
+
], dim=-1).reshape(-1, 3, 3)
|
51 |
+
|
52 |
+
return torch.matmul(R_yaw, R_pitch)
|
53 |
+
|
54 |
+
|
55 |
+
def euler_to_camera_to_world_matrix(pose):
|
56 |
+
"""
|
57 |
+
Convert (x, y, z, pitch, yaw) to a 4x4 camera-to-world transformation matrix using torch.
|
58 |
+
Supports both (5,) and (f, b, 5) shaped inputs.
|
59 |
+
|
60 |
+
Args:
|
61 |
+
pose (torch.Tensor): Pose tensor of shape (5,) or (f, b, 5).
|
62 |
+
|
63 |
+
Returns:
|
64 |
+
torch.Tensor: Camera-to-world transformation matrix of shape (4, 4).
|
65 |
+
"""
|
66 |
+
|
67 |
+
origin_dim = pose.ndim
|
68 |
+
if origin_dim == 1:
|
69 |
+
pose = pose.unsqueeze(0).unsqueeze(0) # Convert (5,) -> (1, 1, 5)
|
70 |
+
elif origin_dim == 2:
|
71 |
+
pose = pose.unsqueeze(0)
|
72 |
+
|
73 |
+
x, y, z, pitch, yaw = pose[..., 0], pose[..., 1], pose[..., 2], pose[..., 3], pose[..., 4]
|
74 |
+
pitch, yaw = torch.deg2rad(pitch), torch.deg2rad(yaw)
|
75 |
+
|
76 |
+
# Compute rotation matrix (batch mode)
|
77 |
+
R = euler_to_rotation_matrix(pitch, yaw) # Shape (f*b, 3, 3)
|
78 |
+
|
79 |
+
# Create the 4x4 transformation matrix
|
80 |
+
eye = torch.eye(4, dtype=torch.float32, device=pose.device)
|
81 |
+
camera_to_world = eye.repeat(R.shape[0], 1, 1) # Shape (f*b, 4, 4)
|
82 |
+
|
83 |
+
# Assign rotation
|
84 |
+
camera_to_world[:, :3, :3] = R
|
85 |
+
|
86 |
+
# Assign translation
|
87 |
+
camera_to_world[:, :3, 3] = torch.stack([x.reshape(-1), y.reshape(-1), z.reshape(-1)], dim=-1)
|
88 |
+
|
89 |
+
# Reshape back to (f, b, 4, 4) if needed
|
90 |
+
if origin_dim == 3:
|
91 |
+
return camera_to_world.view(pose.shape[0], pose.shape[1], 4, 4)
|
92 |
+
elif origin_dim == 2:
|
93 |
+
return camera_to_world.view(pose.shape[0], 4, 4)
|
94 |
+
else:
|
95 |
+
return camera_to_world.squeeze(0).squeeze(0) # Convert (1,1,4,4) -> (4,4)
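# Quick sanity check of the pose convention above (illustrative values): zero pitch/yaw
# gives an identity rotation block with the translation in the last column.
def _sketch_euler_to_c2w():
    pose = torch.tensor([1.0, 2.0, 3.0, 0.0, 0.0])  # (x, y, z, pitch_deg, yaw_deg)
    c2w = euler_to_camera_to_world_matrix(pose)
    # c2w == [[1, 0, 0, 1],
    #         [0, 1, 0, 2],
    #         [0, 0, 1, 3],
    #         [0, 0, 0, 1]]
    return c2w.shape  # torch.Size([4, 4])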
|
96 |
+
|
97 |
+
def is_inside_fov_3d_hv(points, center, center_pitch, center_yaw, fov_half_h, fov_half_v):
|
98 |
+
"""
|
99 |
+
Check whether points are within a given 3D field of view (FOV)
|
100 |
+
with separately defined horizontal and vertical ranges.
|
101 |
+
|
102 |
+
The center view direction is specified by pitch and yaw (in degrees).
|
103 |
+
|
104 |
+
:param points: (N, B, 3) Sample point coordinates
|
105 |
+
:param center: (3,) Center coordinates of the FOV
|
106 |
+
:param center_pitch: Pitch angle of the center view (in degrees)
|
107 |
+
:param center_yaw: Yaw angle of the center view (in degrees)
|
108 |
+
:param fov_half_h: Horizontal half-FOV angle (in degrees)
|
109 |
+
:param fov_half_v: Vertical half-FOV angle (in degrees)
|
110 |
+
:return: Boolean tensor (N, B), indicating whether each point is inside the FOV
|
111 |
+
"""
|
112 |
+
# Compute vectors relative to the center
|
113 |
+
vectors = points - center # shape (N, B, 3)
|
114 |
+
x = vectors[..., 0]
|
115 |
+
y = vectors[..., 1]
|
116 |
+
z = vectors[..., 2]
|
117 |
+
|
118 |
+
# Compute horizontal angle (yaw): measured with respect to the z-axis as the forward direction,
|
119 |
+
# and the x-axis as left-right, resulting in a range of -180 to 180 degrees.
|
120 |
+
azimuth = torch.atan2(x, z) * (180 / math.pi)
|
121 |
+
|
122 |
+
# Compute vertical angle (pitch): measured with respect to the horizontal plane,
|
123 |
+
# resulting in a range of -90 to 90 degrees.
|
124 |
+
elevation = torch.atan2(y, torch.sqrt(x**2 + z**2)) * (180 / math.pi)
|
125 |
+
|
126 |
+
# Compute the angular difference from the center view (handling circular angle wrap-around)
|
127 |
+
diff_azimuth = (azimuth - center_yaw).abs() % 360
|
128 |
+
diff_elevation = (elevation - center_pitch).abs() % 360
|
129 |
+
|
130 |
+
# Adjust values greater than 180 degrees to the shorter angular difference
|
131 |
+
diff_azimuth = torch.where(diff_azimuth > 180, 360 - diff_azimuth, diff_azimuth)
|
132 |
+
diff_elevation = torch.where(diff_elevation > 180, 360 - diff_elevation, diff_elevation)
|
133 |
+
|
134 |
+
# Check if both horizontal and vertical angles are within their respective FOV limits
|
135 |
+
return (diff_azimuth < fov_half_h) & (diff_elevation < fov_half_v)
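# Illustrative check of the FOV test above: with the camera at the origin looking down +z
# (pitch=0, yaw=0) and the 105°x75° FOV used later in this file, a point straight ahead
# is inside while a point directly behind is not.
def _sketch_fov_check():
    points = torch.tensor([[[0.0, 0.0, 5.0]],    # straight ahead
                           [[0.0, 0.0, -5.0]]])  # directly behind
    center = torch.zeros(1, 3)
    zero = torch.zeros(1)
    inside = is_inside_fov_3d_hv(points, center, zero, zero,
                                 torch.tensor(105 / 2), torch.tensor(75 / 2))
    return inside  # tensor([[ True], [False]])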
|
136 |
+
|
137 |
+
def generate_points_in_sphere(n_points, radius):
|
138 |
+
# Sample three independent uniform distributions
|
139 |
+
samples_r = torch.rand(n_points) # For radius distribution
|
140 |
+
samples_phi = torch.rand(n_points) # For azimuthal angle phi
|
141 |
+
samples_u = torch.rand(n_points) # For polar angle theta
|
142 |
+
|
143 |
+
# Apply cube root to ensure uniform volumetric distribution
|
144 |
+
r = radius * torch.pow(samples_r, 1/3)
|
145 |
+
# Azimuthal angle phi uniformly distributed in [0, 2π]
|
146 |
+
phi = 2 * math.pi * samples_phi
|
147 |
+
# Convert u to theta to ensure cos(theta) is uniformly distributed
|
148 |
+
theta = torch.acos(1 - 2 * samples_u)
|
149 |
+
|
150 |
+
# Convert spherical coordinates to Cartesian coordinates
|
151 |
+
x = r * torch.sin(theta) * torch.cos(phi)
|
152 |
+
y = r * torch.sin(theta) * torch.sin(phi)
|
153 |
+
z = r * torch.cos(theta)
|
154 |
+
|
155 |
+
points = torch.stack((x, y, z), dim=1)
|
156 |
+
return points
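# Empirical sanity check (illustrative): the cube-root transform above makes the density
# uniform in volume, so about (0.5)**3 = 12.5% of samples fall within half the radius.
def _sketch_sphere_sampling():
    pts = generate_points_in_sphere(100_000, radius=30.0)
    r = pts.norm(dim=1)
    return (r < 15.0).float().mean()  # ~0.125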
|
157 |
+
|
158 |
+
def tensor_max_with_number(tensor, number):
|
159 |
+
number_tensor = torch.tensor(number, dtype=tensor.dtype, device=tensor.device)
|
160 |
+
result = torch.max(tensor, number_tensor)
|
161 |
+
return result
|
162 |
+
|
163 |
+
def custom_meshgrid(*args):
|
164 |
+
# ref: https://pytorch.org/docs/stable/generated/torch.meshgrid.html?highlight=meshgrid#torch.meshgrid
|
165 |
+
if pver.parse(torch.__version__) < pver.parse('1.10'):
|
166 |
+
return torch.meshgrid(*args)
|
167 |
+
else:
|
168 |
+
return torch.meshgrid(*args, indexing='ij')
|
169 |
+
|
170 |
+
def camera_to_world_to_world_to_camera(camera_to_world: torch.Tensor) -> torch.Tensor:
|
171 |
+
"""
|
172 |
+
Convert Camera-to-World matrices to World-to-Camera matrices for a tensor with shape (f, b, 4, 4).
|
173 |
+
|
174 |
+
Args:
|
175 |
+
camera_to_world (torch.Tensor): A tensor of shape (f, b, 4, 4), where:
|
176 |
+
f = number of frames,
|
177 |
+
b = batch size.
|
178 |
+
|
179 |
+
Returns:
|
180 |
+
torch.Tensor: A tensor of shape (f, b, 4, 4) representing the World-to-Camera matrices.
|
181 |
+
"""
|
182 |
+
# Ensure input is a 4D tensor
|
183 |
+
assert camera_to_world.ndim == 4 and camera_to_world.shape[2:] == (4, 4), \
|
184 |
+
"Input must be of shape (f, b, 4, 4)"
|
185 |
+
|
186 |
+
# Extract the rotation (R) and translation (T) parts
|
187 |
+
R = camera_to_world[:, :, :3, :3] # Shape: (f, b, 3, 3)
|
188 |
+
T = camera_to_world[:, :, :3, 3] # Shape: (f, b, 3)
|
189 |
+
|
190 |
+
# Initialize an identity matrix for the output
|
191 |
+
world_to_camera = torch.eye(4, device=camera_to_world.device).unsqueeze(0).unsqueeze(0)
|
192 |
+
world_to_camera = world_to_camera.repeat(camera_to_world.size(0), camera_to_world.size(1), 1, 1) # Shape: (f, b, 4, 4)
|
193 |
+
|
194 |
+
# Compute the rotation (transpose of R)
|
195 |
+
world_to_camera[:, :, :3, :3] = R.transpose(2, 3)
|
196 |
+
|
197 |
+
# Compute the translation (-R^T * T)
|
198 |
+
world_to_camera[:, :, :3, 3] = -torch.matmul(R.transpose(2, 3), T.unsqueeze(-1)).squeeze(-1)
|
199 |
+
|
200 |
+
return world_to_camera.to(camera_to_world.dtype)
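# Standalone consistency sketch (assumed shapes f=2, b=3): for rigid transforms the
# conversion above is the matrix inverse, so w2c @ c2w should be the identity.
def _sketch_c2w_w2c_inverse(f=2, b=3):
    poses = torch.rand(f, b, 5) * torch.tensor([10.0, 10.0, 10.0, 60.0, 360.0])
    c2w = euler_to_camera_to_world_matrix(poses)            # (f, b, 4, 4)
    w2c = camera_to_world_to_world_to_camera(c2w)           # (f, b, 4, 4)
    identity = torch.eye(4).expand(f, b, 4, 4)
    return torch.allclose(w2c @ c2w, identity, atol=1e-4)   # True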
|
201 |
+
|
202 |
+
def convert_to_plucker(poses, curr_frame, focal_length, image_width, image_height):
|
203 |
+
|
204 |
+
intrinsic = np.asarray([focal_length * image_width,
|
205 |
+
focal_length * image_height,
|
206 |
+
0.5 * image_width,
|
207 |
+
0.5 * image_height], dtype=np.float32)
|
208 |
+
|
209 |
+
c2ws = get_relative_pose(poses, zero_first_frame_scale=curr_frame)
|
210 |
+
c2ws = rearrange(c2ws, "t b m n -> b t m n")
|
211 |
+
|
212 |
+
K = torch.as_tensor(intrinsic, device=poses.device, dtype=poses.dtype).repeat(c2ws.shape[0],c2ws.shape[1],1) # [B, F, 4]
|
213 |
+
plucker_embedding = ray_condition(K, c2ws, image_height, image_width, device=c2ws.device)
|
214 |
+
plucker_embedding = rearrange(plucker_embedding, "b t h w d -> t b h w d").contiguous()
|
215 |
+
|
216 |
+
return plucker_embedding
|
217 |
+
|
218 |
+
|
219 |
+
def get_relative_pose(abs_c2ws, zero_first_frame_scale):
|
220 |
+
abs_w2cs = camera_to_world_to_world_to_camera(abs_c2ws)
|
221 |
+
target_cam_c2w = torch.tensor([
|
222 |
+
[1, 0, 0, 0],
|
223 |
+
[0, 1, 0, 0],
|
224 |
+
[0, 0, 1, 0],
|
225 |
+
[0, 0, 0, 1]
|
226 |
+
]).to(abs_c2ws.device).to(abs_c2ws.dtype)
|
227 |
+
abs2rel = target_cam_c2w @ abs_w2cs[zero_first_frame_scale]
|
228 |
+
ret_poses = [abs2rel @ abs_c2w for abs_c2w in abs_c2ws]
|
229 |
+
ret_poses = torch.stack(ret_poses)
|
230 |
+
return ret_poses
|
231 |
+
|
232 |
+
def ray_condition(K, c2w, H, W, device):
|
233 |
+
# c2w: B, V, 4, 4
|
234 |
+
# K: B, V, 4
|
235 |
+
|
236 |
+
B = K.shape[0]
|
237 |
+
|
238 |
+
j, i = custom_meshgrid(
|
239 |
+
torch.linspace(0, H - 1, H, device=device, dtype=c2w.dtype),
|
240 |
+
torch.linspace(0, W - 1, W, device=device, dtype=c2w.dtype),
|
241 |
+
)
|
242 |
+
i = i.reshape([1, 1, H * W]).expand([B, 1, H * W]) + 0.5 # [B, HxW]
|
243 |
+
j = j.reshape([1, 1, H * W]).expand([B, 1, H * W]) + 0.5 # [B, HxW]
|
244 |
+
|
245 |
+
fx, fy, cx, cy = K.chunk(4, dim=-1) # B,V, 1
|
246 |
+
|
247 |
+
zs = torch.ones_like(i, device=device, dtype=c2w.dtype) # [B, HxW]
|
248 |
+
xs = -(i - cx) / fx * zs
|
249 |
+
ys = -(j - cy) / fy * zs
|
250 |
+
|
251 |
+
zs = zs.expand_as(ys)
|
252 |
+
|
253 |
+
directions = torch.stack((xs, ys, zs), dim=-1) # B, V, HW, 3
|
254 |
+
directions = directions / directions.norm(dim=-1, keepdim=True) # B, V, HW, 3
|
255 |
+
|
256 |
+
rays_d = directions @ c2w[..., :3, :3].transpose(-1, -2) # B, V, 3, HW
|
257 |
+
rays_o = c2w[..., :3, 3] # B, V, 3
|
258 |
+
rays_o = rays_o[:, :, None].expand_as(rays_d) # B, V, 3, HW
|
259 |
+
# c2w @ directions
|
260 |
+
rays_dxo = torch.linalg.cross(rays_o, rays_d)
|
261 |
+
plucker = torch.cat([rays_dxo, rays_d], dim=-1)
|
262 |
+
plucker = plucker.reshape(B, c2w.shape[1], H, W, 6) # B, V, H, W, 6
|
263 |
+
|
264 |
+
return plucker
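# Shape-level sketch of ray_condition (identity extrinsics; made-up intrinsics): every
# pixel receives a 6-D Plücker coordinate (moment, unit direction).
def _sketch_ray_condition(B=1, V=2, H=4, W=6):
    K = torch.tensor([[0.35 * W, 0.35 * H, 0.5 * W, 0.5 * H]]).expand(B, V, 4)  # fx, fy, cx, cy
    c2w = torch.eye(4).expand(B, V, 4, 4)
    plucker = ray_condition(K, c2w, H, W, device="cpu")
    dir_norm = plucker[..., 3:].norm(dim=-1)
    # plucker.shape == (B, V, H, W, 6) and the direction part is unit-norm
    return plucker.shape, torch.allclose(dir_norm, torch.ones_like(dir_norm))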
|
265 |
+
|
266 |
+
def random_transform(tensor):
|
267 |
+
"""
|
268 |
+
Apply the same random translation, rotation, and scaling to all frames in the batch.
|
269 |
+
|
270 |
+
Args:
|
271 |
+
tensor (torch.Tensor): Input tensor of shape (F, B, 3, H, W).
|
272 |
+
|
273 |
+
Returns:
|
274 |
+
torch.Tensor: Transformed tensor of shape (F, B, 3, H, W).
|
275 |
+
"""
|
276 |
+
if tensor.ndim != 5:
|
277 |
+
raise ValueError("Input tensor must have shape (F, B, 3, H, W)")
|
278 |
+
|
279 |
+
F, B, C, H, W = tensor.shape
|
280 |
+
|
281 |
+
# Generate random transformation parameters
|
282 |
+
max_translate = 0.2 # Translate up to 20% of width/height
|
283 |
+
max_rotate = 30 # Rotate up to 30 degrees
|
284 |
+
max_scale = 0.2 # Scale change by up to +/- 20%
|
285 |
+
|
286 |
+
translate_x = random.uniform(-max_translate, max_translate) * W
|
287 |
+
translate_y = random.uniform(-max_translate, max_translate) * H
|
288 |
+
rotate_angle = random.uniform(-max_rotate, max_rotate)
|
289 |
+
scale_factor = 1 + random.uniform(-max_scale, max_scale)
|
290 |
+
|
291 |
+
# Apply the same transformation to all frames and batches
|
292 |
+
|
293 |
+
tensor = tensor.reshape(F*B, C, H, W)
|
294 |
+
transformed_tensor = TF.affine(
|
295 |
+
tensor,
|
296 |
+
angle=rotate_angle,
|
297 |
+
translate=(translate_x, translate_y),
|
298 |
+
scale=scale_factor,
|
299 |
+
shear=(0, 0),
|
300 |
+
interpolation=InterpolationMode.BILINEAR,
|
301 |
+
fill=0
|
302 |
+
)
|
303 |
+
|
304 |
+
transformed_tensor = transformed_tensor.reshape(F, B, C, H, W)
|
305 |
+
return transformed_tensor
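# Illustrative shape check: random_transform applies one shared affine jitter to every
# frame and batch element, so the tensor shape is preserved.
def _sketch_random_transform():
    frames = torch.rand(4, 2, 3, 64, 64)   # (F, B, 3, H, W)
    return random_transform(frames).shape  # torch.Size([4, 2, 3, 64, 64])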
|
306 |
+
|
307 |
+
def save_tensor_as_png(tensor, file_path):
|
308 |
+
"""
|
309 |
+
Save a 3*H*W tensor as a PNG image.
|
310 |
+
|
311 |
+
Args:
|
312 |
+
tensor (torch.Tensor): Input tensor of shape (3, H, W).
|
313 |
+
file_path (str): Path to save the PNG file.
|
314 |
+
"""
|
315 |
+
if tensor.ndim != 3 or tensor.shape[0] != 3:
|
316 |
+
raise ValueError("Input tensor must have shape (3, H, W)")
|
317 |
+
|
318 |
+
# Convert tensor to PIL Image
|
319 |
+
image = TF.to_pil_image(tensor)
|
320 |
+
|
321 |
+
# Save image
|
322 |
+
image.save(file_path)
|
323 |
+
|
324 |
+
class WorldMemMinecraft(DiffusionForcingBase):
|
325 |
+
"""
|
326 |
+
Video generation for Minecraft with memory.
|
327 |
+
"""
|
328 |
+
|
329 |
+
def __init__(self, cfg: DictConfig):
|
330 |
+
"""
|
331 |
+
Initialize the WorldMemMinecraft class with the given configuration.
|
332 |
+
|
333 |
+
Args:
|
334 |
+
cfg (DictConfig): Configuration object.
|
335 |
+
"""
|
336 |
+
# self.metrics = cfg.metrics
|
337 |
+
self.n_tokens = cfg.n_frames // cfg.frame_stack # number of max tokens for the model
|
338 |
+
self.n_frames = cfg.n_frames
|
339 |
+
if hasattr(cfg, "n_tokens"):
|
340 |
+
self.n_tokens = cfg.n_tokens // cfg.frame_stack
|
341 |
+
self.condition_similar_length = cfg.condition_similar_length
|
342 |
+
self.pose_cond_dim = cfg.pose_cond_dim
|
343 |
+
|
344 |
+
self.use_plucker = cfg.use_plucker
|
345 |
+
self.relative_embedding = cfg.relative_embedding
|
346 |
+
self.cond_only_on_qk = cfg.cond_only_on_qk
|
347 |
+
self.use_reference_attention = cfg.use_reference_attention
|
348 |
+
self.add_frame_timestep_embedder = cfg.add_frame_timestep_embedder
|
349 |
+
self.ref_mode = getattr(cfg, "ref_mode", 'sequential')
|
350 |
+
self.log_curve = getattr(cfg, "log_curve", False)
|
351 |
+
self.focal_length = cfg.focal_length
|
352 |
+
self.log_video = cfg.log_video
|
353 |
+
self.self_consistency_eval = getattr(cfg, "self_consistency_eval", False)
|
354 |
+
|
355 |
+
self.is_interactive = cfg.get("is_interactive", False)
|
356 |
+
if self.is_interactive:
|
357 |
+
self.frames = None
|
358 |
+
self.poses = None
|
359 |
+
self.memory_c2w = None
|
360 |
+
self.frame_idx = None
|
361 |
+
|
362 |
+
super().__init__(cfg)
|
363 |
+
|
364 |
+
def _build_model(self):
|
365 |
+
|
366 |
+
self.diffusion_model = Diffusion(
|
367 |
+
reference_length=self.condition_similar_length,
|
368 |
+
x_shape=self.x_stacked_shape,
|
369 |
+
action_cond_dim=self.action_cond_dim,
|
370 |
+
pose_cond_dim=self.pose_cond_dim,
|
371 |
+
is_causal=self.causal,
|
372 |
+
cfg=self.cfg.diffusion,
|
373 |
+
is_dit=True,
|
374 |
+
use_plucker=self.use_plucker,
|
375 |
+
relative_embedding=self.relative_embedding,
|
376 |
+
cond_only_on_qk=self.cond_only_on_qk,
|
377 |
+
use_reference_attention=self.use_reference_attention,
|
378 |
+
add_frame_timestep_embedder=self.add_frame_timestep_embedder,
|
379 |
+
ref_mode=self.ref_mode
|
380 |
+
)
|
381 |
+
|
382 |
+
self.register_data_mean_std(self.cfg.data_mean, self.cfg.data_std)
|
383 |
+
self.validation_lpips_model = LearnedPerceptualImagePatchSimilarity()
|
384 |
+
|
385 |
+
vae = VAE_models["vit-l-20-shallow-encoder"]()
|
386 |
+
self.vae = vae.eval()
|
387 |
+
|
388 |
+
self.pose_prediction_model = PosePredictionNet()
|
389 |
+
|
390 |
+
def _generate_noise_levels(self, xs: torch.Tensor, masks = None) -> torch.Tensor:
|
391 |
+
"""
|
392 |
+
Generate noise levels for training.
|
393 |
+
"""
|
394 |
+
num_frames, batch_size, *_ = xs.shape
|
395 |
+
match self.cfg.noise_level:
|
396 |
+
case "random_all": # entirely random noise levels
|
397 |
+
noise_levels = torch.randint(0, self.timesteps, (num_frames, batch_size), device=xs.device)
|
398 |
+
case "same":
|
399 |
+
noise_levels = torch.randint(0, self.timesteps, (num_frames, batch_size), device=xs.device)
|
400 |
+
noise_levels[1:] = noise_levels[0]
|
401 |
+
|
402 |
+
if masks is not None:
|
403 |
+
# for frames that are not available, treat as full noise
|
404 |
+
discard = torch.all(~rearrange(masks.bool(), "(t fs) b -> t b fs", fs=self.frame_stack), -1)
|
405 |
+
noise_levels = torch.where(discard, torch.full_like(noise_levels, self.timesteps - 1), noise_levels)
|
406 |
+
|
407 |
+
return noise_levels
|
408 |
+
|
409 |
+
def training_step(self, batch, batch_idx) -> STEP_OUTPUT:
|
410 |
+
"""
|
411 |
+
Perform a single training step.
|
412 |
+
|
413 |
+
This function processes the input batch,
|
414 |
+
encodes the input frames, generates noise levels, and computes the loss using the diffusion model.
|
415 |
+
|
416 |
+
Args:
|
417 |
+
batch: Input batch of data containing frames, conditions, poses, etc.
|
418 |
+
batch_idx: Index of the current batch.
|
419 |
+
|
420 |
+
Returns:
|
421 |
+
dict: A dictionary containing the training loss.
|
422 |
+
"""
|
423 |
+
xs, conditions, pose_conditions, c2w_mat, frame_idx = self._preprocess_batch(batch)
|
424 |
+
|
425 |
+
if self.use_plucker:
|
426 |
+
if self.relative_embedding:
|
427 |
+
input_pose_condition = []
|
428 |
+
frame_idx_list = []
|
429 |
+
for i in range(self.n_frames):
|
430 |
+
input_pose_condition.append(
|
431 |
+
convert_to_plucker(
|
432 |
+
torch.cat([c2w_mat[i:i + 1], c2w_mat[-self.condition_similar_length:]]).clone(),
|
433 |
+
0,
|
434 |
+
focal_length=self.focal_length,
|
435 |
+
image_height=xs.shape[-2],image_width=xs.shape[-1]
|
436 |
+
).to(xs.dtype)
|
437 |
+
)
|
438 |
+
frame_idx_list.append(
|
439 |
+
torch.cat([
|
440 |
+
frame_idx[i:i + 1] - frame_idx[i:i + 1],
|
441 |
+
frame_idx[-self.condition_similar_length:] - frame_idx[i:i + 1]
|
442 |
+
]).clone()
|
443 |
+
)
|
444 |
+
input_pose_condition = torch.cat(input_pose_condition)
|
445 |
+
frame_idx_list = torch.cat(frame_idx_list)
|
446 |
+
else:
|
447 |
+
input_pose_condition = convert_to_plucker(
|
448 |
+
c2w_mat, 0, focal_length=self.focal_length
|
449 |
+
).to(xs.dtype)
|
450 |
+
frame_idx_list = frame_idx
|
451 |
+
else:
|
452 |
+
input_pose_condition = pose_conditions.to(xs.dtype)
|
453 |
+
frame_idx_list = None
|
454 |
+
|
455 |
+
xs = self.encode(xs)
|
456 |
+
|
457 |
+
noise_levels = self._generate_noise_levels(xs)
|
458 |
+
|
459 |
+
if self.condition_similar_length:
|
460 |
+
noise_levels[-self.condition_similar_length:] = self.diffusion_model.stabilization_level
|
461 |
+
conditions[-self.condition_similar_length:] *= 0
|
462 |
+
|
463 |
+
_, loss = self.diffusion_model(
|
464 |
+
xs,
|
465 |
+
conditions,
|
466 |
+
input_pose_condition,
|
467 |
+
noise_levels=noise_levels,
|
468 |
+
reference_length=self.condition_similar_length,
|
469 |
+
frame_idx=frame_idx_list
|
470 |
+
)
|
471 |
+
|
472 |
+
if self.condition_similar_length:
|
473 |
+
loss = loss[:-self.condition_similar_length]
|
474 |
+
|
475 |
+
loss = self.reweight_loss(loss, None)
|
476 |
+
|
477 |
+
if batch_idx % 20 == 0:
|
478 |
+
self.log("training/loss", loss.cpu())
|
479 |
+
|
480 |
+
return {"loss": loss}
|
481 |
+
|
482 |
+
|
483 |
+
def on_validation_epoch_end(self, namespace="validation") -> None:
|
484 |
+
if not self.validation_step_outputs:
|
485 |
+
return
|
486 |
+
|
487 |
+
xs_pred = []
|
488 |
+
xs = []
|
489 |
+
for pred, gt in self.validation_step_outputs:
|
490 |
+
xs_pred.append(pred)
|
491 |
+
xs.append(gt)
|
492 |
+
|
493 |
+
xs_pred = torch.cat(xs_pred, 1)
|
494 |
+
if gt is not None:
|
495 |
+
xs = torch.cat(xs, 1)
|
496 |
+
else:
|
497 |
+
xs = None
|
498 |
+
|
499 |
+
if self.logger and self.log_video:
|
500 |
+
log_video(
|
501 |
+
xs_pred,
|
502 |
+
xs,
|
503 |
+
step=None if namespace == "test" else self.global_step,
|
504 |
+
namespace=namespace + "_vis",
|
505 |
+
context_frames=self.context_frames,
|
506 |
+
logger=self.logger.experiment,
|
507 |
+
)
|
508 |
+
|
509 |
+
if xs is not None:
|
510 |
+
metric_dict = get_validation_metrics_for_videos(
|
511 |
+
xs_pred, xs,
|
512 |
+
lpips_model=self.validation_lpips_model)
|
513 |
+
|
514 |
+
self.log_dict(
|
515 |
+
{"mse": metric_dict['mse'],
|
516 |
+
"psnr": metric_dict['psnr'],
|
517 |
+
"lpips": metric_dict['lpips']},
|
518 |
+
sync_dist=True
|
519 |
+
)
|
520 |
+
|
521 |
+
if self.log_curve:
|
522 |
+
psnr_values = metric_dict['frame_wise_psnr'].cpu().tolist()
|
523 |
+
frames = list(range(len(psnr_values)))
|
524 |
+
line_plot = wandb.plot.line_series(
|
525 |
+
xs = frames,
|
526 |
+
ys = [psnr_values],
|
527 |
+
keys = ["PSNR"],
|
528 |
+
title = "Frame-wise PSNR",
|
529 |
+
xname = "Frame index"
|
530 |
+
)
|
531 |
+
|
532 |
+
self.logger.experiment.log({"frame_wise_psnr_plot": line_plot})
|
533 |
+
|
534 |
+
elif self.self_consistency_eval:
|
535 |
+
metric_dict = get_validation_metrics_for_videos(
|
536 |
+
xs_pred[:1],
|
537 |
+
xs_pred[-1:],
|
538 |
+
lpips_model=self.validation_lpips_model,
|
539 |
+
)
|
540 |
+
self.log_dict(
|
541 |
+
{"lpips": metric_dict['lpips'],
|
542 |
+
"mse": metric_dict['mse'],
|
543 |
+
"psnr": metric_dict['psnr']},
|
544 |
+
sync_dist=True
|
545 |
+
)
|
546 |
+
|
547 |
+
self.validation_step_outputs.clear()
|
548 |
+
|
549 |
+
def _preprocess_batch(self, batch):
|
550 |
+
|
551 |
+
xs, conditions, pose_conditions, frame_index = batch
|
552 |
+
|
553 |
+
if self.action_cond_dim:
|
554 |
+
conditions = torch.cat([torch.zeros_like(conditions[:, :1]), conditions[:, 1:]], 1)
|
555 |
+
conditions = rearrange(conditions, "b t d -> t b d").contiguous()
|
556 |
+
else:
|
557 |
+
raise NotImplementedError("Only support external cond.")
|
558 |
+
|
559 |
+
pose_conditions = rearrange(pose_conditions, "b t d -> t b d").contiguous()
|
560 |
+
c2w_mat = euler_to_camera_to_world_matrix(pose_conditions)
|
561 |
+
xs = rearrange(xs, "b t c ... -> t b c ...").contiguous()
|
562 |
+
frame_index = rearrange(frame_index, "b t -> t b").contiguous()
|
563 |
+
|
564 |
+
return xs, conditions, pose_conditions, c2w_mat, frame_index
|
565 |
+
|
566 |
+
def encode(self, x):
|
567 |
+
# vae encoding
|
568 |
+
T = x.shape[0]
|
569 |
+
H, W = x.shape[-2:]
|
570 |
+
scaling_factor = 0.07843137255
|
571 |
+
|
572 |
+
x = rearrange(x, "t b c h w -> (t b) c h w")
|
573 |
+
with torch.no_grad():
|
574 |
+
x = self.vae.encode(x * 2 - 1).mean * scaling_factor
|
575 |
+
x = rearrange(x, "(t b) (h w) c -> t b c h w", t=T, h=H // self.vae.patch_size, w=W // self.vae.patch_size)
|
576 |
+
return x
|
577 |
+
|
578 |
+
def decode(self, x):
|
579 |
+
total_frames = x.shape[0]
|
580 |
+
scaling_factor = 0.07843137255
|
581 |
+
x = rearrange(x, "t b c h w -> (t b) (h w) c")
|
582 |
+
with torch.no_grad():
|
583 |
+
x = (self.vae.decode(x / scaling_factor) + 1) / 2
|
584 |
+
x = rearrange(x, "(t b) c h w-> t b c h w", t=total_frames)
|
585 |
+
return x
|
586 |
+
|
587 |
+
def _generate_condition_indices(self, curr_frame, condition_similar_length, xs_pred, pose_conditions, frame_idx):
|
588 |
+
"""
|
589 |
+
Generate indices for condition similarity based on the current frame and pose conditions.
|
590 |
+
"""
|
591 |
+
if curr_frame < condition_similar_length:
|
592 |
+
random_idx = [i for i in range(curr_frame)] + [0] * (condition_similar_length - curr_frame)
|
593 |
+
random_idx = np.repeat(np.array(random_idx)[:, None], xs_pred.shape[1], -1)
|
594 |
+
else:
|
595 |
+
# Generate points in a sphere and filter based on field of view
|
596 |
+
num_samples = 10000
|
597 |
+
radius = 30
|
598 |
+
points = generate_points_in_sphere(num_samples, radius).to(pose_conditions.device)
|
599 |
+
points = points[:, None].repeat(1, pose_conditions.shape[1], 1)
|
600 |
+
points += pose_conditions[curr_frame, :, :3][None]
|
601 |
+
fov_half_h = torch.tensor(105 / 2, device=pose_conditions.device)
|
602 |
+
fov_half_v = torch.tensor(75 / 2, device=pose_conditions.device)
|
603 |
+
in_fov1 = is_inside_fov_3d_hv(
|
604 |
+
points, pose_conditions[curr_frame, :, :3],
|
605 |
+
pose_conditions[curr_frame, :, -2], pose_conditions[curr_frame, :, -1],
|
606 |
+
fov_half_h, fov_half_v
|
607 |
+
)
|
608 |
+
|
609 |
+
# Compute overlap ratios and select indices
|
610 |
+
in_fov_list = torch.stack([
|
611 |
+
is_inside_fov_3d_hv(points, pc[:, :3], pc[:, -2], pc[:, -1], fov_half_h, fov_half_v)
|
612 |
+
for pc in pose_conditions[:curr_frame]
|
613 |
+
])
|
614 |
+
random_idx = []
|
615 |
+
for _ in range(condition_similar_length):
|
616 |
+
overlap_ratio = ((in_fov1.bool() & in_fov_list).sum(1)) / in_fov1.sum()
|
617 |
+
|
618 |
+
# if curr_frame == 54:
|
619 |
+
# import pdb;pdb.set_trace()
|
620 |
+
confidence = overlap_ratio + (curr_frame - frame_idx[:curr_frame]) / curr_frame * (-0.2)
|
621 |
+
|
622 |
+
if len(random_idx) > 0:
|
623 |
+
confidence[torch.cat(random_idx)] = -1e10
|
624 |
+
_, r_idx = torch.topk(confidence, k=1, dim=0)
|
625 |
+
random_idx.append(r_idx[0])
|
626 |
+
|
627 |
+
occupied_mask = in_fov_list[r_idx[0, range(in_fov1.shape[-1])], :, range(in_fov1.shape[-1])].permute(1,0)
|
628 |
+
|
629 |
+
in_fov1 = in_fov1 & ~occupied_mask
|
630 |
+
|
631 |
+
# cos_sim = F.cosine_similarity(xs_pred.to(r_idx.device)[r_idx[:, range(in_fov1.shape[1])],
|
632 |
+
# range(in_fov1.shape[1])], xs_pred.to(r_idx.device)[:curr_frame], dim=2)
|
633 |
+
# cos_sim = cos_sim.mean((-2,-1))
|
634 |
+
|
635 |
+
# mask_sim = cos_sim>0.9
|
636 |
+
# in_fov_list = in_fov_list & ~mask_sim[:,None].to(in_fov_list.device)
|
637 |
+
|
638 |
+
random_idx = torch.stack(random_idx).cpu()
|
639 |
+
|
640 |
+
print(random_idx)
|
641 |
+
|
642 |
+
return random_idx
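# Toy standalone sketch of the greedy scoring rule above (made-up overlap values; the
# FOV-occupancy masking between picks is omitted for brevity): higher FOV overlap wins,
# with a small penalty on older frames.
def _sketch_reference_frame_selection(curr_frame=10, k=2):
    overlap_ratio = torch.tensor([0.90, 0.10, 0.60, 0.85, 0.20])  # overlap with the current FOV
    frame_idx = torch.arange(5.0)                                 # generation time of each frame
    confidence = overlap_ratio + (curr_frame - frame_idx) / curr_frame * (-0.2)
    chosen = []
    for _ in range(k):
        if chosen:
            confidence[torch.tensor(chosen)] = -1e10  # never pick the same frame twice
        chosen.append(int(torch.argmax(confidence)))
    return chosen  # [3, 0]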
|
643 |
+
|
644 |
+
def _prepare_conditions(self,
|
645 |
+
start_frame, curr_frame, horizon, conditions,
|
646 |
+
pose_conditions, c2w_mat, frame_idx, random_idx,
|
647 |
+
image_width, image_height):
|
648 |
+
"""
|
649 |
+
Prepare input conditions and pose conditions for sampling.
|
650 |
+
"""
|
651 |
+
|
652 |
+
padding = torch.zeros((len(random_idx),) + conditions.shape[1:], device=conditions.device, dtype=conditions.dtype)
|
653 |
+
input_condition = torch.cat([conditions[start_frame:curr_frame + horizon], padding], dim=0)
|
654 |
+
|
655 |
+
batch_size = conditions.shape[1]
|
656 |
+
|
657 |
+
if self.use_plucker:
|
658 |
+
if self.relative_embedding:
|
659 |
+
frame_idx_list = []
|
660 |
+
input_pose_condition = []
|
661 |
+
for i in range(start_frame, curr_frame + horizon):
|
662 |
+
input_pose_condition.append(convert_to_plucker(torch.cat([c2w_mat[i:i+1],c2w_mat[random_idx[:,range(batch_size)], range(batch_size)]]).clone(), 0, focal_length=self.focal_length,
|
663 |
+
image_width=image_width, image_height=image_height).to(conditions.dtype))
|
664 |
+
frame_idx_list.append(torch.cat([frame_idx[i:i+1]-frame_idx[i:i+1], frame_idx[random_idx[:,range(batch_size)], range(batch_size)]-frame_idx[i:i+1]]))
|
665 |
+
input_pose_condition = torch.cat(input_pose_condition)
|
666 |
+
frame_idx_list = torch.cat(frame_idx_list)
|
667 |
+
|
668 |
+
else:
|
669 |
+
input_pose_condition = torch.cat([c2w_mat[start_frame : curr_frame + horizon], c2w_mat[random_idx[:,range(batch_size)], range(batch_size)]], dim=0).clone()
|
670 |
+
input_pose_condition = convert_to_plucker(input_pose_condition, 0, focal_length=self.focal_length)
|
671 |
+
frame_idx_list = None
|
672 |
+
else:
|
673 |
+
input_pose_condition = torch.cat([pose_conditions[start_frame : curr_frame + horizon], pose_conditions[random_idx[:,range(batch_size)], range(batch_size)]], dim=0).clone()
|
674 |
+
frame_idx_list = None
|
675 |
+
|
676 |
+
return input_condition, input_pose_condition, frame_idx_list
|
677 |
+
|
678 |
+
def _prepare_noise_levels(self, scheduling_matrix, m, curr_frame, batch_size, condition_similar_length):
|
679 |
+
"""
|
680 |
+
Prepare noise levels for the current sampling step.
|
681 |
+
"""
|
682 |
+
from_noise_levels = np.concatenate((np.zeros((curr_frame,), dtype=np.int64), scheduling_matrix[m]))[:, None].repeat(batch_size, axis=1)
|
683 |
+
to_noise_levels = np.concatenate((np.zeros((curr_frame,), dtype=np.int64), scheduling_matrix[m + 1]))[:, None].repeat(batch_size, axis=1)
|
684 |
+
if condition_similar_length:
|
685 |
+
from_noise_levels = np.concatenate([from_noise_levels, np.zeros((condition_similar_length, from_noise_levels.shape[-1]), dtype=np.int32)], axis=0)
|
686 |
+
to_noise_levels = np.concatenate([to_noise_levels, np.zeros((condition_similar_length, from_noise_levels.shape[-1]), dtype=np.int32)], axis=0)
|
687 |
+
from_noise_levels = torch.from_numpy(from_noise_levels).to(self.device)
|
688 |
+
to_noise_levels = torch.from_numpy(to_noise_levels).to(self.device)
|
689 |
+
return from_noise_levels, to_noise_levels
|
690 |
+
|
691 |
+
def validation_step(self, batch, batch_idx, namespace="validation") -> STEP_OUTPUT:
|
692 |
+
"""
|
693 |
+
Perform a single validation step.
|
694 |
+
|
695 |
+
This function processes the input batch, encodes frames, generates predictions using a sliding window approach,
|
696 |
+
and handles condition similarity logic for sampling. The results are decoded and stored for evaluation.
|
697 |
+
|
698 |
+
Args:
|
699 |
+
batch: Input batch of data containing frames, conditions, poses, etc.
|
700 |
+
batch_idx: Index of the current batch.
|
701 |
+
namespace: Namespace for logging (default: "validation").
|
702 |
+
|
703 |
+
Returns:
|
704 |
+
None: Appends the predicted and ground truth frames to `self.validation_step_outputs`.
|
705 |
+
"""
|
706 |
+
# Preprocess the input batch
|
707 |
+
condition_similar_length = self.condition_similar_length
|
708 |
+
xs_raw, conditions, pose_conditions, c2w_mat, frame_idx = self._preprocess_batch(batch)
|
709 |
+
|
710 |
+
# Encode frames in chunks if necessary
|
711 |
+
total_frame = xs_raw.shape[0]
|
712 |
+
if total_frame > 10:
|
713 |
+
xs = torch.cat([
|
714 |
+
self.encode(xs_raw[int(total_frame * i / 10):int(total_frame * (i + 1) / 10)]).cpu()
|
715 |
+
for i in range(10)
|
716 |
+
])
|
717 |
+
else:
|
718 |
+
xs = self.encode(xs_raw).cpu()
|
719 |
+
|
720 |
+
n_frames, batch_size, *_ = xs.shape
|
721 |
+
curr_frame = 0
|
722 |
+
|
723 |
+
# Initialize context frames
|
724 |
+
n_context_frames = self.context_frames // self.frame_stack
|
725 |
+
xs_pred = xs[:n_context_frames].clone()
|
726 |
+
curr_frame += n_context_frames
|
727 |
+
|
728 |
+
# Progress bar for sampling
|
729 |
+
pbar = tqdm(total=n_frames, initial=curr_frame, desc="Sampling")
|
730 |
+
|
731 |
+
while curr_frame < n_frames:
|
732 |
+
# Determine the horizon for the current chunk
|
733 |
+
horizon = min(n_frames - curr_frame, self.chunk_size) if self.chunk_size > 0 else n_frames - curr_frame
|
734 |
+
assert horizon <= self.n_tokens, "Horizon exceeds the number of tokens."
|
735 |
+
|
736 |
+
# Generate scheduling matrix and initialize noise
|
737 |
+
scheduling_matrix = self._generate_scheduling_matrix(horizon)
|
738 |
+
chunk = torch.randn((horizon, batch_size, *xs_pred.shape[2:]))
|
739 |
+
chunk = torch.clamp(chunk, -self.clip_noise, self.clip_noise).to(xs_pred.device)
|
740 |
+
xs_pred = torch.cat([xs_pred, chunk], 0)
|
741 |
+
|
742 |
+
# Sliding window: only input the last `n_tokens` frames
|
743 |
+
start_frame = max(0, curr_frame + horizon - self.n_tokens)
|
744 |
+
pbar.set_postfix({"start": start_frame, "end": curr_frame + horizon})
|
745 |
+
|
746 |
+
# Handle condition similarity logic
|
747 |
+
if condition_similar_length:
|
748 |
+
random_idx = self._generate_condition_indices(
|
749 |
+
curr_frame, condition_similar_length, xs_pred, pose_conditions, frame_idx
|
750 |
+
)
|
751 |
+
|
752 |
+
xs_pred = torch.cat([xs_pred, xs_pred[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])].clone()], 0)
|
753 |
+
|
754 |
+
# Prepare input conditions and pose conditions
|
755 |
+
input_condition, input_pose_condition, frame_idx_list = self._prepare_conditions(
|
756 |
+
start_frame, curr_frame, horizon, conditions, pose_conditions, c2w_mat, frame_idx, random_idx,
|
757 |
+
image_width=xs_raw.shape[-1], image_height=xs_raw.shape[-2]
|
758 |
+
)
|
759 |
+
|
760 |
+
# Perform sampling for each step in the scheduling matrix
|
761 |
+
for m in range(scheduling_matrix.shape[0] - 1):
|
762 |
+
from_noise_levels, to_noise_levels = self._prepare_noise_levels(
|
763 |
+
scheduling_matrix, m, curr_frame, batch_size, condition_similar_length
|
764 |
+
)
|
765 |
+
|
766 |
+
xs_pred[start_frame:] = self.diffusion_model.sample_step(
|
767 |
+
xs_pred[start_frame:].to(input_condition.device),
|
768 |
+
input_condition,
|
769 |
+
input_pose_condition,
|
770 |
+
from_noise_levels[start_frame:],
|
771 |
+
to_noise_levels[start_frame:],
|
772 |
+
current_frame=curr_frame,
|
773 |
+
mode="validation",
|
774 |
+
reference_length=condition_similar_length,
|
775 |
+
frame_idx=frame_idx_list
|
776 |
+
).cpu()
|
777 |
+
|
778 |
+
# Remove condition similarity frames if applicable
|
779 |
+
if condition_similar_length:
|
780 |
+
xs_pred = xs_pred[:-condition_similar_length]
|
781 |
+
|
782 |
+
curr_frame += horizon
|
783 |
+
pbar.update(horizon)
|
784 |
+
|
785 |
+
# Decode predictions and ground truth
|
786 |
+
xs_pred = self.decode(xs_pred[n_context_frames:].to(conditions.device))
|
787 |
+
xs_decode = self.decode(xs[n_context_frames:].to(conditions.device))
|
788 |
+
|
789 |
+
# Store results for evaluation
|
790 |
+
self.validation_step_outputs.append((xs_pred, xs_decode))
|
791 |
+
return
|
792 |
+
|
793 |
+
@torch.no_grad()
|
794 |
+
def interactive(self, first_frame, curr_actions, first_pose, context_frames_idx, device):
|
795 |
+
condition_similar_length = self.condition_similar_length
|
796 |
+
|
797 |
+
if self.frames is None:
|
798 |
+
first_frame_encode = self.encode(first_frame[None, None].to(device))
|
799 |
+
self.frames = first_frame_encode.cpu()
|
800 |
+
self.actions = curr_actions[None, None].to(device)
|
801 |
+
self.poses = first_pose[None, None].to(device)
|
802 |
+
new_c2w_mat = euler_to_camera_to_world_matrix(first_pose)
|
803 |
+
self.memory_c2w = new_c2w_mat[None, None].to(device)
|
804 |
+
self.frame_idx = torch.tensor([[context_frames_idx]]).to(device)
|
805 |
+
return first_frame
|
806 |
+
else:
|
807 |
+
last_frame = self.frames[-1].clone()
|
808 |
+
last_pose_condition = self.poses[-1].clone()
|
809 |
+
last_pose_condition[:,3:] = last_pose_condition[:,3:] // 15
|
810 |
+
new_pose_condition_offset = self.pose_prediction_model(last_frame.to(device), curr_actions[None].to(device), last_pose_condition)
|
811 |
+
|
812 |
+
new_pose_condition_offset[:,3:] = torch.round(new_pose_condition_offset[:,3:])
|
813 |
+
new_pose_condition = last_pose_condition + new_pose_condition_offset
|
814 |
+
new_pose_condition[:,3:] = new_pose_condition[:,3:] * 15
|
815 |
+
new_pose_condition[:,3:] %= 360
|
816 |
+
print(new_pose_condition)
|
817 |
+
self.actions = torch.cat([self.actions, curr_actions[None, None].to(device)])
|
818 |
+
self.poses = torch.cat([self.poses, new_pose_condition[None].to(device)])
|
819 |
+
new_c2w_mat = euler_to_camera_to_world_matrix(new_pose_condition)
|
820 |
+
self.memory_c2w = torch.cat([self.memory_c2w, new_c2w_mat[None].to(device)])
|
821 |
+
self.frame_idx = torch.cat([self.frame_idx, torch.tensor([[context_frames_idx]]).to(device)])
|
822 |
+
|
823 |
+
conditions = self.actions.clone()
|
824 |
+
pose_conditions = self.poses.clone()
|
825 |
+
c2w_mat = self.memory_c2w.clone()
|
826 |
+
frame_idx = self.frame_idx.clone()
|
827 |
+
|
828 |
+
|
829 |
+
curr_frame = 0
|
830 |
+
horizon = 1
|
831 |
+
batch_size = 1
|
832 |
+
n_frames = curr_frame + horizon
|
833 |
+
# context
|
834 |
+
n_context_frames = context_frames_idx // self.frame_stack
|
835 |
+
xs_pred = self.frames[:n_context_frames].clone()
|
836 |
+
curr_frame += n_context_frames
|
837 |
+
|
838 |
+
pbar = tqdm(total=n_frames, initial=curr_frame, desc="Sampling")
|
839 |
+
|
840 |
+
# generation on frame
|
841 |
+
scheduling_matrix = self._generate_scheduling_matrix(horizon)
|
842 |
+
chunk = torch.randn((horizon, batch_size, *xs_pred.shape[2:])).to(xs_pred.device)
|
843 |
+
chunk = torch.clamp(chunk, -self.clip_noise, self.clip_noise)
|
844 |
+
|
845 |
+
xs_pred = torch.cat([xs_pred, chunk], 0)
|
846 |
+
|
847 |
+
# sliding window: only input the last n_tokens frames
|
848 |
+
start_frame = max(0, curr_frame + horizon - self.n_tokens)
|
849 |
+
|
850 |
+
pbar.set_postfix(
|
851 |
+
{
|
852 |
+
"start": start_frame,
|
853 |
+
"end": curr_frame + horizon,
|
854 |
+
}
|
855 |
+
)
|
856 |
+
|
857 |
+
# Handle condition similarity logic
|
858 |
+
if condition_similar_length:
|
859 |
+
random_idx = self._generate_condition_indices(
|
860 |
+
curr_frame, condition_similar_length, xs_pred, pose_conditions, frame_idx
|
861 |
+
)
|
862 |
+
|
863 |
+
# random_idx = np.unique(random_idx)[:, None]
|
864 |
+
# condition_similar_length = len(random_idx)
|
865 |
+
xs_pred = torch.cat([xs_pred, xs_pred[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])].clone()], 0)
|
866 |
+
|
867 |
+
# Prepare input conditions and pose conditions
|
868 |
+
input_condition, input_pose_condition, frame_idx_list = self._prepare_conditions(
|
869 |
+
start_frame, curr_frame, horizon, conditions, pose_conditions, c2w_mat, frame_idx, random_idx,
|
870 |
+
image_width=first_frame.shape[-1], image_height=first_frame.shape[-2]
|
871 |
+
)
|
872 |
+
|
873 |
+
# Perform sampling for each step in the scheduling matrix
|
874 |
+
for m in range(scheduling_matrix.shape[0] - 1):
|
875 |
+
from_noise_levels, to_noise_levels = self._prepare_noise_levels(
|
876 |
+
scheduling_matrix, m, curr_frame, batch_size, condition_similar_length
|
877 |
+
)
|
878 |
+
|
879 |
+
xs_pred[start_frame:] = self.diffusion_model.sample_step(
|
880 |
+
xs_pred[start_frame:].to(input_condition.device),
|
881 |
+
input_condition,
|
882 |
+
input_pose_condition,
|
883 |
+
from_noise_levels[start_frame:],
|
884 |
+
to_noise_levels[start_frame:],
|
885 |
+
current_frame=curr_frame,
|
886 |
+
mode="validation",
|
887 |
+
reference_length=condition_similar_length,
|
888 |
+
frame_idx=frame_idx_list
|
889 |
+
).cpu()
|
890 |
+
|
891 |
+
|
892 |
+
if condition_similar_length:
|
893 |
+
xs_pred = xs_pred[:-condition_similar_length]
|
894 |
+
|
895 |
+
curr_frame += horizon
|
896 |
+
pbar.update(horizon)
|
897 |
+
|
898 |
+
self.frames = torch.cat([self.frames, xs_pred[n_context_frames:]])
|
899 |
+
|
900 |
+
xs_pred = self.decode(xs_pred[n_context_frames:].to(device)).cpu()
|
901 |
+
return xs_pred[-1,0]
|
902 |
+
|
903 |
+
|
904 |
+
def reset(self):
|
905 |
+
self.frames = None
|
906 |
+
self.poses = None
|
907 |
+
self.memory_c2w = None
|
908 |
+
self.frame_idx = None
|
algorithms/worldmem/models/__pycache__/attention.cpython-310.pyc
ADDED
Binary file (8.03 kB).
algorithms/worldmem/models/__pycache__/cameractrl_module.cpython-310.pyc
ADDED
Binary file (846 Bytes).
algorithms/worldmem/models/__pycache__/diffusion.cpython-310.pyc
ADDED
Binary file (11 kB).
algorithms/worldmem/models/__pycache__/dit.cpython-310.pyc
ADDED
Binary file (14.7 kB).
algorithms/worldmem/models/__pycache__/my_rotary_embedding_torch.cpython-310.pyc
ADDED
Binary file (7.94 kB).
algorithms/worldmem/models/__pycache__/pose_prediction.cpython-310.pyc
ADDED
Binary file (1.5 kB).
algorithms/worldmem/models/__pycache__/rotary_embedding_torch.cpython-310.pyc
ADDED
Binary file (7.94 kB).
algorithms/worldmem/models/__pycache__/utils.cpython-310.pyc
ADDED
Binary file (4.94 kB).
algorithms/worldmem/models/__pycache__/vae.cpython-310.pyc
ADDED
Binary file (8.7 kB).
algorithms/worldmem/models/attention.py
ADDED
@@ -0,0 +1,351 @@
1 |
+
"""
|
2 |
+
Based on https://github.com/buoyancy99/diffusion-forcing/blob/main/algorithms/diffusion_forcing/models/attention.py
|
3 |
+
"""
|
4 |
+
|
5 |
+
from typing import Optional
|
6 |
+
from collections import namedtuple
|
7 |
+
import torch
|
8 |
+
from torch import nn
|
9 |
+
from torch.nn import functional as F
|
10 |
+
from einops import rearrange
|
11 |
+
from .rotary_embedding_torch import RotaryEmbedding, apply_rotary_emb
|
12 |
+
import numpy as np
|
13 |
+
|
14 |
+
def create_attention_bias(f1, f2, device=None, dtype=torch.float32):
|
15 |
+
f = f1 + f2
|
16 |
+
mask = torch.zeros((f, f), dtype=dtype, device=device)
|
17 |
+
if f1 > 0:
|
18 |
+
mask[:f1, :f1] = float('-inf')
|
19 |
+
if f2 > 0:
|
20 |
+
mask[f1:, f1:] = float('-inf')
|
21 |
+
return mask
|
22 |
+
|
23 |
+
class TemporalAxialAttention(nn.Module):
|
24 |
+
def __init__(
|
25 |
+
self,
|
26 |
+
dim: int,
|
27 |
+
heads: int,
|
28 |
+
dim_head: int,
|
29 |
+
reference_length: int,
|
30 |
+
rotary_emb: RotaryEmbedding,
|
31 |
+
is_causal: bool = True,
|
32 |
+
is_temporal_independent: bool = False,
|
33 |
+
use_domain_adapter = False
|
34 |
+
):
|
35 |
+
super().__init__()
|
36 |
+
self.inner_dim = dim_head * heads
|
37 |
+
self.heads = heads
|
38 |
+
self.head_dim = dim_head
|
39 |
+
self.inner_dim = dim_head * heads
|
40 |
+
self.to_qkv = nn.Linear(dim, self.inner_dim * 3, bias=False)
|
41 |
+
|
42 |
+
self.use_domain_adapter = use_domain_adapter
|
43 |
+
if self.use_domain_adapter:
|
44 |
+
lora_rank = 8
|
45 |
+
self.lora_A = nn.Linear(dim, lora_rank, bias=False)
|
46 |
+
self.lora_B = nn.Linear(lora_rank, self.inner_dim * 3, bias=False)
|
47 |
+
|
48 |
+
self.to_out = nn.Linear(self.inner_dim, dim)
|
49 |
+
|
50 |
+
self.rotary_emb = rotary_emb
|
51 |
+
self.is_causal = is_causal
|
52 |
+
self.is_temporal_independent = is_temporal_independent
|
53 |
+
|
54 |
+
self.reference_length = reference_length
|
55 |
+
|
56 |
+
def forward(self, x: torch.Tensor):
|
57 |
+
B, T, H, W, D = x.shape
|
58 |
+
|
59 |
+
# if T>=9:
|
60 |
+
# try:
|
61 |
+
# # x = torch.cat([x[:,:-1],x[:,16-T:17-T],x[:,-1:]], dim=1)
|
62 |
+
# x = torch.cat([x[:,16-T:17-T],x], dim=1)
|
63 |
+
# except:
|
64 |
+
# import pdb;pdb.set_trace()
|
65 |
+
# print("="*50)
|
66 |
+
# print(x.shape)
|
67 |
+
|
68 |
+
B, T, H, W, D = x.shape
|
69 |
+
|
70 |
+
q, k, v = self.to_qkv(x).chunk(3, dim=-1)
|
71 |
+
|
72 |
+
if self.use_domain_adapter:
|
73 |
+
q_lora, k_lora, v_lora = self.lora_B(self.lora_A(x)).chunk(3, dim=-1)
|
74 |
+
q = q+q_lora
|
75 |
+
k = k+k_lora
|
76 |
+
v = v+v_lora
|
77 |
+
|
78 |
+
q = rearrange(q, "B T H W (h d) -> (B H W) h T d", h=self.heads)
|
79 |
+
k = rearrange(k, "B T H W (h d) -> (B H W) h T d", h=self.heads)
|
80 |
+
v = rearrange(v, "B T H W (h d) -> (B H W) h T d", h=self.heads)
|
81 |
+
|
82 |
+
q = self.rotary_emb.rotate_queries_or_keys(q, self.rotary_emb.freqs)
|
83 |
+
k = self.rotary_emb.rotate_queries_or_keys(k, self.rotary_emb.freqs)
|
84 |
+
|
85 |
+
q, k, v = map(lambda t: t.contiguous(), (q, k, v))
|
86 |
+
|
87 |
+
if self.is_temporal_independent:
|
88 |
+
attn_bias = torch.ones((T, T), dtype=q.dtype, device=q.device)
|
89 |
+
attn_bias = attn_bias.masked_fill(attn_bias == 1, float('-inf'))
|
90 |
+
attn_bias[range(T), range(T)] = 0
|
91 |
+
elif self.is_causal:
|
92 |
+
attn_bias = torch.triu(torch.ones((T, T), dtype=q.dtype, device=q.device), diagonal=1)
|
93 |
+
attn_bias = attn_bias.masked_fill(attn_bias == 1, float('-inf'))
|
94 |
+
attn_bias[(T-self.reference_length):] = float('-inf')
|
95 |
+
attn_bias[range(T), range(T)] = 0
|
96 |
+
else:
|
97 |
+
attn_bias = None
|
98 |
+
|
99 |
+
try:
|
100 |
+
x = F.scaled_dot_product_attention(query=q, key=k, value=v, attn_mask=attn_bias)
|
101 |
+
except:
|
102 |
+
import pdb;pdb.set_trace()
|
103 |
+
|
104 |
+
x = rearrange(x, "(B H W) h T d -> B T H W (h d)", B=B, H=H, W=W)
|
105 |
+
x = x.to(q.dtype)
|
106 |
+
|
107 |
+
# linear proj
|
108 |
+
x = self.to_out(x)
|
109 |
+
|
110 |
+
# if T>=10:
|
111 |
+
# try:
|
112 |
+
# # x = torch.cat([x[:,:-2],x[:,-1:]], dim=1)
|
113 |
+
# x = x[:,1:]
|
114 |
+
# except:
|
115 |
+
# import pdb;pdb.set_trace()
|
116 |
+
# print(x.shape)
|
117 |
+
return x
|
118 |
+
|
119 |
+
class SpatialAxialAttention(nn.Module):
|
120 |
+
def __init__(
|
121 |
+
self,
|
122 |
+
dim: int,
|
123 |
+
heads: int,
|
124 |
+
dim_head: int,
|
125 |
+
rotary_emb: RotaryEmbedding,
|
126 |
+
use_domain_adapter = False
|
127 |
+
):
|
128 |
+
super().__init__()
|
129 |
+
self.inner_dim = dim_head * heads
|
130 |
+
self.heads = heads
|
131 |
+
self.head_dim = dim_head
|
132 |
+
self.inner_dim = dim_head * heads
|
133 |
+
self.to_qkv = nn.Linear(dim, self.inner_dim * 3, bias=False)
|
134 |
+
self.use_domain_adapter = use_domain_adapter
|
135 |
+
if self.use_domain_adapter:
|
136 |
+
lora_rank = 8
|
137 |
+
self.lora_A = nn.Linear(dim, lora_rank, bias=False)
|
138 |
+
self.lora_B = nn.Linear(lora_rank, self.inner_dim * 3, bias=False)
|
139 |
+
|
140 |
+
self.to_out = nn.Linear(self.inner_dim, dim)
|
141 |
+
|
142 |
+
self.rotary_emb = rotary_emb
|
143 |
+
|
144 |
+
def forward(self, x: torch.Tensor):
|
145 |
+
B, T, H, W, D = x.shape
|
146 |
+
|
147 |
+
q, k, v = self.to_qkv(x).chunk(3, dim=-1)
|
148 |
+
|
149 |
+
if self.use_domain_adapter:
|
150 |
+
q_lora, k_lora, v_lora = self.lora_B(self.lora_A(x)).chunk(3, dim=-1)
|
151 |
+
q = q+q_lora
|
152 |
+
k = k+k_lora
|
153 |
+
v = v+v_lora
|
154 |
+
|
155 |
+
q = rearrange(q, "B T H W (h d) -> (B T) h H W d", h=self.heads)
|
156 |
+
k = rearrange(k, "B T H W (h d) -> (B T) h H W d", h=self.heads)
|
157 |
+
v = rearrange(v, "B T H W (h d) -> (B T) h H W d", h=self.heads)
|
158 |
+
|
159 |
+
freqs = self.rotary_emb.get_axial_freqs(H, W)
|
160 |
+
q = apply_rotary_emb(freqs, q)
|
161 |
+
k = apply_rotary_emb(freqs, k)
|
162 |
+
|
163 |
+
# prepare for attn
|
164 |
+
q = rearrange(q, "(B T) h H W d -> (B T) h (H W) d", B=B, T=T, h=self.heads)
|
165 |
+
k = rearrange(k, "(B T) h H W d -> (B T) h (H W) d", B=B, T=T, h=self.heads)
|
166 |
+
v = rearrange(v, "(B T) h H W d -> (B T) h (H W) d", B=B, T=T, h=self.heads)
|
167 |
+
|
168 |
+
x = F.scaled_dot_product_attention(query=q, key=k, value=v, is_causal=False)
|
169 |
+
|
170 |
+
x = rearrange(x, "(B T) h (H W) d -> B T H W (h d)", B=B, H=H, W=W)
|
171 |
+
x = x.to(q.dtype)
|
172 |
+
|
173 |
+
# linear proj
|
174 |
+
x = self.to_out(x)
|
175 |
+
return x
|
176 |
+
|
177 |
+
class MemTemporalAxialAttention(nn.Module):
|
178 |
+
def __init__(
|
179 |
+
self,
|
180 |
+
dim: int,
|
181 |
+
heads: int,
|
182 |
+
dim_head: int,
|
183 |
+
rotary_emb: RotaryEmbedding,
|
184 |
+
is_causal: bool = True,
|
185 |
+
):
|
186 |
+
super().__init__()
|
187 |
+
self.inner_dim = dim_head * heads
|
188 |
+
self.heads = heads
|
189 |
+
self.head_dim = dim_head
|
190 |
+
self.inner_dim = dim_head * heads
|
191 |
+
self.to_qkv = nn.Linear(dim, self.inner_dim * 3, bias=False)
|
192 |
+
self.to_out = nn.Linear(self.inner_dim, dim)
|
193 |
+
|
194 |
+
self.rotary_emb = rotary_emb
|
195 |
+
self.is_causal = is_causal
|
196 |
+
|
197 |
+
self.reference_length = 3
|
198 |
+
|
199 |
+
def forward(self, x: torch.Tensor):
|
200 |
+
B, T, H, W, D = x.shape
|
201 |
+
|
202 |
+
q, k, v = self.to_qkv(x).chunk(3, dim=-1)
|
203 |
+
|
204 |
+
|
205 |
+
q = rearrange(q, "B T H W (h d) -> (B H W) h T d", h=self.heads)
|
206 |
+
k = rearrange(k, "B T H W (h d) -> (B H W) h T d", h=self.heads)
|
207 |
+
v = rearrange(v, "B T H W (h d) -> (B H W) h T d", h=self.heads)
|
208 |
+
|
209 |
+
|
210 |
+
|
211 |
+
# q = self.rotary_emb.rotate_queries_or_keys(q, self.rotary_emb.freqs)
|
212 |
+
# k = self.rotary_emb.rotate_queries_or_keys(k, self.rotary_emb.freqs)
|
213 |
+
|
214 |
+
q, k, v = map(lambda t: t.contiguous(), (q, k, v))
|
215 |
+
|
216 |
+
# if T == 21000:
|
217 |
+
# # manually compute the scaled dot-product scores
|
218 |
+
# _, _, _, d_k = q.shape
|
219 |
+
# scores = torch.einsum("b h n d, b h m d -> b h n m", q, k) / (d_k ** 0.5) # Shape: (B, T_q, T_k)
|
220 |
+
|
221 |
+
# # compute the attention map
|
222 |
+
# attention_map = F.softmax(scores, dim=-1) # Shape: (B, T_q, T_k)
|
223 |
+
# b_, h_, n_, m_ = attention_map.shape
|
224 |
+
# attention_map = attention_map.reshape(1, int(np.sqrt(b_/1)), int(np.sqrt(b_/1)), h_, n_, m_)
|
225 |
+
# attention_map = attention_map.mean(3)
|
226 |
+
|
227 |
+
# attn_bias = torch.zeros((T, T), dtype=q.dtype, device=q.device)
|
228 |
+
# T_origin = T - self.reference_length
|
229 |
+
# attn_bias[:T_origin, T_origin:] = 1
|
230 |
+
# attn_bias[range(T), range(T)] = 1
|
231 |
+
|
232 |
+
# attention_map = attention_map * attn_bias
|
233 |
+
|
234 |
+
# # plot the attention map
|
235 |
+
# import matplotlib.pyplot as plt
|
236 |
+
# fig, axes = plt.subplots(21000, 21000, figsize=(9, 9)) # adjust figsize to fit the image size
|
237 |
+
|
238 |
+
# # iterate over the 3*3 grid
|
239 |
+
# for i in range(21000):
|
240 |
+
# for j in range(21000):
|
241 |
+
# # take the (i, j)-th sub-image
|
242 |
+
# img = attention_map[0, :, :, i, j].cpu().numpy()
|
243 |
+
# axes[i, j].imshow(img, cmap='viridis') # the cmap can be customized
|
244 |
+
# axes[i, j].axis('off') # hide the axes
|
245 |
+
|
246 |
+
# # adjust subplot spacing
|
247 |
+
# plt.tight_layout()
|
248 |
+
# plt.savefig('attention_map.png')
|
249 |
+
# import pdb; pdb.set_trace()
|
250 |
+
# plt.close()
|
251 |
+
|
252 |
+
attn_bias = torch.zeros((T, T), dtype=q.dtype, device=q.device)
|
253 |
+
attn_bias = attn_bias.masked_fill(attn_bias == 0, float('-inf'))
|
254 |
+
T_origin = T - self.reference_length
|
255 |
+
attn_bias[:T_origin, T_origin:] = 0
|
256 |
+
attn_bias[range(T), range(T)] = 0
|
257 |
+
|
258 |
+
# if T==121000:
|
259 |
+
# import pdb;pdb.set_trace()
|
260 |
+
|
261 |
+
try:
|
262 |
+
x = F.scaled_dot_product_attention(query=q, key=k, value=v, attn_mask=attn_bias)
|
263 |
+
except:
|
264 |
+
import pdb;pdb.set_trace()
|
265 |
+
|
266 |
+
x = rearrange(x, "(B H W) h T d -> B T H W (h d)", B=B, H=H, W=W)
|
267 |
+
x = x.to(q.dtype)
|
268 |
+
|
269 |
+
# linear proj
|
270 |
+
x = self.to_out(x)
|
271 |
+
return x
|
272 |
+
|
273 |
+
class MemFullAttention(nn.Module):
|
274 |
+
def __init__(
|
275 |
+
self,
|
276 |
+
dim: int,
|
277 |
+
heads: int,
|
278 |
+
dim_head: int,
|
279 |
+
reference_length: int,
|
280 |
+
rotary_emb: RotaryEmbedding,
|
281 |
+
is_causal: bool = True
|
282 |
+
):
|
283 |
+
super().__init__()
|
284 |
+
self.inner_dim = dim_head * heads
|
285 |
+
self.heads = heads
|
286 |
+
self.head_dim = dim_head
|
287 |
+
self.inner_dim = dim_head * heads
|
288 |
+
self.to_qkv = nn.Linear(dim, self.inner_dim * 3, bias=False)
|
289 |
+
self.to_out = nn.Linear(self.inner_dim, dim)
|
290 |
+
|
291 |
+
self.rotary_emb = rotary_emb
|
292 |
+
self.is_causal = is_causal
|
293 |
+
|
294 |
+
self.reference_length = reference_length
|
295 |
+
|
296 |
+
self.store = None
|
297 |
+
|
298 |
+
def forward(self, x: torch.Tensor, relative_embedding=False,
|
299 |
+
extra_condition=None,
|
300 |
+
cond_only_on_qk=False,
|
301 |
+
reference_length=None):
|
302 |
+
|
303 |
+
B, T, H, W, D = x.shape
|
304 |
+
|
305 |
+
if cond_only_on_qk:
|
306 |
+
q, k, _ = self.to_qkv(x+extra_condition).chunk(3, dim=-1)
|
307 |
+
_, _, v = self.to_qkv(x).chunk(3, dim=-1)
|
308 |
+
else:
|
309 |
+
q, k, v = self.to_qkv(x).chunk(3, dim=-1)
|
310 |
+
|
311 |
+
if relative_embedding:
|
312 |
+
length = reference_length+1
|
313 |
+
n_frames = T // length
|
314 |
+
x = x.reshape(B, n_frames, length, H, W, D)
|
315 |
+
|
316 |
+
x_list = []
|
317 |
+
|
318 |
+
for i in range(n_frames):
|
319 |
+
if i == n_frames-1:
|
320 |
+
q_i = rearrange(q[:, i*length:], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
321 |
+
k_i = rearrange(k[:, i*length+1:(i+1)*length], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
322 |
+
v_i = rearrange(v[:, i*length+1:(i+1)*length], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
323 |
+
else:
|
324 |
+
q_i = rearrange(q[:, i*length:i*length+1], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
325 |
+
k_i = rearrange(k[:, i*length+1:(i+1)*length], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
326 |
+
v_i = rearrange(v[:, i*length+1:(i+1)*length], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
327 |
+
|
328 |
+
q_i, k_i, v_i = map(lambda t: t.contiguous(), (q_i, k_i, v_i))
|
329 |
+
x_i = F.scaled_dot_product_attention(query=q_i, key=k_i, value=v_i)
|
330 |
+
x_i = rearrange(x_i, "B h (T H W) d -> B T H W (h d)", B=B, H=H, W=W)
|
331 |
+
x_i = x_i.to(q.dtype)
|
332 |
+
x_list.append(x_i)
|
333 |
+
|
334 |
+
x = torch.cat(x_list, dim=1)
|
335 |
+
|
336 |
+
|
337 |
+
else:
|
338 |
+
T_ = T - reference_length
|
339 |
+
q = rearrange(q, "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
340 |
+
k = rearrange(k[:, T_:], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
341 |
+
v = rearrange(v[:, T_:], "B T H W (h d) -> B h (T H W) d", h=self.heads)
|
342 |
+
|
343 |
+
q, k, v = map(lambda t: t.contiguous(), (q, k, v))
|
344 |
+
x = F.scaled_dot_product_attention(query=q, key=k, value=v)
|
345 |
+
x = rearrange(x, "B h (T H W) d -> B T H W (h d)", B=B, H=H, W=W)
|
346 |
+
x = x.to(q.dtype)
|
347 |
+
|
348 |
+
# linear proj
|
349 |
+
x = self.to_out(x)
|
350 |
+
|
351 |
+
return x
|
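For reference, the temporal mask assembled in TemporalAxialAttention.forward is a standard causal bias plus a block that keeps the trailing `reference_length` memory slots from attending to anything but themselves. The standalone sketch below reproduces that construction with a small T so the mask can be printed.

import torch

def temporal_attn_bias(T, reference_length, dtype=torch.float32):
    # causal part: query i may attend to keys j <= i
    bias = torch.triu(torch.ones(T, T, dtype=dtype), diagonal=1)
    bias = bias.masked_fill(bias == 1, float('-inf'))
    # the last `reference_length` rows are memory-frame queries: block everything,
    # then reopen the diagonal so each memory frame still attends to itself
    bias[(T - reference_length):] = float('-inf')
    bias[range(T), range(T)] = 0
    return bias

print(temporal_attn_bias(T=6, reference_length=2))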
algorithms/worldmem/models/cameractrl_module.py
ADDED
@@ -0,0 +1,12 @@
1 |
+
import torch.nn as nn
|
2 |
+
class SimpleCameraPoseEncoder(nn.Module):
|
3 |
+
def __init__(self, c_in, c_out, hidden_dim=128):
|
4 |
+
super(SimpleCameraPoseEncoder, self).__init__()
|
5 |
+
self.model = nn.Sequential(
|
6 |
+
nn.Linear(c_in, hidden_dim),
|
7 |
+
nn.ReLU(),
|
8 |
+
nn.Linear(hidden_dim, c_out)
|
9 |
+
)
|
10 |
+
def forward(self, x):
|
11 |
+
return self.model(x)
|
12 |
+
|
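A quick usage sketch for SimpleCameraPoseEncoder, assuming the repository root is on the Python path and using illustrative shapes: dit.py builds it with c_in=6 and c_out=hidden_size, and since nn.Linear acts on the last dimension, per-frame pose features can be encoded in batch.

import torch
from algorithms.worldmem.models.cameractrl_module import SimpleCameraPoseEncoder

encoder = SimpleCameraPoseEncoder(c_in=6, c_out=1024)  # mirrors dit.py's pose_embedder
pose = torch.randn(2, 8, 6)                            # (batch, frames, pose features)
emb = encoder(pose)                                    # MLP applied along the last dim
print(emb.shape)                                       # torch.Size([2, 8, 1024])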
algorithms/worldmem/models/diffusion.py
ADDED
@@ -0,0 +1,520 @@
1 |
+
from typing import Optional, Callable
|
2 |
+
from collections import namedtuple
|
3 |
+
from omegaconf import DictConfig
|
4 |
+
import torch
|
5 |
+
from torch import nn
|
6 |
+
from torch.nn import functional as F
|
7 |
+
from einops import rearrange
|
8 |
+
from .utils import linear_beta_schedule, cosine_beta_schedule, sigmoid_beta_schedule, extract
|
9 |
+
from .dit import DiT_models
|
10 |
+
|
11 |
+
ModelPrediction = namedtuple("ModelPrediction", ["pred_noise", "pred_x_start", "model_out"])
|
12 |
+
|
13 |
+
|
14 |
+
class Diffusion(nn.Module):
|
15 |
+
# Special thanks to lucidrains for the implementation of the base Diffusion model
|
16 |
+
# https://github.com/lucidrains/denoising-diffusion-pytorch
|
17 |
+
|
18 |
+
def __init__(
|
19 |
+
self,
|
20 |
+
x_shape: torch.Size,
|
21 |
+
reference_length: int,
|
22 |
+
action_cond_dim: int,
|
23 |
+
pose_cond_dim,
|
24 |
+
is_causal: bool,
|
25 |
+
cfg: DictConfig,
|
26 |
+
is_dit: bool=False,
|
27 |
+
use_plucker=False,
|
28 |
+
relative_embedding=False,
|
29 |
+
cond_only_on_qk=False,
|
30 |
+
use_reference_attention=False,
|
31 |
+
add_frame_timestep_embedder=False,
|
32 |
+
ref_mode='sequential'
|
33 |
+
):
|
34 |
+
super().__init__()
|
35 |
+
self.cfg = cfg
|
36 |
+
|
37 |
+
self.x_shape = x_shape
|
38 |
+
self.action_cond_dim = action_cond_dim
|
39 |
+
self.timesteps = cfg.timesteps
|
40 |
+
self.sampling_timesteps = cfg.sampling_timesteps
|
41 |
+
self.beta_schedule = cfg.beta_schedule
|
42 |
+
self.schedule_fn_kwargs = cfg.schedule_fn_kwargs
|
43 |
+
self.objective = cfg.objective
|
44 |
+
self.use_fused_snr = cfg.use_fused_snr
|
45 |
+
self.snr_clip = cfg.snr_clip
|
46 |
+
self.cum_snr_decay = cfg.cum_snr_decay
|
47 |
+
self.ddim_sampling_eta = cfg.ddim_sampling_eta
|
48 |
+
self.clip_noise = cfg.clip_noise
|
49 |
+
self.arch = cfg.architecture
|
50 |
+
self.stabilization_level = cfg.stabilization_level
|
51 |
+
self.is_causal = is_causal
|
52 |
+
self.is_dit = is_dit
|
53 |
+
self.reference_length = reference_length
|
54 |
+
self.pose_cond_dim = pose_cond_dim
|
55 |
+
self.use_plucker = use_plucker
|
56 |
+
self.relative_embedding = relative_embedding
|
57 |
+
self.cond_only_on_qk = cond_only_on_qk
|
58 |
+
self.use_reference_attention = use_reference_attention
|
59 |
+
self.add_frame_timestep_embedder = add_frame_timestep_embedder
|
60 |
+
self.ref_mode = ref_mode
|
61 |
+
|
62 |
+
self._build_model()
|
63 |
+
self._build_buffer()
|
64 |
+
|
65 |
+
def _build_model(self):
|
66 |
+
x_channel = self.x_shape[0]
|
67 |
+
if self.is_dit:
|
68 |
+
self.model = DiT_models["DiT-S/2"](action_cond_dim=self.action_cond_dim,
|
69 |
+
pose_cond_dim=self.pose_cond_dim, reference_length=self.reference_length,
|
70 |
+
use_plucker=self.use_plucker,
|
71 |
+
relative_embedding=self.relative_embedding,
|
72 |
+
cond_only_on_qk=self.cond_only_on_qk,
|
73 |
+
use_reference_attention=self.use_reference_attention,
|
74 |
+
add_frame_timestep_embedder=self.add_frame_timestep_embedder,
|
75 |
+
ref_mode=self.ref_mode)
|
76 |
+
else:
|
77 |
+
raise NotImplementedError
|
78 |
+
|
79 |
+
def _build_buffer(self):
|
80 |
+
if self.beta_schedule == "linear":
|
81 |
+
beta_schedule_fn = linear_beta_schedule
|
82 |
+
elif self.beta_schedule == "cosine":
|
83 |
+
beta_schedule_fn = cosine_beta_schedule
|
84 |
+
elif self.beta_schedule == "sigmoid":
|
85 |
+
beta_schedule_fn = sigmoid_beta_schedule
|
86 |
+
else:
|
87 |
+
raise ValueError(f"unknown beta schedule {self.beta_schedule}")
|
88 |
+
|
89 |
+
betas = beta_schedule_fn(self.timesteps, **self.schedule_fn_kwargs)
|
90 |
+
|
91 |
+
alphas = 1.0 - betas
|
92 |
+
alphas_cumprod = torch.cumprod(alphas, dim=0)
|
93 |
+
alphas_cumprod_prev = F.pad(alphas_cumprod[:-1], (1, 0), value=1.0)
|
94 |
+
|
95 |
+
# sampling related parameters
|
96 |
+
assert self.sampling_timesteps <= self.timesteps
|
97 |
+
self.is_ddim_sampling = self.sampling_timesteps < self.timesteps
|
98 |
+
|
99 |
+
# helper function to register buffer from float64 to float32
|
100 |
+
register_buffer = lambda name, val: self.register_buffer(name, val.to(torch.float32))
|
101 |
+
|
102 |
+
register_buffer("betas", betas)
|
103 |
+
register_buffer("alphas_cumprod", alphas_cumprod)
|
104 |
+
register_buffer("alphas_cumprod_prev", alphas_cumprod_prev)
|
105 |
+
|
106 |
+
# calculations for diffusion q(x_t | x_{t-1}) and others
|
107 |
+
|
108 |
+
register_buffer("sqrt_alphas_cumprod", torch.sqrt(alphas_cumprod))
|
109 |
+
register_buffer("sqrt_one_minus_alphas_cumprod", torch.sqrt(1.0 - alphas_cumprod))
|
110 |
+
register_buffer("log_one_minus_alphas_cumprod", torch.log(1.0 - alphas_cumprod))
|
111 |
+
register_buffer("sqrt_recip_alphas_cumprod", torch.sqrt(1.0 / alphas_cumprod))
|
112 |
+
register_buffer("sqrt_recipm1_alphas_cumprod", torch.sqrt(1.0 / alphas_cumprod - 1))
|
113 |
+
|
114 |
+
# calculations for posterior q(x_{t-1} | x_t, x_0)
|
115 |
+
|
116 |
+
posterior_variance = betas * (1.0 - alphas_cumprod_prev) / (1.0 - alphas_cumprod)
|
117 |
+
|
118 |
+
# above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
|
119 |
+
|
120 |
+
register_buffer("posterior_variance", posterior_variance)
|
121 |
+
|
122 |
+
# below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
|
123 |
+
|
124 |
+
register_buffer(
|
125 |
+
"posterior_log_variance_clipped",
|
126 |
+
torch.log(posterior_variance.clamp(min=1e-20)),
|
127 |
+
)
|
128 |
+
register_buffer(
|
129 |
+
"posterior_mean_coef1",
|
130 |
+
betas * torch.sqrt(alphas_cumprod_prev) / (1.0 - alphas_cumprod),
|
131 |
+
)
|
132 |
+
register_buffer(
|
133 |
+
"posterior_mean_coef2",
|
134 |
+
(1.0 - alphas_cumprod_prev) * torch.sqrt(alphas) / (1.0 - alphas_cumprod),
|
135 |
+
)
|
136 |
+
|
137 |
+
# calculate p2 reweighting
|
138 |
+
|
139 |
+
# register_buffer(
|
140 |
+
# "p2_loss_weight",
|
141 |
+
# (self.p2_loss_weight_k + alphas_cumprod / (1 - alphas_cumprod))
|
142 |
+
# ** -self.p2_loss_weight_gamma,
|
143 |
+
# )
|
144 |
+
|
145 |
+
# derive loss weight
|
146 |
+
# https://arxiv.org/abs/2303.09556
|
147 |
+
# snr: signal noise ratio
|
148 |
+
snr = alphas_cumprod / (1 - alphas_cumprod)
|
149 |
+
clipped_snr = snr.clone()
|
150 |
+
clipped_snr.clamp_(max=self.snr_clip)
|
151 |
+
|
152 |
+
register_buffer("clipped_snr", clipped_snr)
|
153 |
+
register_buffer("snr", snr)
|
154 |
+
|
155 |
+
def add_shape_channels(self, x):
|
156 |
+
return rearrange(x, f"... -> ...{' 1' * len(self.x_shape)}")
|
157 |
+
|
158 |
+
def model_predictions(self, x, t, action_cond=None, current_frame=None,
|
159 |
+
pose_cond=None, mode="training", reference_length=None, frame_idx=None):
|
160 |
+
x = x.permute(1,0,2,3,4)
|
161 |
+
action_cond = action_cond.permute(1,0,2)
|
162 |
+
if pose_cond is not None and pose_cond[0] is not None:
|
163 |
+
try:
|
164 |
+
pose_cond = pose_cond.permute(1,0,2)
|
165 |
+
except:
|
166 |
+
pass
|
167 |
+
t = t.permute(1,0)
|
168 |
+
model_output = self.model(x, t, action_cond, current_frame=current_frame, pose_cond=pose_cond,
|
169 |
+
mode=mode, reference_length=reference_length, frame_idx=frame_idx)
|
170 |
+
model_output = model_output.permute(1,0,2,3,4)
|
171 |
+
x = x.permute(1,0,2,3,4)
|
172 |
+
t = t.permute(1,0)
|
173 |
+
|
174 |
+
if self.objective == "pred_noise":
|
175 |
+
pred_noise = torch.clamp(model_output, -self.clip_noise, self.clip_noise)
|
176 |
+
x_start = self.predict_start_from_noise(x, t, pred_noise)
|
177 |
+
|
178 |
+
elif self.objective == "pred_x0":
|
179 |
+
x_start = model_output
|
180 |
+
pred_noise = self.predict_noise_from_start(x, t, x_start)
|
181 |
+
|
182 |
+
elif self.objective == "pred_v":
|
183 |
+
v = model_output
|
184 |
+
x_start = self.predict_start_from_v(x, t, v)
|
185 |
+
pred_noise = self.predict_noise_from_start(x, t, x_start)
|
186 |
+
|
187 |
+
|
188 |
+
return ModelPrediction(pred_noise, x_start, model_output)
|
189 |
+
|
190 |
+
def predict_start_from_noise(self, x_t, t, noise):
|
191 |
+
return (
|
192 |
+
extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t
|
193 |
+
- extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
|
194 |
+
)
|
195 |
+
|
196 |
+
def predict_noise_from_start(self, x_t, t, x0):
|
197 |
+
return (extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - x0) / extract(
|
198 |
+
self.sqrt_recipm1_alphas_cumprod, t, x_t.shape
|
199 |
+
)
|
200 |
+
|
201 |
+
def predict_v(self, x_start, t, noise):
|
202 |
+
return (
|
203 |
+
extract(self.sqrt_alphas_cumprod, t, x_start.shape) * noise
|
204 |
+
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * x_start
|
205 |
+
)
|
206 |
+
|
207 |
+
def predict_start_from_v(self, x_t, t, v):
|
208 |
+
return (
|
209 |
+
extract(self.sqrt_alphas_cumprod, t, x_t.shape) * x_t
|
210 |
+
- extract(self.sqrt_one_minus_alphas_cumprod, t, x_t.shape) * v
|
211 |
+
)
|
212 |
+
|
213 |
+
def q_mean_variance(self, x_start, t):
|
214 |
+
mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
|
215 |
+
variance = extract(1.0 - self.alphas_cumprod, t, x_start.shape)
|
216 |
+
log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape)
|
217 |
+
return mean, variance, log_variance
|
218 |
+
|
219 |
+
def q_posterior(self, x_start, x_t, t):
|
220 |
+
posterior_mean = (
|
221 |
+
extract(self.posterior_mean_coef1, t, x_t.shape) * x_start
|
222 |
+
+ extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
|
223 |
+
)
|
224 |
+
posterior_variance = extract(self.posterior_variance, t, x_t.shape)
|
225 |
+
posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
|
226 |
+
return posterior_mean, posterior_variance, posterior_log_variance_clipped
|
227 |
+
|
228 |
+
def q_sample(self, x_start, t, noise=None):
|
229 |
+
if noise is None:
|
230 |
+
noise = torch.randn_like(x_start)
|
231 |
+
noise = torch.clamp(noise, -self.clip_noise, self.clip_noise)
|
232 |
+
return (
|
233 |
+
extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start
|
234 |
+
+ extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
|
235 |
+
)
|
236 |
+
|
237 |
+
def p_mean_variance(self, x, t, action_cond=None, pose_cond=None, reference_length=None, frame_idx=None):
|
238 |
+
model_pred = self.model_predictions(x=x, t=t, action_cond=action_cond,
|
239 |
+
pose_cond=pose_cond, reference_length=reference_length,
|
240 |
+
frame_idx=frame_idx)
|
241 |
+
x_start = model_pred.pred_x_start
|
242 |
+
return self.q_posterior(x_start=x_start, x_t=x, t=t)
|
243 |
+
|
244 |
+
def compute_loss_weights(self, noise_levels: torch.Tensor):
|
245 |
+
|
246 |
+
snr = self.snr[noise_levels]
|
247 |
+
clipped_snr = self.clipped_snr[noise_levels]
|
248 |
+
normalized_clipped_snr = clipped_snr / self.snr_clip
|
249 |
+
normalized_snr = snr / self.snr_clip
|
250 |
+
|
251 |
+
if not self.use_fused_snr:
|
252 |
+
# min SNR reweighting
|
253 |
+
match self.objective:
|
254 |
+
case "pred_noise":
|
255 |
+
return clipped_snr / snr
|
256 |
+
case "pred_x0":
|
257 |
+
return clipped_snr
|
258 |
+
case "pred_v":
|
259 |
+
return clipped_snr / (snr + 1)
|
260 |
+
|
261 |
+
cum_snr = torch.zeros_like(normalized_snr)
|
262 |
+
for t in range(0, noise_levels.shape[0]):
|
263 |
+
if t == 0:
|
264 |
+
cum_snr[t] = normalized_clipped_snr[t]
|
265 |
+
else:
|
266 |
+
cum_snr[t] = self.cum_snr_decay * cum_snr[t - 1] + (1 - self.cum_snr_decay) * normalized_clipped_snr[t]
|
267 |
+
|
268 |
+
cum_snr = F.pad(cum_snr[:-1], (0, 0, 1, 0), value=0.0)
|
269 |
+
clipped_fused_snr = 1 - (1 - cum_snr * self.cum_snr_decay) * (1 - normalized_clipped_snr)
|
270 |
+
fused_snr = 1 - (1 - cum_snr * self.cum_snr_decay) * (1 - normalized_snr)
|
271 |
+
|
272 |
+
match self.objective:
|
273 |
+
case "pred_noise":
|
274 |
+
return clipped_fused_snr / fused_snr
|
275 |
+
case "pred_x0":
|
276 |
+
return clipped_fused_snr * self.snr_clip
|
277 |
+
case "pred_v":
|
278 |
+
return clipped_fused_snr * self.snr_clip / (fused_snr * self.snr_clip + 1)
|
279 |
+
case _:
|
280 |
+
raise ValueError(f"unknown objective {self.objective}")
|
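As a worked example of the non-fused branch above (use_fused_snr=False, min-SNR weighting from https://arxiv.org/abs/2303.09556): with snr_clip = 5, the weight is min(snr, 5) / snr for pred_noise, min(snr, 5) for pred_x0, and min(snr, 5) / (snr + 1) for pred_v. The SNR values below are illustrative only.

import torch

snr = torch.tensor([0.1, 1.0, 10.0, 100.0])
snr_clip = 5.0
clipped = snr.clamp(max=snr_clip)
w_pred_noise = clipped / snr        # tensor([1.0000, 1.0000, 0.5000, 0.0500])
w_pred_x0 = clipped                 # tensor([0.1000, 1.0000, 5.0000, 5.0000])
w_pred_v = clipped / (snr + 1)      # tensor([0.0909, 0.5000, 0.4545, 0.0495])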
281 |
+
|
282 |
+
def forward(
|
283 |
+
self,
|
284 |
+
x: torch.Tensor,
|
285 |
+
action_cond: Optional[torch.Tensor],
|
286 |
+
pose_cond,
|
287 |
+
noise_levels: torch.Tensor,
|
288 |
+
reference_length,
|
289 |
+
frame_idx=None
|
290 |
+
):
|
291 |
+
noise = torch.randn_like(x)
|
292 |
+
noise = torch.clamp(noise, -self.clip_noise, self.clip_noise)
|
293 |
+
|
294 |
+
noised_x = self.q_sample(x_start=x, t=noise_levels, noise=noise)
|
295 |
+
|
296 |
+
model_pred = self.model_predictions(x=noised_x, t=noise_levels, action_cond=action_cond,
|
297 |
+
pose_cond=pose_cond,reference_length=reference_length, frame_idx=frame_idx)
|
298 |
+
|
299 |
+
pred = model_pred.model_out
|
300 |
+
x_pred = model_pred.pred_x_start
|
301 |
+
|
302 |
+
if self.objective == "pred_noise":
|
303 |
+
target = noise
|
304 |
+
elif self.objective == "pred_x0":
|
305 |
+
target = x
|
306 |
+
elif self.objective == "pred_v":
|
307 |
+
target = self.predict_v(x, noise_levels, noise)
|
308 |
+
else:
|
309 |
+
raise ValueError(f"unknown objective {self.objective}")
|
310 |
+
|
311 |
+
# during training, each frame is assigned an arbitrary (independent) noise level
|
312 |
+
loss = F.mse_loss(pred, target.detach(), reduction="none")
|
313 |
+
loss_weight = self.compute_loss_weights(noise_levels)
|
314 |
+
|
315 |
+
loss_weight = loss_weight.view(*loss_weight.shape, *((1,) * (loss.ndim - 2)))
|
316 |
+
|
317 |
+
loss = loss * loss_weight
|
318 |
+
|
319 |
+
return x_pred, loss
|
320 |
+
|
321 |
+
def sample_step(
|
322 |
+
self,
|
323 |
+
x: torch.Tensor,
|
324 |
+
action_cond: Optional[torch.Tensor],
|
325 |
+
pose_cond,
|
326 |
+
curr_noise_level: torch.Tensor,
|
327 |
+
next_noise_level: torch.Tensor,
|
328 |
+
guidance_fn: Optional[Callable] = None,
|
329 |
+
current_frame=None,
|
330 |
+
mode="training",
|
331 |
+
reference_length=None,
|
332 |
+
frame_idx=None
|
333 |
+
):
|
334 |
+
real_steps = torch.linspace(-1, self.timesteps - 1, steps=self.sampling_timesteps + 1, device=x.device).long()
|
335 |
+
|
336 |
+
# convert noise levels (0 ~ sampling_timesteps) to real noise levels (-1 ~ timesteps - 1)
|
337 |
+
curr_noise_level = real_steps[curr_noise_level]
|
338 |
+
next_noise_level = real_steps[next_noise_level]
|
339 |
+
|
340 |
+
if self.is_ddim_sampling:
|
341 |
+
return self.ddim_sample_step(
|
342 |
+
x=x,
|
343 |
+
action_cond=action_cond,
|
344 |
+
pose_cond=pose_cond,
|
345 |
+
curr_noise_level=curr_noise_level,
|
346 |
+
next_noise_level=next_noise_level,
|
347 |
+
guidance_fn=guidance_fn,
|
348 |
+
current_frame=current_frame,
|
349 |
+
mode=mode,
|
350 |
+
reference_length=reference_length,
|
351 |
+
frame_idx=frame_idx
|
352 |
+
)
|
353 |
+
|
354 |
+
# FIXME: temporary code for checking ddpm sampling
|
355 |
+
assert torch.all(
|
356 |
+
(curr_noise_level - 1 == next_noise_level) | ((curr_noise_level == -1) & (next_noise_level == -1))
|
357 |
+
), "Wrong noise level given for ddpm sampling."
|
358 |
+
|
359 |
+
assert (
|
360 |
+
self.sampling_timesteps == self.timesteps
|
361 |
+
), "sampling_timesteps should be equal to timesteps for ddpm sampling."
|
362 |
+
|
363 |
+
return self.ddpm_sample_step(
|
364 |
+
x=x,
|
365 |
+
action_cond=action_cond,
|
366 |
+
pose_cond=pose_cond,
|
367 |
+
curr_noise_level=curr_noise_level,
|
368 |
+
guidance_fn=guidance_fn,
|
369 |
+
reference_length=reference_length,
|
370 |
+
frame_idx=frame_idx
|
371 |
+
)
|
372 |
+
|
373 |
+
def ddpm_sample_step(
|
374 |
+
self,
|
375 |
+
x: torch.Tensor,
|
376 |
+
action_cond: Optional[torch.Tensor],
|
377 |
+
pose_cond,
|
378 |
+
curr_noise_level: torch.Tensor,
|
379 |
+
guidance_fn: Optional[Callable] = None,
|
380 |
+
reference_length=None,
|
381 |
+
frame_idx=None,
|
382 |
+
):
|
383 |
+
clipped_curr_noise_level = torch.where(
|
384 |
+
curr_noise_level < 0,
|
385 |
+
torch.full_like(curr_noise_level, self.stabilization_level - 1, dtype=torch.long),
|
386 |
+
curr_noise_level,
|
387 |
+
)
|
388 |
+
|
389 |
+
# treating as stabilization would require us to scale with sqrt of alpha_cum
|
390 |
+
orig_x = x.clone().detach()
|
391 |
+
scaled_context = self.q_sample(
|
392 |
+
x,
|
393 |
+
clipped_curr_noise_level,
|
394 |
+
noise=torch.zeros_like(x),
|
395 |
+
)
|
396 |
+
x = torch.where(self.add_shape_channels(curr_noise_level < 0), scaled_context, orig_x)
|
397 |
+
|
398 |
+
if guidance_fn is not None:
|
399 |
+
raise NotImplementedError("Guidance function is not implemented for ddpm sampling yet.")
|
400 |
+
|
401 |
+
else:
|
402 |
+
model_mean, _, model_log_variance = self.p_mean_variance(
|
403 |
+
x=x,
|
404 |
+
t=clipped_curr_noise_level,
|
405 |
+
action_cond=action_cond,
|
406 |
+
pose_cond=pose_cond,
|
407 |
+
reference_length=reference_length,
|
408 |
+
frame_idx=frame_idx
|
409 |
+
)
|
410 |
+
|
411 |
+
noise = torch.where(
|
412 |
+
self.add_shape_channels(clipped_curr_noise_level > 0),
|
413 |
+
torch.randn_like(x),
|
414 |
+
0,
|
415 |
+
)
|
416 |
+
noise = torch.clamp(noise, -self.clip_noise, self.clip_noise)
|
417 |
+
x_pred = model_mean + torch.exp(0.5 * model_log_variance) * noise
|
418 |
+
|
419 |
+
# only update frames where the noise level decreases
|
420 |
+
return torch.where(self.add_shape_channels(curr_noise_level == -1), orig_x, x_pred)
|
421 |
+
|
422 |
+
def ddim_sample_step(
|
423 |
+
self,
|
424 |
+
x: torch.Tensor,
|
425 |
+
action_cond: Optional[torch.Tensor],
|
426 |
+
pose_cond,
|
427 |
+
curr_noise_level: torch.Tensor,
|
428 |
+
next_noise_level: torch.Tensor,
|
429 |
+
guidance_fn: Optional[Callable] = None,
|
430 |
+
current_frame=None,
|
431 |
+
mode="training",
|
432 |
+
reference_length=None,
|
433 |
+
frame_idx=None
|
434 |
+
):
|
435 |
+
# convert noise level -1 to self.stabilization_level - 1
|
436 |
+
clipped_curr_noise_level = torch.where(
|
437 |
+
curr_noise_level < 0,
|
438 |
+
torch.full_like(curr_noise_level, self.stabilization_level - 1, dtype=torch.long),
|
439 |
+
curr_noise_level,
|
440 |
+
)
|
441 |
+
|
442 |
+
# treating as stabilization would require us to scale with sqrt of alpha_cum
|
443 |
+
orig_x = x.clone().detach()
|
444 |
+
scaled_context = self.q_sample(
|
445 |
+
x,
|
446 |
+
clipped_curr_noise_level,
|
447 |
+
noise=torch.zeros_like(x),
|
448 |
+
)
|
449 |
+
x = torch.where(self.add_shape_channels(curr_noise_level < 0), scaled_context, orig_x)
|
450 |
+
|
451 |
+
alpha = self.alphas_cumprod[clipped_curr_noise_level]
|
452 |
+
alpha_next = torch.where(
|
453 |
+
next_noise_level < 0,
|
454 |
+
torch.ones_like(next_noise_level),
|
455 |
+
self.alphas_cumprod[next_noise_level],
|
456 |
+
)
|
457 |
+
sigma = torch.where(
|
458 |
+
next_noise_level < 0,
|
459 |
+
torch.zeros_like(next_noise_level),
|
460 |
+
self.ddim_sampling_eta * ((1 - alpha / alpha_next) * (1 - alpha_next) / (1 - alpha)).sqrt(),
|
461 |
+
)
|
462 |
+
c = (1 - alpha_next - sigma**2).sqrt()
|
463 |
+
|
464 |
+
alpha_next = self.add_shape_channels(alpha_next)
|
465 |
+
c = self.add_shape_channels(c)
|
466 |
+
sigma = self.add_shape_channels(sigma)
|
467 |
+
|
468 |
+
if guidance_fn is not None:
|
469 |
+
with torch.enable_grad():
|
470 |
+
x = x.detach().requires_grad_()
|
471 |
+
|
472 |
+
model_pred = self.model_predictions(
|
473 |
+
x=x,
|
474 |
+
t=clipped_curr_noise_level,
|
475 |
+
action_cond=action_cond,
|
476 |
+
pose_cond=pose_cond,
|
477 |
+
current_frame=current_frame,
|
478 |
+
mode=mode,
|
479 |
+
reference_length=reference_length,
|
480 |
+
frame_idx=frame_idx
|
481 |
+
)
|
482 |
+
|
483 |
+
guidance_loss = guidance_fn(model_pred.pred_x_start)
|
484 |
+
grad = -torch.autograd.grad(
|
485 |
+
guidance_loss,
|
486 |
+
x,
|
487 |
+
)[0]
|
488 |
+
|
489 |
+
pred_noise = model_pred.pred_noise + (1 - alpha_next).sqrt() * grad
|
490 |
+
x_start = self.predict_start_from_noise(x, clipped_curr_noise_level, pred_noise)
|
491 |
+
|
492 |
+
else:
|
493 |
+
# print(clipped_curr_noise_level)
|
494 |
+
model_pred = self.model_predictions(
|
495 |
+
x=x,
|
496 |
+
t=clipped_curr_noise_level,
|
497 |
+
action_cond=action_cond,
|
498 |
+
pose_cond=pose_cond,
|
499 |
+
current_frame=current_frame,
|
500 |
+
mode=mode,
|
501 |
+
reference_length=reference_length,
|
502 |
+
frame_idx=frame_idx
|
503 |
+
)
|
504 |
+
x_start = model_pred.pred_x_start
|
505 |
+
pred_noise = model_pred.pred_noise
|
506 |
+
|
507 |
+
noise = torch.randn_like(x)
|
508 |
+
noise = torch.clamp(noise, -self.clip_noise, self.clip_noise)
|
509 |
+
|
510 |
+
x_pred = x_start * alpha_next.sqrt() + pred_noise * c + sigma * noise
|
511 |
+
|
512 |
+
# only update frames where the noise level decreases
|
513 |
+
mask = curr_noise_level == next_noise_level
|
514 |
+
x_pred = torch.where(
|
515 |
+
self.add_shape_channels(mask),
|
516 |
+
orig_x,
|
517 |
+
x_pred,
|
518 |
+
)
|
519 |
+
|
520 |
+
return x_pred
|
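The DDIM branch above combines the predicted clean latent and predicted noise as x_next = sqrt(alpha_next) * x0_hat + sqrt(1 - alpha_next - sigma^2) * eps_hat + sigma * z, with sigma scaled by ddim_sampling_eta. Below is a compact standalone sketch of that single update; the alpha values and the clamp bound are illustrative stand-ins for the module's schedule and clip_noise.

import torch

def ddim_update(x0_hat, eps_hat, alpha, alpha_next, eta=0.0):
    sigma = eta * (((1 - alpha / alpha_next) * (1 - alpha_next) / (1 - alpha)) ** 0.5)
    c = (1 - alpha_next - sigma ** 2) ** 0.5
    z = torch.clamp(torch.randn_like(x0_hat), -20.0, 20.0)  # stands in for clip_noise
    return (alpha_next ** 0.5) * x0_hat + c * eps_hat + sigma * z

x0_hat = torch.zeros(1, 16, 18, 32)    # illustrative latent shape
eps_hat = torch.randn(1, 16, 18, 32)
x_next = ddim_update(x0_hat, eps_hat, alpha=0.5, alpha_next=0.8, eta=0.0)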
algorithms/worldmem/models/dit.py
ADDED
@@ -0,0 +1,577 @@
1 |
+
"""
|
2 |
+
References:
|
3 |
+
- DiT: https://github.com/facebookresearch/DiT/blob/main/models.py
|
4 |
+
- Diffusion Forcing: https://github.com/buoyancy99/diffusion-forcing/blob/main/algorithms/diffusion_forcing/models/unet3d.py
|
5 |
+
- Latte: https://github.com/Vchitect/Latte/blob/main/models/latte.py
|
6 |
+
"""
|
7 |
+
|
8 |
+
from typing import Optional, Literal
|
9 |
+
import torch
|
10 |
+
from torch import nn
|
11 |
+
from .rotary_embedding_torch import RotaryEmbedding
|
12 |
+
from einops import rearrange
|
13 |
+
from .attention import SpatialAxialAttention, TemporalAxialAttention, MemTemporalAxialAttention, MemFullAttention
|
14 |
+
from timm.models.vision_transformer import Mlp
|
15 |
+
from timm.layers.helpers import to_2tuple
|
16 |
+
import math
|
17 |
+
from collections import namedtuple
|
18 |
+
from typing import Optional, Callable
|
19 |
+
from .cameractrl_module import SimpleCameraPoseEncoder
|
20 |
+
|
21 |
+
def modulate(x, shift, scale):
|
22 |
+
fixed_dims = [1] * len(shift.shape[1:])
|
23 |
+
shift = shift.repeat(x.shape[0] // shift.shape[0], *fixed_dims)
|
24 |
+
scale = scale.repeat(x.shape[0] // scale.shape[0], *fixed_dims)
|
25 |
+
while shift.dim() < x.dim():
|
26 |
+
shift = shift.unsqueeze(-2)
|
27 |
+
scale = scale.unsqueeze(-2)
|
28 |
+
return x * (1 + scale) + shift
|
29 |
+
|
30 |
+
def gate(x, g):
|
31 |
+
fixed_dims = [1] * len(g.shape[1:])
|
32 |
+
g = g.repeat(x.shape[0] // g.shape[0], *fixed_dims)
|
33 |
+
while g.dim() < x.dim():
|
34 |
+
g = g.unsqueeze(-2)
|
35 |
+
return g * x
|
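A usage sketch for the modulate()/gate() helpers above: per-frame adaLN conditioning of shape (B, T, D) is broadcast over the spatial token axes of an activation shaped (B, T, H, W, D), which is how SpatioTemporalDiTBlock applies the chunks of its adaLN_modulation output. The import assumes the repository root is on the Python path; shapes are illustrative.

import torch
from algorithms.worldmem.models.dit import modulate, gate

B, T, H, W, D = 2, 4, 3, 3, 16
x = torch.randn(B, T, H, W, D)            # token activations
c = torch.randn(B, T, 6 * D)              # adaLN conditioning, split into 6 chunks
shift, scale, g, *_ = c.chunk(6, dim=-1)
y = gate(modulate(x, shift, scale), g)    # shift/scale/gate broadcast over H and W
print(y.shape)                            # torch.Size([2, 4, 3, 3, 16])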
36 |
+
|
37 |
+
|
38 |
+
class PatchEmbed(nn.Module):
|
39 |
+
"""2D Image to Patch Embedding"""
|
40 |
+
|
41 |
+
def __init__(
|
42 |
+
self,
|
43 |
+
img_height=256,
|
44 |
+
img_width=256,
|
45 |
+
patch_size=16,
|
46 |
+
in_chans=3,
|
47 |
+
embed_dim=768,
|
48 |
+
norm_layer=None,
|
49 |
+
flatten=True,
|
50 |
+
):
|
51 |
+
super().__init__()
|
52 |
+
img_size = (img_height, img_width)
|
53 |
+
patch_size = to_2tuple(patch_size)
|
54 |
+
self.img_size = img_size
|
55 |
+
self.patch_size = patch_size
|
56 |
+
self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
|
57 |
+
self.num_patches = self.grid_size[0] * self.grid_size[1]
|
58 |
+
self.flatten = flatten
|
59 |
+
|
60 |
+
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
|
61 |
+
self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
|
62 |
+
|
63 |
+
def forward(self, x, random_sample=False):
|
64 |
+
B, C, H, W = x.shape
|
65 |
+
assert random_sample or (H == self.img_size[0] and W == self.img_size[1]), f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
|
66 |
+
|
67 |
+
x = self.proj(x)
|
68 |
+
if self.flatten:
|
69 |
+
x = rearrange(x, "B C H W -> B (H W) C")
|
70 |
+
else:
|
71 |
+
x = rearrange(x, "B C H W -> B H W C")
|
72 |
+
x = self.norm(x)
|
73 |
+
return x
|
74 |
+
|
75 |
+
|
76 |
+
class TimestepEmbedder(nn.Module):
|
77 |
+
"""
|
78 |
+
Embeds scalar timesteps into vector representations.
|
79 |
+
"""
|
80 |
+
|
81 |
+
def __init__(self, hidden_size, frequency_embedding_size=256, freq_type='time_step'):
|
82 |
+
super().__init__()
|
83 |
+
self.mlp = nn.Sequential(
|
84 |
+
nn.Linear(frequency_embedding_size, hidden_size, bias=True), # hidden_size is diffusion model hidden size
|
85 |
+
nn.SiLU(),
|
86 |
+
nn.Linear(hidden_size, hidden_size, bias=True),
|
87 |
+
)
|
88 |
+
self.frequency_embedding_size = frequency_embedding_size
|
89 |
+
self.freq_type = freq_type
|
90 |
+
|
91 |
+
@staticmethod
|
92 |
+
def timestep_embedding(t, dim, max_period=10000, freq_type='time_step'):
|
93 |
+
"""
|
94 |
+
Create sinusoidal timestep embeddings.
|
95 |
+
:param t: a 1-D Tensor of N indices, one per batch element.
|
96 |
+
These may be fractional.
|
97 |
+
:param dim: the dimension of the output.
|
98 |
+
:param max_period: controls the minimum frequency of the embeddings.
|
99 |
+
:return: an (N, D) Tensor of positional embeddings.
|
100 |
+
"""
|
101 |
+
# https://github.com/openai/glide-text2im/blob/main/glide_text2im/nn.py
|
102 |
+
half = dim // 2
|
103 |
+
|
104 |
+
if freq_type == 'time_step':
|
105 |
+
freqs = torch.exp(-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half).to(device=t.device)
|
106 |
+
elif freq_type == 'spatial': # ~(-5 5)
|
107 |
+
freqs = torch.linspace(1.0, half, half).to(device=t.device) * torch.pi
|
108 |
+
elif freq_type == 'angle': # 0-360
|
109 |
+
freqs = torch.linspace(1.0, half, half).to(device=t.device) * torch.pi / 180
|
110 |
+
|
111 |
+
|
112 |
+
args = t[:, None].float() * freqs[None]
|
113 |
+
|
114 |
+
embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
|
115 |
+
if dim % 2:
|
116 |
+
embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
|
117 |
+
return embedding
|
118 |
+
|
119 |
+
def forward(self, t):
|
120 |
+
t_freq = self.timestep_embedding(t, self.frequency_embedding_size, freq_type=self.freq_type)
|
121 |
+
t_emb = self.mlp(t_freq)
|
122 |
+
return t_emb
|
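For reference, the three freq_type branches above differ only in how the frequency vector is built; the embedding itself is the same cos/sin concatenation. A small sketch, with half chosen arbitrarily:

import math
import torch

half = 8
t = torch.tensor([0.0, 1.0, 2.5])
freqs_time = torch.exp(-math.log(10000) * torch.arange(half, dtype=torch.float32) / half)
freqs_space = torch.linspace(1.0, half, half) * torch.pi        # 'spatial' variant
freqs_angle = torch.linspace(1.0, half, half) * torch.pi / 180  # 'angle' variant
args = t[:, None] * freqs_time[None]
embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)  # shape (3, 2 * half)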
123 |
+
|
124 |
+
|
125 |
+
class FinalLayer(nn.Module):
|
126 |
+
"""
|
127 |
+
The final layer of DiT.
|
128 |
+
"""
|
129 |
+
|
130 |
+
def __init__(self, hidden_size, patch_size, out_channels):
|
131 |
+
super().__init__()
|
132 |
+
self.norm_final = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
133 |
+
self.linear = nn.Linear(hidden_size, patch_size * patch_size * out_channels, bias=True)
|
134 |
+
self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 2 * hidden_size, bias=True))
|
135 |
+
|
136 |
+
def forward(self, x, c):
|
137 |
+
shift, scale = self.adaLN_modulation(c).chunk(2, dim=-1)
|
138 |
+
x = modulate(self.norm_final(x), shift, scale)
|
139 |
+
x = self.linear(x)
|
140 |
+
return x
|
141 |
+
|
142 |
+
|
143 |
+
class SpatioTemporalDiTBlock(nn.Module):
|
144 |
+
def __init__(
|
145 |
+
self,
|
146 |
+
hidden_size,
|
147 |
+
num_heads,
|
148 |
+
reference_length,
|
149 |
+
mlp_ratio=4.0,
|
150 |
+
is_causal=True,
|
151 |
+
spatial_rotary_emb: Optional[RotaryEmbedding] = None,
|
152 |
+
temporal_rotary_emb: Optional[RotaryEmbedding] = None,
|
153 |
+
reference_rotary_emb=None,
|
154 |
+
use_plucker=False,
|
155 |
+
relative_embedding=False,
|
156 |
+
cond_only_on_qk=False,
|
157 |
+
use_reference_attention=False,
|
158 |
+
ref_mode='sequential'
|
159 |
+
):
|
160 |
+
super().__init__()
|
161 |
+
self.is_causal = is_causal
|
162 |
+
mlp_hidden_dim = int(hidden_size * mlp_ratio)
|
163 |
+
approx_gelu = lambda: nn.GELU(approximate="tanh")
|
164 |
+
|
165 |
+
self.s_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
166 |
+
self.s_attn = SpatialAxialAttention(
|
167 |
+
hidden_size,
|
168 |
+
heads=num_heads,
|
169 |
+
dim_head=hidden_size // num_heads,
|
170 |
+
rotary_emb=spatial_rotary_emb
|
171 |
+
)
|
172 |
+
self.s_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
173 |
+
self.s_mlp = Mlp(
|
174 |
+
in_features=hidden_size,
|
175 |
+
hidden_features=mlp_hidden_dim,
|
176 |
+
act_layer=approx_gelu,
|
177 |
+
drop=0,
|
178 |
+
)
|
179 |
+
self.s_adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 6 * hidden_size, bias=True))
|
180 |
+
|
181 |
+
self.t_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
182 |
+
self.t_attn = TemporalAxialAttention(
|
183 |
+
hidden_size,
|
184 |
+
heads=num_heads,
|
185 |
+
dim_head=hidden_size // num_heads,
|
186 |
+
is_causal=is_causal,
|
187 |
+
rotary_emb=temporal_rotary_emb,
|
188 |
+
reference_length=reference_length
|
189 |
+
)
|
190 |
+
self.t_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
191 |
+
self.t_mlp = Mlp(
|
192 |
+
in_features=hidden_size,
|
193 |
+
hidden_features=mlp_hidden_dim,
|
194 |
+
act_layer=approx_gelu,
|
195 |
+
drop=0,
|
196 |
+
)
|
197 |
+
self.t_adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 6 * hidden_size, bias=True))
|
198 |
+
|
199 |
+
self.use_reference_attention = use_reference_attention
|
200 |
+
if self.use_reference_attention:
|
201 |
+
self.r_norm1 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
202 |
+
self.ref_type = "full_ref"
|
203 |
+
if self.ref_type == "temporal_ref":
|
204 |
+
self.r_attn = MemTemporalAxialAttention(
|
205 |
+
hidden_size,
|
206 |
+
heads=num_heads,
|
207 |
+
dim_head=hidden_size // num_heads,
|
208 |
+
is_causal=is_causal,
|
209 |
+
rotary_emb=None
|
210 |
+
)
|
211 |
+
elif self.ref_type == "full_ref":
|
212 |
+
self.r_attn = MemFullAttention(
|
213 |
+
hidden_size,
|
214 |
+
heads=num_heads,
|
215 |
+
dim_head=hidden_size // num_heads,
|
216 |
+
is_causal=is_causal,
|
217 |
+
rotary_emb=reference_rotary_emb,
|
218 |
+
reference_length=reference_length
|
219 |
+
)
|
220 |
+
self.r_norm2 = nn.LayerNorm(hidden_size, elementwise_affine=False, eps=1e-6)
|
221 |
+
self.r_mlp = Mlp(
|
222 |
+
in_features=hidden_size,
|
223 |
+
hidden_features=mlp_hidden_dim,
|
224 |
+
act_layer=approx_gelu,
|
225 |
+
drop=0,
|
226 |
+
)
|
227 |
+
self.r_adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(hidden_size, 6 * hidden_size, bias=True))
|
228 |
+
|
229 |
+
self.use_plucker = use_plucker
|
230 |
+
if use_plucker:
|
231 |
+
self.pose_cond_mlp = nn.Linear(hidden_size, hidden_size)
|
232 |
+
self.temporal_pose_cond_mlp = nn.Linear(hidden_size, hidden_size)
|
233 |
+
|
234 |
+
self.reference_length = reference_length
|
235 |
+
self.relative_embedding = relative_embedding
|
236 |
+
self.cond_only_on_qk = cond_only_on_qk
|
237 |
+
|
238 |
+
self.ref_mode = ref_mode
|
239 |
+
|
240 |
+
if self.ref_mode == 'parallel':
|
241 |
+
self.parallel_map = nn.Linear(hidden_size, hidden_size)
|
242 |
+
|
243 |
+
def forward(self, x, c, current_frame=None, timestep=None, is_last_block=False,
|
244 |
+
pose_cond=None, mode="training", c_action_cond=None, reference_length=None):
|
245 |
+
B, T, H, W, D = x.shape
|
246 |
+
|
247 |
+
# spatial block
|
248 |
+
|
249 |
+
s_shift_msa, s_scale_msa, s_gate_msa, s_shift_mlp, s_scale_mlp, s_gate_mlp = self.s_adaLN_modulation(c).chunk(6, dim=-1)
|
250 |
+
x = x + gate(self.s_attn(modulate(self.s_norm1(x), s_shift_msa, s_scale_msa)), s_gate_msa)
|
251 |
+
x = x + gate(self.s_mlp(modulate(self.s_norm2(x), s_shift_mlp, s_scale_mlp)), s_gate_mlp)
|
252 |
+
|
253 |
+
# temporal block
|
254 |
+
if c_action_cond is not None:
|
255 |
+
t_shift_msa, t_scale_msa, t_gate_msa, t_shift_mlp, t_scale_mlp, t_gate_mlp = self.t_adaLN_modulation(c_action_cond).chunk(6, dim=-1)
|
256 |
+
else:
|
257 |
+
t_shift_msa, t_scale_msa, t_gate_msa, t_shift_mlp, t_scale_mlp, t_gate_mlp = self.t_adaLN_modulation(c).chunk(6, dim=-1)
|
258 |
+
|
259 |
+
x_t = x + gate(self.t_attn(modulate(self.t_norm1(x), t_shift_msa, t_scale_msa)), t_gate_msa)
|
260 |
+
x_t = x_t + gate(self.t_mlp(modulate(self.t_norm2(x_t), t_shift_mlp, t_scale_mlp)), t_gate_mlp)
|
261 |
+
|
262 |
+
if self.ref_mode == 'sequential':
|
263 |
+
x = x_t
|
264 |
+
|
265 |
+
# memory block
|
266 |
+
relative_embedding = self.relative_embedding # and mode == "training"
|
267 |
+
|
268 |
+
if self.use_reference_attention:
|
269 |
+
r_shift_msa, r_scale_msa, r_gate_msa, r_shift_mlp, r_scale_mlp, r_gate_mlp = self.r_adaLN_modulation(c).chunk(6, dim=-1)
|
270 |
+
|
271 |
+
if pose_cond is not None:
|
272 |
+
if self.use_plucker:
|
273 |
+
input_cond = self.pose_cond_mlp(pose_cond)
|
274 |
+
|
275 |
+
if relative_embedding:
|
276 |
+
n_frames = x.shape[1] - reference_length
|
277 |
+
x1_relative_embedding = []
|
278 |
+
r_shift_msa_relative_embedding = []
|
279 |
+
r_scale_msa_relative_embedding = []
|
280 |
+
for i in range(n_frames):
|
281 |
+
x1_relative_embedding.append(torch.cat([x[:,i:i+1], x[:, -reference_length:]], dim=1).clone())
|
282 |
+
r_shift_msa_relative_embedding.append(torch.cat([r_shift_msa[:,i:i+1], r_shift_msa[:, -reference_length:]], dim=1).clone())
|
283 |
+
r_scale_msa_relative_embedding.append(torch.cat([r_scale_msa[:,i:i+1], r_scale_msa[:, -reference_length:]], dim=1).clone())
|
284 |
+
x1_zero_frame = torch.cat(x1_relative_embedding, dim=1)
|
285 |
+
r_shift_msa = torch.cat(r_shift_msa_relative_embedding, dim=1)
|
286 |
+
r_scale_msa = torch.cat(r_scale_msa_relative_embedding, dim=1)
|
287 |
+
|
288 |
+
# if current_frame == 18:
|
289 |
+
# import pdb;pdb.set_trace()
|
290 |
+
|
291 |
+
if self.cond_only_on_qk:
|
292 |
+
attn_input = x1_zero_frame
|
293 |
+
extra_condition = input_cond
|
294 |
+
else:
|
295 |
+
attn_input = input_cond + x1_zero_frame
|
296 |
+
extra_condition = None
|
297 |
+
else:
|
298 |
+
attn_input = input_cond + x
|
299 |
+
extra_condition = None
|
300 |
+
# print("input_cond2:", input_cond.abs().mean())
|
301 |
+
# print("c:", c.abs().mean())
|
                    # input_cond = x1

                    x = x + gate(self.r_attn(modulate(self.r_norm1(attn_input), r_shift_msa, r_scale_msa),
                                             relative_embedding=relative_embedding,
                                             extra_condition=extra_condition,
                                             cond_only_on_qk=self.cond_only_on_qk,
                                             reference_length=reference_length), r_gate_msa)
                else:
                    # pose_cond *= 0
                    x = x + gate(self.r_attn(modulate(self.r_norm1(x + pose_cond[:, :, None, None]), r_shift_msa, r_scale_msa),
                                             current_frame=current_frame, timestep=timestep,
                                             is_last_block=is_last_block,
                                             reference_length=reference_length), r_gate_msa)
            else:
                x = x + gate(self.r_attn(modulate(self.r_norm1(x), r_shift_msa, r_scale_msa), current_frame=current_frame, timestep=timestep,
                                         is_last_block=is_last_block), r_gate_msa)

            x = x + gate(self.r_mlp(modulate(self.r_norm2(x), r_shift_mlp, r_scale_mlp)), r_gate_mlp)

            if self.ref_mode == 'parallel':
                x = x_t + self.parallel_map(x)

        return x

        # print((x1-x2).abs().sum())
        # r_shift_msa, r_scale_msa, r_gate_msa, r_shift_mlp, r_scale_mlp, r_gate_mlp = self.r_adaLN_modulation(c).chunk(6, dim=-1)
        # x2 = x1 + gate(self.r_attn(modulate(self.r_norm1(x_), r_shift_msa, r_scale_msa)), r_gate_msa)
        # x2 = gate(self.r_mlp(modulate(self.r_norm2(x2), r_shift_mlp, r_scale_mlp)), r_gate_mlp)
        # x = x1 + x2

        # print(x.mean())
        # return x


class DiT(nn.Module):
    """
    Diffusion model with a Transformer backbone.
    """

    def __init__(
        self,
        input_h=18,
        input_w=32,
        patch_size=2,
        in_channels=16,
        hidden_size=1024,
        depth=12,
        num_heads=16,
        mlp_ratio=4.0,
        action_cond_dim=25,
        pose_cond_dim=4,
        max_frames=32,
        reference_length=8,
        use_plucker=False,
        relative_embedding=False,
        cond_only_on_qk=False,
        use_reference_attention=False,
        add_frame_timestep_embedder=False,
        ref_mode='sequential'
    ):
        super().__init__()
        self.in_channels = in_channels
        self.out_channels = in_channels
        self.patch_size = patch_size
        self.num_heads = num_heads
        self.max_frames = max_frames

        self.x_embedder = PatchEmbed(input_h, input_w, patch_size, in_channels, hidden_size, flatten=False)
        self.t_embedder = TimestepEmbedder(hidden_size)

        self.add_frame_timestep_embedder = add_frame_timestep_embedder
        if self.add_frame_timestep_embedder:
            self.frame_timestep_embedder = TimestepEmbedder(hidden_size)

        frame_h, frame_w = self.x_embedder.grid_size

        self.spatial_rotary_emb = RotaryEmbedding(dim=hidden_size // num_heads // 2, freqs_for="pixel", max_freq=256)
        self.temporal_rotary_emb = RotaryEmbedding(dim=hidden_size // num_heads)
        # self.reference_rotary_emb = RotaryEmbedding(dim=hidden_size // num_heads // 2, freqs_for="pixel", max_freq=256)
        self.reference_rotary_emb = None

        self.external_cond = nn.Linear(action_cond_dim, hidden_size) if action_cond_dim > 0 else nn.Identity()

        # self.pose_cond = nn.Linear(pose_cond_dim, hidden_size) if pose_cond_dim > 0 else nn.Identity()

        self.use_plucker = use_plucker
        if not self.use_plucker:
            self.position_embedder = TimestepEmbedder(hidden_size, freq_type='spatial')
            self.angle_embedder = TimestepEmbedder(hidden_size, freq_type='angle')
        else:
            self.pose_embedder = SimpleCameraPoseEncoder(c_in=6, c_out=hidden_size)

        self.blocks = nn.ModuleList(
            [
                SpatioTemporalDiTBlock(
                    hidden_size,
                    num_heads,
                    mlp_ratio=mlp_ratio,
                    is_causal=True,
                    reference_length=reference_length,
                    spatial_rotary_emb=self.spatial_rotary_emb,
                    temporal_rotary_emb=self.temporal_rotary_emb,
                    reference_rotary_emb=self.reference_rotary_emb,
                    use_plucker=self.use_plucker,
                    relative_embedding=relative_embedding,
                    cond_only_on_qk=cond_only_on_qk,
                    use_reference_attention=use_reference_attention,
                    ref_mode=ref_mode
                )
                for _ in range(depth)
            ]
        )
        self.use_reference_attention = use_reference_attention
        self.final_layer = FinalLayer(hidden_size, patch_size, self.out_channels)
        self.initialize_weights()

    def initialize_weights(self):
        # Initialize transformer layers:
        def _basic_init(module):
            if isinstance(module, nn.Linear):
                torch.nn.init.xavier_uniform_(module.weight)
                if module.bias is not None:
                    nn.init.constant_(module.bias, 0)

        self.apply(_basic_init)

        # Initialize patch_embed like nn.Linear (instead of nn.Conv2d):
        w = self.x_embedder.proj.weight.data
        nn.init.xavier_uniform_(w.view([w.shape[0], -1]))
        nn.init.constant_(self.x_embedder.proj.bias, 0)

        # Initialize timestep embedding MLP:
        nn.init.normal_(self.t_embedder.mlp[0].weight, std=0.02)
        nn.init.normal_(self.t_embedder.mlp[2].weight, std=0.02)

        if self.use_reference_attention:
            if not self.use_plucker:
                nn.init.normal_(self.position_embedder.mlp[0].weight, std=0.02)
                nn.init.normal_(self.position_embedder.mlp[2].weight, std=0.02)

                nn.init.normal_(self.angle_embedder.mlp[0].weight, std=0.02)
                nn.init.normal_(self.angle_embedder.mlp[2].weight, std=0.02)

            if self.add_frame_timestep_embedder:
                nn.init.normal_(self.frame_timestep_embedder.mlp[0].weight, std=0.02)
                nn.init.normal_(self.frame_timestep_embedder.mlp[2].weight, std=0.02)

        # Zero-out adaLN modulation layers in DiT blocks:
        for block in self.blocks:
            nn.init.constant_(block.s_adaLN_modulation[-1].weight, 0)
            nn.init.constant_(block.s_adaLN_modulation[-1].bias, 0)
            nn.init.constant_(block.t_adaLN_modulation[-1].weight, 0)
            nn.init.constant_(block.t_adaLN_modulation[-1].bias, 0)

            if self.use_plucker and self.use_reference_attention:
                nn.init.constant_(block.pose_cond_mlp.weight, 0)
                nn.init.constant_(block.pose_cond_mlp.bias, 0)

        # Zero-out output layers:
        nn.init.constant_(self.final_layer.adaLN_modulation[-1].weight, 0)
        nn.init.constant_(self.final_layer.adaLN_modulation[-1].bias, 0)
        nn.init.constant_(self.final_layer.linear.weight, 0)
        nn.init.constant_(self.final_layer.linear.bias, 0)

    def unpatchify(self, x):
        """
        x: (N, H, W, patch_size**2 * C)
        imgs: (N, H, W, C)
        """
        c = self.out_channels
        p = self.x_embedder.patch_size[0]
        h = x.shape[1]
        w = x.shape[2]

        x = x.reshape(shape=(x.shape[0], h, w, p, p, c))
        x = torch.einsum("nhwpqc->nchpwq", x)
        imgs = x.reshape(shape=(x.shape[0], c, h * p, w * p))
        return imgs

    def forward(self, x, t, action_cond=None, pose_cond=None, current_frame=None, mode=None,
                reference_length=None, frame_idx=None):
        """
        Forward pass of DiT.
        x: (B, T, C, H, W) tensor of spatial inputs (images or latent representations of images)
        t: (B, T,) tensor of diffusion timesteps
        """

        B, T, C, H, W = x.shape

        # add spatial embeddings
        x = rearrange(x, "b t c h w -> (b t) c h w")

        x = self.x_embedder(x)  # (B*T, C, H, W) -> (B*T, H/2, W/2, D) , C = 16, D = d_model
        # restore shape
        x = rearrange(x, "(b t) h w d -> b t h w d", t=T)
        # embed noise steps
        t = rearrange(t, "b t -> (b t)")

        c_t = self.t_embedder(t)  # (N, D)
        c = c_t.clone()
        c = rearrange(c, "(b t) d -> b t d", t=T)

        if torch.is_tensor(action_cond):
            try:
                c_action_cond = c + self.external_cond(action_cond)
            except:
                import pdb;pdb.set_trace()
        else:
            c_action_cond = None

        if torch.is_tensor(pose_cond):
            if not self.use_plucker:
                pose_cond = pose_cond.to(action_cond.dtype)
                b_, t_, d_ = pose_cond.shape
                pos_emb = self.position_embedder(rearrange(pose_cond[..., :3], "b t d -> (b t d)"))
                angle_emb = self.angle_embedder(rearrange(pose_cond[..., 3:], "b t d -> (b t d)"))
                pos_emb = rearrange(pos_emb, "(b t d) c -> b t d c", b=b_, t=t_, d=3).sum(-2)
                angle_emb = rearrange(angle_emb, "(b t d) c -> b t d c", b=b_, t=t_, d=2).sum(-2)
                pc = pos_emb + angle_emb
            else:
                pose_cond = pose_cond[:, :, ::40, ::40]
                # pc = self.pose_embedder(pose_cond)[0]
                # pc = pc.permute(0,2,3,4,1)
                pc = self.pose_embedder(pose_cond)
                pc = pc.permute(1, 0, 2, 3, 4)

                if torch.is_tensor(frame_idx) and self.add_frame_timestep_embedder:
                    bb = frame_idx.shape[1]
                    frame_idx = rearrange(frame_idx, "t b -> (b t)")
                    frame_idx = self.frame_timestep_embedder(frame_idx)
                    frame_idx = rearrange(frame_idx, "(b t) d -> b t d", b=bb)
                    pc = pc + frame_idx[:, :, None, None]

                # pc = pc + rearrange(c_t.clone(), "(b t) d -> b t d", t=T)[:,:,None,None]  # add time condition for different timestep scaling
        else:
            pc = None

        for i, block in enumerate(self.blocks):
            x = block(x, c, current_frame=current_frame, timestep=t, is_last_block=(i + 1 == len(self.blocks)),
                      pose_cond=pc, mode=mode, c_action_cond=c_action_cond, reference_length=reference_length)  # (N, T, H, W, D)
        x = self.final_layer(x, c)  # (N, T, H, W, patch_size ** 2 * out_channels)
        # unpatchify
        x = rearrange(x, "b t h w d -> (b t) h w d")
        x = self.unpatchify(x)  # (N, out_channels, H, W)
        x = rearrange(x, "(b t) c h w -> b t c h w", t=T)

        # print("self.blocks[0].pose_cond_mlp.weight:", self.blocks[0].pose_cond_mlp.weight)
        # print("self.blocks[0].r_adaLN_modulation[1].weight:", self.blocks[0].r_adaLN_modulation[1].weight)
        # print("self.blocks[0].t_adaLN_modulation[1].weight:", self.blocks[0].t_adaLN_modulation[1].weight)

        return x


def DiT_S_2(action_cond_dim, pose_cond_dim, reference_length,
            use_plucker, relative_embedding,
            cond_only_on_qk, use_reference_attention, add_frame_timestep_embedder,
            ref_mode):
    return DiT(
        patch_size=2,
        hidden_size=1024,
        depth=16,
        num_heads=16,
        action_cond_dim=action_cond_dim,
        pose_cond_dim=pose_cond_dim,
        reference_length=reference_length,
        use_plucker=use_plucker,
        relative_embedding=relative_embedding,
        cond_only_on_qk=cond_only_on_qk,
        use_reference_attention=use_reference_attention,
        add_frame_timestep_embedder=add_frame_timestep_embedder,
        ref_mode=ref_mode
    )


DiT_models = {"DiT-S/2": DiT_S_2}
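# --- Illustrative usage sketch (not part of the committed file) ---
# A minimal smoke test for the DiT backbone above, assuming the default
# latent geometry (input_h=18, input_w=32, in_channels=16). The real call
# sites live in algorithms/worldmem/df_video.py; shapes here are assumptions.
import torch
from algorithms.worldmem.models.dit import DiT_models

model = DiT_models["DiT-S/2"](
    action_cond_dim=25, pose_cond_dim=4, reference_length=8,
    use_plucker=False, relative_embedding=False, cond_only_on_qk=False,
    use_reference_attention=False, add_frame_timestep_embedder=False,
    ref_mode="sequential",
)
latents = torch.randn(1, 4, 16, 18, 32)       # (B, T, C, H, W) VAE latents
timesteps = torch.randint(0, 1000, (1, 4))    # per-frame diffusion timesteps
actions = torch.randn(1, 4, 25)               # per-frame action conditioning
out = model(latents, timesteps, action_cond=actions)  # expected: (1, 4, 16, 18, 32)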
algorithms/worldmem/models/pose_prediction.py
ADDED
@@ -0,0 +1,42 @@
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosePredictionNet(nn.Module):
    def __init__(self, img_channels=16, img_feat_dim=256, pose_dim=5, action_dim=25, hidden_dim=128):
        super(PosePredictionNet, self).__init__()

        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 1))
        )

        self.fc_img = nn.Linear(128, img_feat_dim)

        self.mlp_motion = nn.Sequential(
            nn.Linear(pose_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU()
        )

        self.fc_out = nn.Sequential(
            nn.Linear(img_feat_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, pose_dim)
        )

    def forward(self, img, action, pose):
        img_feat = self.cnn(img).view(img.size(0), -1)
        img_feat = self.fc_img(img_feat)

        motion_feat = self.mlp_motion(torch.cat([pose, action], dim=1))
        fused_feat = torch.cat([img_feat, motion_feat], dim=1)
        pose_next_pred = self.fc_out(fused_feat)

        return pose_next_pred
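# --- Illustrative usage sketch (not part of the committed file) ---
# PosePredictionNet regresses the pose offset to the next frame from a latent
# frame, the action one-hot, and the current pose. Shapes are assumptions
# based on the defaults (img_channels=16, action_dim=25, pose_dim=5).
import torch

net = PosePredictionNet()
latent_frame = torch.randn(8, 16, 18, 32)   # (B, C, H, W) VAE latents
action = torch.randn(8, 25)                 # action vector per sample
pose = torch.randn(8, 5)                    # (x, y, z, pitch, yaw)
pose_offset = net(latent_frame, action, pose)   # -> (8, 5) predicted offset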
algorithms/worldmem/models/rotary_embedding_torch.py
ADDED
@@ -0,0 +1,302 @@
"""
Adapted from https://github.com/lucidrains/rotary-embedding-torch/blob/main/rotary_embedding_torch/rotary_embedding_torch.py
"""

from __future__ import annotations
from math import pi, log

import torch
from torch.nn import Module, ModuleList
from torch.amp import autocast
from torch import nn, einsum, broadcast_tensors, Tensor

from einops import rearrange, repeat

from typing import Literal

# helper functions


def exists(val):
    return val is not None


def default(val, d):
    return val if exists(val) else d


# broadcat, as tortoise-tts was using it


def broadcat(tensors, dim=-1):
    broadcasted_tensors = broadcast_tensors(*tensors)
    return torch.cat(broadcasted_tensors, dim=dim)


# rotary embedding helper functions


def rotate_half(x):
    x = rearrange(x, "... (d r) -> ... d r", r=2)
    x1, x2 = x.unbind(dim=-1)
    x = torch.stack((-x2, x1), dim=-1)
    return rearrange(x, "... d r -> ... (d r)")


@autocast("cuda", enabled=False)
def apply_rotary_emb(freqs, t, start_index=0, scale=1.0, seq_dim=-2):
    dtype = t.dtype

    if t.ndim == 3:
        seq_len = t.shape[seq_dim]
        freqs = freqs[-seq_len:]

    rot_dim = freqs.shape[-1]
    end_index = start_index + rot_dim

    assert rot_dim <= t.shape[-1], f"feature dimension {t.shape[-1]} is not of sufficient size to rotate in all the positions {rot_dim}"

    # Split t into three parts: left, middle (to be transformed), and right
    t_left = t[..., :start_index]
    t_middle = t[..., start_index:end_index]
    t_right = t[..., end_index:]

    # Apply rotary embeddings without modifying t in place
    t_transformed = (t_middle * freqs.cos() * scale) + (rotate_half(t_middle) * freqs.sin() * scale)

    out = torch.cat((t_left, t_transformed, t_right), dim=-1)

    return out.type(dtype)


# learned rotation helpers


def apply_learned_rotations(rotations, t, start_index=0, freq_ranges=None):
    if exists(freq_ranges):
        rotations = einsum("..., f -> ... f", rotations, freq_ranges)
        rotations = rearrange(rotations, "... r f -> ... (r f)")

    rotations = repeat(rotations, "... n -> ... (n r)", r=2)
    return apply_rotary_emb(rotations, t, start_index=start_index)


# classes


class RotaryEmbedding(Module):
    def __init__(
        self,
        dim,
        custom_freqs: Tensor | None = None,
        freqs_for: Literal["lang", "pixel", "constant"] = "lang",
        theta=10000,
        max_freq=10,
        num_freqs=1,
        learned_freq=False,
        use_xpos=False,
        xpos_scale_base=512,
        interpolate_factor=1.0,
        theta_rescale_factor=1.0,
        seq_before_head_dim=False,
        cache_if_possible=True,
        cache_max_seq_len=8192,
    ):
        super().__init__()
        # proposed by reddit user bloc97, to rescale rotary embeddings to longer sequence length without fine-tuning
        # has some connection to NTK literature
        # https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/

        theta *= theta_rescale_factor ** (dim / (dim - 2))

        self.freqs_for = freqs_for

        if exists(custom_freqs):
            freqs = custom_freqs
        elif freqs_for == "lang":
            freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim))
        elif freqs_for == "pixel":
            freqs = torch.linspace(1.0, max_freq / 2, dim // 2) * pi
        elif freqs_for == "spacetime":
            time_freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim))
            freqs = torch.linspace(1.0, max_freq / 2, dim // 2) * pi
        elif freqs_for == "constant":
            freqs = torch.ones(num_freqs).float()

        if freqs_for == "spacetime":
            self.time_freqs = nn.Parameter(time_freqs, requires_grad=learned_freq)
        self.freqs = nn.Parameter(freqs, requires_grad=learned_freq)

        self.cache_if_possible = cache_if_possible
        self.cache_max_seq_len = cache_max_seq_len

        self.register_buffer("cached_freqs", torch.zeros(cache_max_seq_len, dim), persistent=False)
        self.register_buffer("cached_freqs_seq_len", torch.tensor(0), persistent=False)

        self.learned_freq = learned_freq

        # dummy for device

        self.register_buffer("dummy", torch.tensor(0), persistent=False)

        # default sequence dimension

        self.seq_before_head_dim = seq_before_head_dim
        self.default_seq_dim = -3 if seq_before_head_dim else -2

        # interpolation factors

        assert interpolate_factor >= 1.0
        self.interpolate_factor = interpolate_factor

        # xpos

        self.use_xpos = use_xpos

        if not use_xpos:
            return

        scale = (torch.arange(0, dim, 2) + 0.4 * dim) / (1.4 * dim)
        self.scale_base = xpos_scale_base

        self.register_buffer("scale", scale, persistent=False)
        self.register_buffer("cached_scales", torch.zeros(cache_max_seq_len, dim), persistent=False)
        self.register_buffer("cached_scales_seq_len", torch.tensor(0), persistent=False)

        # add apply_rotary_emb as static method

        self.apply_rotary_emb = staticmethod(apply_rotary_emb)

    @property
    def device(self):
        return self.dummy.device

    def get_seq_pos(self, seq_len, device, dtype, offset=0):
        return (torch.arange(seq_len, device=device, dtype=dtype) + offset) / self.interpolate_factor

    def rotate_queries_or_keys(self, t, freqs, seq_dim=None, offset=0, scale=None):
        seq_dim = default(seq_dim, self.default_seq_dim)

        assert not self.use_xpos or exists(scale), "you must use `.rotate_queries_and_keys` method instead and pass in both queries and keys, for length extrapolatable rotary embeddings"

        device, dtype, seq_len = t.device, t.dtype, t.shape[seq_dim]

        seq = self.get_seq_pos(seq_len, device=device, dtype=dtype, offset=offset)

        seq_freqs = self.forward(seq, freqs, seq_len=seq_len, offset=offset)

        if seq_dim == -3:
            seq_freqs = rearrange(seq_freqs, "n d -> n 1 d")

        return apply_rotary_emb(seq_freqs, t, scale=default(scale, 1.0), seq_dim=seq_dim)

    def rotate_queries_with_cached_keys(self, q, k, seq_dim=None, offset=0):
        dtype, device, seq_dim = (
            q.dtype,
            q.device,
            default(seq_dim, self.default_seq_dim),
        )

        q_len, k_len = q.shape[seq_dim], k.shape[seq_dim]
        assert q_len <= k_len

        q_scale = k_scale = 1.0

        if self.use_xpos:
            seq = self.get_seq_pos(k_len, dtype=dtype, device=device)

            q_scale = self.get_scale(seq[-q_len:]).type(dtype)
            k_scale = self.get_scale(seq).type(dtype)

        rotated_q = self.rotate_queries_or_keys(q, seq_dim=seq_dim, scale=q_scale, offset=k_len - q_len + offset)
        rotated_k = self.rotate_queries_or_keys(k, seq_dim=seq_dim, scale=k_scale**-1)

        rotated_q = rotated_q.type(q.dtype)
        rotated_k = rotated_k.type(k.dtype)

        return rotated_q, rotated_k

    def rotate_queries_and_keys(self, q, k, freqs, seq_dim=None):
        seq_dim = default(seq_dim, self.default_seq_dim)

        assert self.use_xpos
        device, dtype, seq_len = q.device, q.dtype, q.shape[seq_dim]

        seq = self.get_seq_pos(seq_len, dtype=dtype, device=device)

        seq_freqs = self.forward(seq, freqs, seq_len=seq_len)
        scale = self.get_scale(seq, seq_len=seq_len).to(dtype)

        if seq_dim == -3:
            seq_freqs = rearrange(seq_freqs, "n d -> n 1 d")
            scale = rearrange(scale, "n d -> n 1 d")

        rotated_q = apply_rotary_emb(seq_freqs, q, scale=scale, seq_dim=seq_dim)
        rotated_k = apply_rotary_emb(seq_freqs, k, scale=scale**-1, seq_dim=seq_dim)

        rotated_q = rotated_q.type(q.dtype)
        rotated_k = rotated_k.type(k.dtype)

        return rotated_q, rotated_k

    def get_scale(self, t: Tensor, seq_len: int | None = None, offset=0):
        assert self.use_xpos

        should_cache = self.cache_if_possible and exists(seq_len) and (offset + seq_len) <= self.cache_max_seq_len

        if should_cache and exists(self.cached_scales) and (seq_len + offset) <= self.cached_scales_seq_len.item():
            return self.cached_scales[offset : (offset + seq_len)]

        scale = 1.0
        if self.use_xpos:
            power = (t - len(t) // 2) / self.scale_base
            scale = self.scale ** rearrange(power, "n -> n 1")
            scale = repeat(scale, "n d -> n (d r)", r=2)

        if should_cache and offset == 0:
            self.cached_scales[:seq_len] = scale.detach()
            self.cached_scales_seq_len.copy_(seq_len)

        return scale

    def get_axial_freqs(self, *dims):
        Colon = slice(None)
        all_freqs = []

        for ind, dim in enumerate(dims):
            # only allow pixel freqs for last two dimensions
            use_pixel = (self.freqs_for == "pixel" or self.freqs_for == "spacetime") and ind >= len(dims) - 2
            if use_pixel:
                pos = torch.linspace(-1, 1, steps=dim, device=self.device)
            else:
                pos = torch.arange(dim, device=self.device)

            if self.freqs_for == "spacetime" and not use_pixel:
                seq_freqs = self.forward(pos, self.time_freqs, seq_len=dim)
            else:
                seq_freqs = self.forward(pos, self.freqs, seq_len=dim)

            all_axis = [None] * len(dims)
            all_axis[ind] = Colon

            new_axis_slice = (Ellipsis, *all_axis, Colon)
            all_freqs.append(seq_freqs[new_axis_slice])

        all_freqs = broadcast_tensors(*all_freqs)
        return torch.cat(all_freqs, dim=-1)

    @autocast("cuda", enabled=False)
    def forward(self, t: Tensor, freqs: Tensor, seq_len=None, offset=0):
        should_cache = self.cache_if_possible and not self.learned_freq and exists(seq_len) and self.freqs_for != "pixel" and (offset + seq_len) <= self.cache_max_seq_len

        if should_cache and exists(self.cached_freqs) and (offset + seq_len) <= self.cached_freqs_seq_len.item():
            return self.cached_freqs[offset : (offset + seq_len)].detach()

        freqs = einsum("..., f -> ... f", t.type(freqs.dtype), freqs)
        freqs = repeat(freqs, "... n -> ... (n r)", r=2)

        if should_cache and offset == 0:
            self.cached_freqs[:seq_len] = freqs.detach()
            self.cached_freqs_seq_len.copy_(seq_len)

        return freqs
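# --- Illustrative usage sketch (not part of the committed file) ---
# Unlike the upstream rotary-embedding-torch API, this adapted class takes the
# frequency tensor explicitly, so callers pass rot.freqs (or rot.time_freqs in
# "spacetime" mode). Shapes below are assumptions for a
# (batch, heads, seq, head_dim) layout.
import torch

rot = RotaryEmbedding(dim=32)                 # rotates the first 32 of 64 head channels
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
q = rot.rotate_queries_or_keys(q, rot.freqs)
k = rot.rotate_queries_or_keys(k, rot.freqs)

# Axial "pixel" frequencies for a 2D feature map, as used by the VAE attention:
axial = RotaryEmbedding(dim=16, freqs_for="pixel", max_freq=256).get_axial_freqs(18, 32)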
algorithms/worldmem/models/utils.py
ADDED
@@ -0,0 +1,163 @@
"""
Adapted from https://github.com/buoyancy99/diffusion-forcing/blob/main/algorithms/diffusion_forcing/models/utils.py
Action format derived from VPT https://github.com/openai/Video-Pre-Training
Adapted from https://github.com/etched-ai/open-oasis/blob/master/utils.py
"""

import math
import torch
from torch import nn
from torchvision.io import read_image, read_video
from torchvision.transforms.functional import resize
from einops import rearrange
from typing import Mapping, Sequence
from einops import rearrange, parse_shape


def exists(val):
    return val is not None


def default(val, d):
    if exists(val):
        return val
    return d() if callable(d) else d


def extract(a, t, x_shape):
    f, b = t.shape
    out = a[t]
    return out.reshape(f, b, *((1,) * (len(x_shape) - 2)))


def linear_beta_schedule(timesteps):
    """
    linear schedule, proposed in original ddpm paper
    """
    scale = 1000 / timesteps
    beta_start = scale * 0.0001
    beta_end = scale * 0.02
    return torch.linspace(beta_start, beta_end, timesteps, dtype=torch.float64)


def cosine_beta_schedule(timesteps, s=0.008):
    """
    cosine schedule
    as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
    """
    steps = timesteps + 1
    t = torch.linspace(0, timesteps, steps, dtype=torch.float64) / timesteps
    alphas_cumprod = torch.cos((t + s) / (1 + s) * math.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return torch.clip(betas, 0, 0.999)


def sigmoid_beta_schedule(timesteps, start=-3, end=3, tau=1, clamp_min=1e-5):
    """
    sigmoid schedule
    proposed in https://arxiv.org/abs/2212.11972 - Figure 8
    better for images > 64x64, when used during training
    """
    steps = timesteps + 1
    t = torch.linspace(0, timesteps, steps, dtype=torch.float64) / timesteps
    v_start = torch.tensor(start / tau).sigmoid()
    v_end = torch.tensor(end / tau).sigmoid()
    alphas_cumprod = (-((t * (end - start) + start) / tau).sigmoid() + v_end) / (v_end - v_start)
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return torch.clip(betas, 0, 0.999)


ACTION_KEYS = [
    "inventory",
    "ESC",
    "hotbar.1",
    "hotbar.2",
    "hotbar.3",
    "hotbar.4",
    "hotbar.5",
    "hotbar.6",
    "hotbar.7",
    "hotbar.8",
    "hotbar.9",
    "forward",
    "back",
    "left",
    "right",
    "cameraX",
    "cameraY",
    "jump",
    "sneak",
    "sprint",
    "swapHands",
    "attack",
    "use",
    "pickItem",
    "drop",
]


def one_hot_actions(actions: Sequence[Mapping[str, int]]) -> torch.Tensor:
    actions_one_hot = torch.zeros(len(actions), len(ACTION_KEYS))
    for i, current_actions in enumerate(actions):
        for j, action_key in enumerate(ACTION_KEYS):
            if action_key.startswith("camera"):
                if action_key == "cameraX":
                    value = current_actions["camera"][0]
                elif action_key == "cameraY":
                    value = current_actions["camera"][1]
                else:
                    raise ValueError(f"Unknown camera action key: {action_key}")
                max_val = 20
                bin_size = 0.5
                num_buckets = int(max_val / bin_size)
                value = (value - num_buckets) / num_buckets
                assert -1 - 1e-3 <= value <= 1 + 1e-3, f"Camera action value must be in [-1, 1], got {value}"
            else:
                value = current_actions[action_key]
                assert 0 <= value <= 1, f"Action value must be in [0, 1] got {value}"
            actions_one_hot[i, j] = value

    return actions_one_hot


IMAGE_EXTENSIONS = {"png", "jpg", "jpeg"}
VIDEO_EXTENSIONS = {"mp4"}


def load_prompt(path, video_offset=None, n_prompt_frames=1):
    if path.lower().split(".")[-1] in IMAGE_EXTENSIONS:
        print("prompt is image; ignoring video_offset and n_prompt_frames")
        prompt = read_image(path)
        # add frame dimension
        prompt = rearrange(prompt, "c h w -> 1 c h w")
    elif path.lower().split(".")[-1] in VIDEO_EXTENSIONS:
        prompt = read_video(path, pts_unit="sec")[0]
        if video_offset is not None:
            prompt = prompt[video_offset:]
        prompt = prompt[:n_prompt_frames]
    else:
        raise ValueError(f"unrecognized prompt file extension; expected one in {IMAGE_EXTENSIONS} or {VIDEO_EXTENSIONS}")
    assert prompt.shape[0] == n_prompt_frames, f"input prompt {path} had less than n_prompt_frames={n_prompt_frames} frames"
    prompt = resize(prompt, (360, 640))
    # add batch dimension
    prompt = rearrange(prompt, "t c h w -> 1 t c h w")
    prompt = prompt.float() / 255.0
    return prompt


def load_actions(path, action_offset=None):
    if path.endswith(".actions.pt"):
        actions = one_hot_actions(torch.load(path))
    elif path.endswith(".one_hot_actions.pt"):
        actions = torch.load(path, weights_only=True)
    else:
        raise ValueError("unrecognized action file extension; expected '*.actions.pt' or '*.one_hot_actions.pt'")
    if action_offset is not None:
        actions = actions[action_offset:]
    actions = torch.cat([torch.zeros_like(actions[:1]), actions], dim=0)
    # add batch dimension
    actions = rearrange(actions, "t d -> 1 t d")
    return actions
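# --- Illustrative usage sketch (not part of the committed file) ---
# Converting a VPT-style action dict into the 25-dim vector consumed by the
# DiT, and building a noise schedule. The specific action values are made up.
import torch

action = {k: 0 for k in ACTION_KEYS if not k.startswith("camera")}
action["camera"] = (0.0, 10.0)   # camera deltas, bucketed by the helper
action["forward"] = 1
vec = one_hot_actions([action])  # -> tensor of shape (1, 25)

betas = sigmoid_beta_schedule(timesteps=1000)   # -> (1000,) float64 betas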
algorithms/worldmem/models/vae.py
ADDED
@@ -0,0 +1,359 @@
"""
References:
- VQGAN: https://github.com/CompVis/taming-transformers
- MAE: https://github.com/facebookresearch/mae
"""

import numpy as np
import math
import functools
from collections import namedtuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange
from timm.models.vision_transformer import Mlp
from timm.layers.helpers import to_2tuple
from rotary_embedding_torch import RotaryEmbedding, apply_rotary_emb
from .dit import PatchEmbed


class DiagonalGaussianDistribution(object):
    def __init__(self, parameters, deterministic=False, dim=1):
        self.parameters = parameters
        self.mean, self.logvar = torch.chunk(parameters, 2, dim=dim)
        if dim == 1:
            self.dims = [1, 2, 3]
        elif dim == 2:
            self.dims = [1, 2]
        else:
            raise NotImplementedError
        self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
        self.deterministic = deterministic
        self.std = torch.exp(0.5 * self.logvar)
        self.var = torch.exp(self.logvar)
        if self.deterministic:
            self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)

    def sample(self):
        x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
        return x

    def mode(self):
        return self.mean


class Attention(nn.Module):
    def __init__(
        self,
        dim,
        num_heads,
        frame_height,
        frame_width,
        qkv_bias=False,
    ):
        super().__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.frame_height = frame_height
        self.frame_width = frame_width

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.proj = nn.Linear(dim, dim)

        rotary_freqs = RotaryEmbedding(
            dim=head_dim // 4,
            freqs_for="pixel",
            max_freq=frame_height * frame_width,
        ).get_axial_freqs(frame_height, frame_width)
        self.register_buffer("rotary_freqs", rotary_freqs, persistent=False)

    def forward(self, x):
        B, N, C = x.shape
        assert N == self.frame_height * self.frame_width

        q, k, v = self.qkv(x).chunk(3, dim=-1)

        q = rearrange(
            q,
            "b (H W) (h d) -> b h H W d",
            H=self.frame_height,
            W=self.frame_width,
            h=self.num_heads,
        )
        k = rearrange(
            k,
            "b (H W) (h d) -> b h H W d",
            H=self.frame_height,
            W=self.frame_width,
            h=self.num_heads,
        )
        v = rearrange(
            v,
            "b (H W) (h d) -> b h H W d",
            H=self.frame_height,
            W=self.frame_width,
            h=self.num_heads,
        )

        q = apply_rotary_emb(self.rotary_freqs, q)
        k = apply_rotary_emb(self.rotary_freqs, k)

        q = rearrange(q, "b h H W d -> b h (H W) d")
        k = rearrange(k, "b h H W d -> b h (H W) d")
        v = rearrange(v, "b h H W d -> b h (H W) d")

        x = F.scaled_dot_product_attention(q, k, v)
        x = rearrange(x, "b h N d -> b N (h d)")

        x = self.proj(x)
        return x


class AttentionBlock(nn.Module):
    def __init__(
        self,
        dim,
        num_heads,
        frame_height,
        frame_width,
        mlp_ratio=4.0,
        qkv_bias=False,
        attn_causal=False,
        act_layer=nn.GELU,
        norm_layer=nn.LayerNorm,
    ):
        super().__init__()
        self.norm1 = norm_layer(dim)
        self.attn = Attention(
            dim,
            num_heads,
            frame_height,
            frame_width,
            qkv_bias=qkv_bias,
        )
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(
            in_features=dim,
            hidden_features=mlp_hidden_dim,
            act_layer=act_layer,
        )

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x


class AutoencoderKL(nn.Module):
    def __init__(
        self,
        latent_dim,
        input_height=256,
        input_width=256,
        patch_size=16,
        enc_dim=768,
        enc_depth=6,
        enc_heads=12,
        dec_dim=768,
        dec_depth=6,
        dec_heads=12,
        mlp_ratio=4.0,
        norm_layer=functools.partial(nn.LayerNorm, eps=1e-6),
        use_variational=True,
        **kwargs,
    ):
        super().__init__()
        self.input_height = input_height
        self.input_width = input_width
        self.patch_size = patch_size
        self.seq_h = input_height // patch_size
        self.seq_w = input_width // patch_size
        self.seq_len = self.seq_h * self.seq_w
        self.patch_dim = 3 * patch_size**2

        self.latent_dim = latent_dim
        self.enc_dim = enc_dim
        self.dec_dim = dec_dim

        # patch
        self.patch_embed = PatchEmbed(input_height, input_width, patch_size, 3, enc_dim)

        # encoder
        self.encoder = nn.ModuleList(
            [
                AttentionBlock(
                    enc_dim,
                    enc_heads,
                    self.seq_h,
                    self.seq_w,
                    mlp_ratio,
                    qkv_bias=True,
                    norm_layer=norm_layer,
                )
                for i in range(enc_depth)
            ]
        )
        self.enc_norm = norm_layer(enc_dim)

        # bottleneck
        self.use_variational = use_variational
        mult = 2 if self.use_variational else 1
        self.quant_conv = nn.Linear(enc_dim, mult * latent_dim)
        self.post_quant_conv = nn.Linear(latent_dim, dec_dim)

        # decoder
        self.decoder = nn.ModuleList(
            [
                AttentionBlock(
                    dec_dim,
                    dec_heads,
                    self.seq_h,
                    self.seq_w,
                    mlp_ratio,
                    qkv_bias=True,
                    norm_layer=norm_layer,
                )
                for i in range(dec_depth)
            ]
        )
        self.dec_norm = norm_layer(dec_dim)
        self.predictor = nn.Linear(dec_dim, self.patch_dim)  # decoder to patch

        # initialize this weight first
        self.initialize_weights()

    def initialize_weights(self):
        # initialization
        # initialize nn.Linear and nn.LayerNorm
        self.apply(self._init_weights)

        # initialize patch_embed like nn.Linear (instead of nn.Conv2d)
        w = self.patch_embed.proj.weight.data
        nn.init.xavier_uniform_(w.view([w.shape[0], -1]))

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            # we use xavier_uniform following official JAX ViT:
            nn.init.xavier_uniform_(m.weight)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0.0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0.0)
            nn.init.constant_(m.weight, 1.0)

    def patchify(self, x):
        # patchify
        bsz, _, h, w = x.shape
        x = x.reshape(
            bsz,
            3,
            self.seq_h,
            self.patch_size,
            self.seq_w,
            self.patch_size,
        ).permute([0, 1, 3, 5, 2, 4])  # [b, c, h, p, w, p] --> [b, c, p, p, h, w]
        x = x.reshape(bsz, self.patch_dim, self.seq_h, self.seq_w)  # --> [b, cxpxp, h, w]
        x = x.permute([0, 2, 3, 1]).reshape(bsz, self.seq_len, self.patch_dim)  # --> [b, hxw, cxpxp]
        return x

    def unpatchify(self, x):
        bsz = x.shape[0]
        # unpatchify
        x = x.reshape(bsz, self.seq_h, self.seq_w, self.patch_dim).permute([0, 3, 1, 2])  # [b, h, w, cxpxp] --> [b, cxpxp, h, w]
        x = x.reshape(
            bsz,
            3,
            self.patch_size,
            self.patch_size,
            self.seq_h,
            self.seq_w,
        ).permute([0, 1, 4, 2, 5, 3])  # [b, c, p, p, h, w] --> [b, c, h, p, w, p]
        x = x.reshape(
            bsz,
            3,
            self.input_height,
            self.input_width,
        )  # [b, c, hxp, wxp]
        return x

    def encode(self, x):
        # patchify
        x = self.patch_embed(x)

        # encoder
        for blk in self.encoder:
            x = blk(x)
        x = self.enc_norm(x)

        # bottleneck
        moments = self.quant_conv(x)
        if not self.use_variational:
            moments = torch.cat((moments, torch.zeros_like(moments)), 2)
        posterior = DiagonalGaussianDistribution(moments, deterministic=(not self.use_variational), dim=2)
        return posterior

    def decode(self, z):
        # bottleneck
        z = self.post_quant_conv(z)

        # decoder
        for blk in self.decoder:
            z = blk(z)
        z = self.dec_norm(z)

        # predictor
        z = self.predictor(z)

        # unpatchify
        dec = self.unpatchify(z)
        return dec

    def autoencode(self, input, sample_posterior=True):
        posterior = self.encode(input)
        if self.use_variational and sample_posterior:
            z = posterior.sample()
        else:
            z = posterior.mode()
        dec = self.decode(z)
        return dec, posterior, z

    def get_input(self, batch, k):
        x = batch[k]
        if len(x.shape) == 3:
            x = x[..., None]
        x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float()
        return x

    def forward(self, inputs, labels, split="train"):
        rec, post, latent = self.autoencode(inputs)
        return rec, post, latent

    def get_last_layer(self):
        return self.predictor.weight


def ViT_L_20_Shallow_Encoder(**kwargs):
    if "latent_dim" in kwargs:
        latent_dim = kwargs.pop("latent_dim")
    else:
        latent_dim = 16
    return AutoencoderKL(
        latent_dim=latent_dim,
        patch_size=20,
        enc_dim=1024,
        enc_depth=6,
        enc_heads=16,
        dec_dim=1024,
        dec_depth=12,
        dec_heads=16,
        input_height=360,
        input_width=640,
        **kwargs,
    )


VAE_models = {
    "vit-l-20-shallow-encoder": ViT_L_20_Shallow_Encoder,
}
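# --- Illustrative usage sketch (not part of the committed file) ---
# Round-tripping a 360x640 frame through the ViT autoencoder. The
# 0.07843137255 latent scaling factor is the one used elsewhere in this diff
# (algorithms/worldmem/pose_prediction.py); input normalization to [-1, 1]
# follows the same call sites. Shapes are assumptions from the config above.
import torch

vae = VAE_models["vit-l-20-shallow-encoder"]().eval()
frames = torch.rand(1, 3, 360, 640)                  # RGB in [0, 1]
with torch.no_grad():
    posterior = vae.encode(frames * 2 - 1)           # tokens on an 18x32 grid
    z = posterior.mean * 0.07843137255               # -> (1, 576, 16)
    recon = (vae.decode(z / 0.07843137255) + 1) / 2  # -> (1, 3, 360, 640)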
algorithms/worldmem/pose_prediction.py
ADDED
@@ -0,0 +1,374 @@
from omegaconf import DictConfig
import torch
from lightning.pytorch.utilities.types import STEP_OUTPUT
from algorithms.common.metrics import (
    FrechetInceptionDistance,
    LearnedPerceptualImagePatchSimilarity,
    FrechetVideoDistance,
)
from .df_base import DiffusionForcingBase
from utils.logging_utils import log_video, get_validation_metrics_for_videos
from .models.vae import VAE_models
from .models.dit import DiT_models
from einops import rearrange
from torch import autocast
import numpy as np
from tqdm import tqdm
import torch.nn.functional as F
from .models.pose_prediction import PosePredictionNet
import torchvision.transforms.functional as TF
import random
from torchvision.transforms import InterpolationMode
from PIL import Image
import math
from packaging import version as pver
import torch.distributed as dist
import matplotlib.pyplot as plt

import torch
import math
import wandb

import torch.nn as nn
from algorithms.common.base_pytorch_algo import BasePytorchAlgo

class PosePrediction(BasePytorchAlgo):

    def __init__(self, cfg: DictConfig):

        super().__init__(cfg)

    def _build_model(self):
        self.pose_prediction_model = PosePredictionNet()
        vae = VAE_models["vit-l-20-shallow-encoder"]()
        self.vae = vae.eval()

    def training_step(self, batch, batch_idx) -> STEP_OUTPUT:
        xs, conditions, pose_conditions = batch
        pose_conditions[:, :, 3:] = pose_conditions[:, :, 3:] // 15
        xs = self.encode(xs)

        b, f, c, h, w = xs.shape
        xs = xs[:, :-1].reshape(-1, c, h, w)
        conditions = conditions[:, 1:].reshape(-1, 25)
        offset_gt = pose_conditions[:, 1:] - pose_conditions[:, :-1]
        pose_conditions = pose_conditions[:, :-1].reshape(-1, 5)
        offset_gt = offset_gt.reshape(-1, 5)
        offset_gt[:, 3][offset_gt[:, 3] == 23] = -1
        offset_gt[:, 3][offset_gt[:, 3] == -23] = 1
        offset_gt[:, 4][offset_gt[:, 4] == 23] = -1
        offset_gt[:, 4][offset_gt[:, 4] == -23] = 1

        offset_pred = self.pose_prediction_model(xs, conditions, pose_conditions)
        criterion = nn.MSELoss()
        loss = criterion(offset_pred, offset_gt)
        if batch_idx % 200 == 0:
            self.log("training/loss", loss.cpu())
        output_dict = {
            "loss": loss}
        return output_dict

    def encode(self, x):
        # vae encoding
        B = x.shape[1]
        T = x.shape[0]
        H, W = x.shape[-2:]
        scaling_factor = 0.07843137255

        x = rearrange(x, "t b c h w -> (t b) c h w")
        with torch.no_grad():
            with autocast("cuda", dtype=torch.half):
                x = self.vae.encode(x * 2 - 1).mean * scaling_factor
        x = rearrange(x, "(t b) (h w) c -> t b c h w", t=T, h=H // self.vae.patch_size, w=W // self.vae.patch_size)
        # x = x[:, :n_prompt_frames]
        return x

    def decode(self, x):
        total_frames = x.shape[0]
        scaling_factor = 0.07843137255
        x = rearrange(x, "t b c h w -> (t b) (h w) c")
        with torch.no_grad():
            with autocast("cuda", dtype=torch.half):
                x = (self.vae.decode(x / scaling_factor) + 1) / 2

        x = rearrange(x, "(t b) c h w -> t b c h w", t=total_frames)
        return x

    def validation_step(self, batch, batch_idx, namespace="validation") -> STEP_OUTPUT:
        xs, conditions, pose_conditions = batch
        pose_conditions[:, :, 3:] = pose_conditions[:, :, 3:] // 15
        xs = self.encode(xs)

        b, f, c, h, w = xs.shape
        xs = xs[:, :-1].reshape(-1, c, h, w)
        conditions = conditions[:, 1:].reshape(-1, 25)
        offset_gt = pose_conditions[:, 1:] - pose_conditions[:, :-1]
        pose_conditions = pose_conditions[:, :-1].reshape(-1, 5)
        offset_gt = offset_gt.reshape(-1, 5)
        offset_gt[:, 3][offset_gt[:, 3] == 23] = -1
        offset_gt[:, 3][offset_gt[:, 3] == -23] = 1
        offset_gt[:, 4][offset_gt[:, 4] == 23] = -1
        offset_gt[:, 4][offset_gt[:, 4] == -23] = 1

        offset_pred = self.pose_prediction_model(xs, conditions, pose_conditions)

        criterion = nn.MSELoss()
        loss = criterion(offset_pred, offset_gt)

        if batch_idx % 200 == 0:
            self.log("validation/loss", loss.cpu())
        output_dict = {
            "loss": loss}
        return

    @torch.no_grad()
    def interactive(self, batch, context_frames, device):
        with torch.cuda.amp.autocast():
            condition_similar_length = self.condition_similar_length
            # xs_raw, conditions, pose_conditions, c2w_mat, masks, frame_idx = self._preprocess_batch(batch)

            first_frame, new_conditions, new_pose_conditions, new_c2w_mat, new_frame_idx = batch

            if self.frames is None:
                first_frame_encode = self.encode(first_frame[None, None].to(device))
                self.frames = first_frame_encode.to(device)
                self.actions = new_conditions[None, None].to(device)
                self.poses = new_pose_conditions[None, None].to(device)
                self.memory_c2w = new_c2w_mat[None, None].to(device)
                self.frame_idx = torch.tensor([[new_frame_idx]]).to(device)
                return first_frame
            else:
                self.actions = torch.cat([self.actions, new_conditions[None, None].to(device)])
                self.poses = torch.cat([self.poses, new_pose_conditions[None, None].to(device)])
                self.memory_c2w = torch.cat([self.memory_c2w, new_c2w_mat[None, None].to(device)])
                self.frame_idx = torch.cat([self.frame_idx, torch.tensor([[new_frame_idx]]).to(device)])

            conditions = self.actions.clone()
            pose_conditions = self.poses.clone()
            c2w_mat = self.memory_c2w.clone()
            frame_idx = self.frame_idx.clone()

            curr_frame = 0
            horizon = 1
            batch_size = 1
            n_frames = curr_frame + horizon
            # context
            n_context_frames = context_frames // self.frame_stack
            xs_pred = self.frames[:n_context_frames].clone()
            curr_frame += n_context_frames

            pbar = tqdm(total=n_frames, initial=curr_frame, desc="Sampling")

            # generation on frame
            scheduling_matrix = self._generate_scheduling_matrix(horizon)
            chunk = torch.randn((horizon, batch_size, *xs_pred.shape[2:])).to(xs_pred.device)
            chunk = torch.clamp(chunk, -self.clip_noise, self.clip_noise)

            xs_pred = torch.cat([xs_pred, chunk], 0)

            # sliding window: only input the last n_tokens frames
            start_frame = max(0, curr_frame + horizon - self.n_tokens)

            pbar.set_postfix(
                {
                    "start": start_frame,
                    "end": curr_frame + horizon,
                }
            )

            if condition_similar_length:

                if curr_frame < condition_similar_length:
                    random_idx = [i for i in range(curr_frame)] + [0] * (condition_similar_length - curr_frame)
                    random_idx = np.repeat(np.array(random_idx)[:, None], xs_pred.shape[1], -1)
                else:
                    num_samples = 10000
                    radius = 30
                    samples = torch.rand((num_samples, 1), device=pose_conditions.device)
                    angles = 2 * np.pi * torch.rand((num_samples,), device=pose_conditions.device)
                    # points = radius * torch.sqrt(samples) * torch.stack((torch.cos(angles), torch.sin(angles)), dim=1)

                    points = generate_points_in_sphere(num_samples, radius).to(pose_conditions.device)
                    points = points[:, None].repeat(1, pose_conditions.shape[1], 1)
                    points += pose_conditions[curr_frame, :, :3][None]
                    fov_half_h = torch.tensor(105 / 2, device=pose_conditions.device)
                    fov_half_v = torch.tensor(75 / 2, device=pose_conditions.device)
                    # in_fov1 = is_inside_fov(points, pose_conditions[curr_frame, :, [0, 2]], pose_conditions[curr_frame, :, -1], fov_half)

                    in_fov1 = is_inside_fov_3d_hv(points, pose_conditions[curr_frame, :, :3],
                                                  pose_conditions[curr_frame, :, -2], pose_conditions[curr_frame, :, -1],
                                                  fov_half_h, fov_half_v)

                    in_fov_list = []
                    for pc in pose_conditions[:curr_frame]:
                        in_fov_list.append(is_inside_fov_3d_hv(points, pc[:, :3], pc[:, -2], pc[:, -1],
                                                               fov_half_h, fov_half_v))

                    in_fov_list = torch.stack(in_fov_list)
                    # v3
                    random_idx = []

                    for csl in range(self.condition_similar_length // 2):
                        overlap_ratio = ((in_fov1[None].bool() & in_fov_list).sum(1)) / in_fov1.sum()
                        # mask = distance > (in_fov1.bool().sum(0) / 4)
                        # _, r_idx = torch.topk(overlap_ratio / tensor_max_with_number((frame_idx[curr_frame] - frame_idx[:curr_frame]), 10), k=1, dim=0)

                        # if csl > self.condition_similar_length:
                        #     _, r_idx = torch.topk(overlap_ratio, k=1, dim=0)
                        # else:
                        #     _, r_idx = torch.topk(overlap_ratio / tensor_max_with_number((frame_idx[curr_frame] - frame_idx[:curr_frame]), 10), k=1, dim=0)

                        _, r_idx = torch.topk(overlap_ratio, k=1, dim=0)
                        # _, r_idx = torch.topk(overlap_ratio / tensor_max_with_number((frame_idx[curr_frame] - frame_idx[:curr_frame]), 10), k=1, dim=0)

                        # if curr_frame >= 93:
                        #     import pdb;pdb.set_trace()

                        # start_time = time.time()
                        cos_sim = F.cosine_similarity(xs_pred.to(r_idx.device)[r_idx[:, range(in_fov1.shape[1])],
                                                      range(in_fov1.shape[1])], xs_pred.to(r_idx.device)[:curr_frame], dim=2)
                        cos_sim = cos_sim.mean((-2, -1))

                        mask_sim = cos_sim > 0.9
                        in_fov_list = in_fov_list & ~mask_sim[:, None].to(in_fov_list.device)

                        random_idx.append(r_idx)

                    for bi in range(conditions.shape[1]):
                        if len(torch.nonzero(conditions[:, bi, 24] == 1)) == 0:
                            pass
                        else:
                            last_idx = torch.nonzero(conditions[:, bi, 24] == 1)[-1]
                            in_fov_list[:last_idx, :, bi] = False

                    for csl in range(self.condition_similar_length // 2):
                        overlap_ratio = ((in_fov1[None].bool() & in_fov_list).sum(1)) / in_fov1.sum()
                        # mask = distance > (in_fov1.bool().sum(0) / 4)
                        # _, r_idx = torch.topk(overlap_ratio / tensor_max_with_number((frame_idx[curr_frame] - frame_idx[:curr_frame]), 10), k=1, dim=0)

                        # if csl > self.condition_similar_length:
                        #     _, r_idx = torch.topk(overlap_ratio, k=1, dim=0)
                        # else:
                        #     _, r_idx = torch.topk(overlap_ratio / tensor_max_with_number((frame_idx[curr_frame] - frame_idx[:curr_frame]), 10), k=1, dim=0)

                        _, r_idx = torch.topk(overlap_ratio, k=1, dim=0)
                        # _, r_idx = torch.topk(overlap_ratio / tensor_max_with_number((frame_idx[curr_frame] - frame_idx[:curr_frame]), 10), k=1, dim=0)

                        # if curr_frame >= 93:
                        #     import pdb;pdb.set_trace()

                        # start_time = time.time()
                        cos_sim = F.cosine_similarity(xs_pred.to(r_idx.device)[r_idx[:, range(in_fov1.shape[1])],
                                                      range(in_fov1.shape[1])], xs_pred.to(r_idx.device)[:curr_frame], dim=2)
                        cos_sim = cos_sim.mean((-2, -1))

                        mask_sim = cos_sim > 0.9
                        in_fov_list = in_fov_list & ~mask_sim[:, None].to(in_fov_list.device)

                        random_idx.append(r_idx)

                random_idx = torch.cat(random_idx).cpu()
                condition_similar_length = len(random_idx)

                xs_pred = torch.cat([xs_pred, xs_pred[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])].clone()], 0)

            if condition_similar_length:
                # import pdb;pdb.set_trace()
                padding = torch.zeros((condition_similar_length,) + conditions.shape[1:], device=conditions.device, dtype=conditions.dtype)
                input_condition = torch.cat([conditions[start_frame : curr_frame + horizon], padding], dim=0)
                if self.pose_cond_dim:
                    # if not self.use_plucker:
                    input_pose_condition = torch.cat([pose_conditions[start_frame : curr_frame + horizon], pose_conditions[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])]], dim=0).clone()

                    if self.use_plucker:
                        if self.all_zero_frame:
                            frame_idx_list = []
                            input_pose_condition = []
                            for i in range(start_frame, curr_frame + horizon):
                                input_pose_condition.append(convert_to_plucker(torch.cat([c2w_mat[i:i+1], c2w_mat[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])]]).clone(), 0, focal_length=self.focal_length, is_old_setting=self.old_setting).to(xs_pred.dtype))
                                frame_idx_list.append(torch.cat([frame_idx[i:i+1] - frame_idx[i:i+1], frame_idx[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])] - frame_idx[i:i+1]]))
                            input_pose_condition = torch.cat(input_pose_condition)
                            frame_idx_list = torch.cat(frame_idx_list)

                            # print(frame_idx_list[:,0])
                        else:
                            # print(curr_frame-start_frame)
                            # input_pose_condition = torch.cat([c2w_mat[start_frame : curr_frame + horizon], c2w_mat[random_idx[:,range(xs_pred.shape[1])], range(xs_pred.shape[1])]], dim=0).clone()
                            # import pdb;pdb.set_trace()
                            if self.last_frame_refer:
                                input_pose_condition = torch.cat([c2w_mat[start_frame : curr_frame + horizon], c2w_mat[-1:]], dim=0).clone()
                            else:
                                input_pose_condition = torch.cat([c2w_mat[start_frame : curr_frame + horizon], c2w_mat[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])]], dim=0).clone()

                            if self.zero_curr:
                                # print("="*50)
                                input_pose_condition = convert_to_plucker(input_pose_condition, curr_frame - start_frame, focal_length=self.focal_length, is_old_setting=self.old_setting)
                                # input_pose_condition[:curr_frame-start_frame] = input_pose_condition[curr_frame-start_frame:curr_frame-start_frame+1]
                                # input_pose_condition = convert_to_plucker(input_pose_condition, -self.condition_similar_length-1, focal_length=self.focal_length)
                            else:
                                input_pose_condition = convert_to_plucker(input_pose_condition, -condition_similar_length, focal_length=self.focal_length, is_old_setting=self.old_setting)
                            frame_idx_list = None
                    else:
                        input_pose_condition = torch.cat([pose_conditions[start_frame : curr_frame + horizon], pose_conditions[random_idx[:, range(xs_pred.shape[1])], range(xs_pred.shape[1])]], dim=0).clone()
                        frame_idx_list = None
            else:
                input_condition = conditions[start_frame : curr_frame + horizon]
                input_pose_condition = None
                frame_idx_list = None

            for m in range(scheduling_matrix.shape[0] - 1):
                from_noise_levels = np.concatenate((np.zeros((curr_frame,), dtype=np.int64), scheduling_matrix[m]))[
                    :, None
                ].repeat(batch_size, axis=1)
                to_noise_levels = np.concatenate(
                    (
                        np.zeros((curr_frame,), dtype=np.int64),
                        scheduling_matrix[m + 1],
                    )
                )[
                    :, None
                ].repeat(batch_size, axis=1)

                if condition_similar_length:
                    from_noise_levels = np.concatenate([from_noise_levels, np.zeros((condition_similar_length, from_noise_levels.shape[-1]), dtype=np.int32)], axis=0)
                    to_noise_levels = np.concatenate([to_noise_levels, np.zeros((condition_similar_length, from_noise_levels.shape[-1]), dtype=np.int32)], axis=0)

                from_noise_levels = torch.from_numpy(from_noise_levels).to(self.device)
                to_noise_levels = torch.from_numpy(to_noise_levels).to(self.device)

                if input_pose_condition is not None:
|
342 |
+
input_pose_condition = input_pose_condition.to(xs_pred.dtype)
|
343 |
+
|
344 |
+
xs_pred[start_frame:] = self.diffusion_model.sample_step(
|
345 |
+
xs_pred[start_frame:],
|
346 |
+
input_condition,
|
347 |
+
input_pose_condition,
|
348 |
+
from_noise_levels[start_frame:],
|
349 |
+
to_noise_levels[start_frame:],
|
350 |
+
current_frame=curr_frame,
|
351 |
+
mode="validation",
|
352 |
+
reference_length=condition_similar_length,
|
353 |
+
frame_idx=frame_idx_list
|
354 |
+
)
|
355 |
+
|
356 |
+
# if curr_frame > 14:
|
357 |
+
# import pdb;pdb.set_trace()
|
358 |
+
|
359 |
+
# if xs_pred_back is not None:
|
360 |
+
# xs_pred = torch.cat([xs_pred[:6], xs_pred_back[6:12], xs_pred[6:]], dim=0)
|
361 |
+
|
362 |
+
# import pdb;pdb.set_trace()
|
363 |
+
if condition_similar_length: # and curr_frame+1!=n_frames:
|
364 |
+
xs_pred = xs_pred[:-condition_similar_length]
|
365 |
+
|
366 |
+
curr_frame += horizon
|
367 |
+
pbar.update(horizon)
|
368 |
+
|
369 |
+
self.frames = torch.cat([self.frames, xs_pred[n_context_frames:]])
|
370 |
+
|
371 |
+
xs_pred = self.decode(xs_pred[n_context_frames:])
|
372 |
+
|
373 |
+
return xs_pred[-1,0].cpu()
|
374 |
+
|
app.py
ADDED
@@ -0,0 +1,365 @@
import gradio as gr
import time

import os
import sys
import subprocess
from pathlib import Path

import hydra
from omegaconf import DictConfig, OmegaConf
from omegaconf.omegaconf import open_dict

from utils.print_utils import cyan
from utils.ckpt_utils import download_latest_checkpoint, is_run_id
from utils.cluster_utils import submit_slurm_job
from utils.distributed_utils import is_rank_zero
import numpy as np
import torch
from datasets.video.minecraft_video_dataset import *
import torchvision.transforms as transforms
import cv2
from PIL import Image
from datetime import datetime

ACTION_KEYS = [
    "inventory",
    "ESC",
    "hotbar.1",
    "hotbar.2",
    "hotbar.3",
    "hotbar.4",
    "hotbar.5",
    "hotbar.6",
    "hotbar.7",
    "hotbar.8",
    "hotbar.9",
    "forward",
    "back",
    "left",
    "right",
    "cameraY",
    "cameraX",
    "jump",
    "sneak",
    "sprint",
    "swapHands",
    "attack",
    "use",
    "pickItem",
    "drop",
]

# Mapping of input keys to action names
KEY_TO_ACTION = {
    "Q": ("forward", 1),
    "E": ("back", 1),
    "W": ("cameraY", -1),
    "S": ("cameraY", 1),
    "A": ("cameraX", -1),
    "D": ("cameraX", 1),
    "U": ("drop", 1),
    "N": ("noop", 1),
    "1": ("hotbar.1", 1),
}


def parse_input_to_tensor(input_str):
    """
    Convert an input string into a (sequence_length, 25) tensor, where each row is a one-hot representation
    of the corresponding action key.

    Args:
        input_str (str): A string of action keys (e.g., "WASDWS").

    Returns:
        torch.Tensor: A tensor of shape (sequence_length, 25), where each row is a one-hot encoded action.
    """
    # Get the length of the input sequence
    seq_len = len(input_str)

    # Initialize a zero tensor of shape (seq_len, 25)
    action_tensor = torch.zeros((seq_len, 25))

    # Iterate through the input string and update the corresponding positions
    for i, char in enumerate(input_str):
        # Convert to uppercase to handle case insensitivity; unmapped keys fall back to no action
        action, value = KEY_TO_ACTION.get(char.upper(), (None, 0))
        if action and action in ACTION_KEYS:
            index = ACTION_KEYS.index(action)
            action_tensor[i, index] = value  # Set the corresponding action index

    return action_tensor


def load_image_as_tensor(image_path: str) -> torch.Tensor:
    """
    Load an image and convert it to a 0-1 normalized tensor.

    Args:
        image_path (str): Path to the image file.

    Returns:
        torch.Tensor: Image tensor of shape (C, H, W), normalized to [0, 1].
    """
    if isinstance(image_path, str):
        image = Image.open(image_path).convert("RGB")  # Ensure it's RGB
    else:
        image = image_path
    transform = transforms.Compose([
        transforms.ToTensor(),  # Converts to tensor and normalizes to [0, 1]
    ])
    return transform(image)


def run_local(cfg: DictConfig):
    # delay some imports in case they are not needed in non-local envs for submission
    from experiments import build_experiment

    # Get yaml names
    hydra_cfg = hydra.core.hydra_config.HydraConfig.get()
    cfg_choice = OmegaConf.to_container(hydra_cfg.runtime.choices)

    with open_dict(cfg):
        if cfg_choice["experiment"] is not None:
            cfg.experiment._name = cfg_choice["experiment"]
        if cfg_choice["dataset"] is not None:
            cfg.dataset._name = cfg_choice["dataset"]
        if cfg_choice["algorithm"] is not None:
            cfg.algorithm._name = cfg_choice["algorithm"]

    # launch experiment
    experiment = build_experiment(cfg, None, cfg.checkpoint_path)
    return experiment.exec_interactive(cfg.experiment.tasks[0])


memory_frames = []
memory_curr_frame = 0
input_history = ""
ICE_PLAINS_IMAGE = "assets/ice_plains.png"
DESERT_IMAGE = "assets/desert.png"
SAVANNA_IMAGE = "assets/savanna.png"
PLAINS_IMAGE = "assets/plans.png"
PLACE_IMAGE = "assets/place.png"
SUNFLOWERS_IMAGE = "assets/sunflower_plains.png"
SUNFLOWERS_RAIN_IMAGE = "assets/rain_sunflower_plains.png"

DEFAULT_IMAGE = ICE_PLAINS_IMAGE
device = "cuda:0"


def save_video(frames, path="output.mp4", fps=10):
    h, w, _ = frames[0].shape
    out = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*'XVID'), fps, (w, h))
    for frame in frames:
        out.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    out.release()

    ffmpeg_cmd = [
        "ffmpeg", "-y", "-i", path, "-c:v", "libx264", "-crf", "23", "-preset", "medium", path
    ]
    subprocess.run(ffmpeg_cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return path


@hydra.main(
    version_base=None,
    config_path="configurations",
    config_name="config",
)
def run(cfg: DictConfig):
    algo = run_local(cfg)
    algo.to("cuda:0")

    actions = torch.zeros((1, 25))
    poses = torch.zeros((1, 5))

    memory_frames.append(load_image_as_tensor(DEFAULT_IMAGE))

    _ = algo.interactive(memory_frames[0],
                         actions[0],
                         poses[0],
                         memory_curr_frame,
                         device="cuda:0")

    def set_denoising_steps(denoising_steps, sampling_timesteps_state):
        algo.sampling_timesteps = denoising_steps
        algo.diffusion_model.sampling_timesteps = denoising_steps
        sampling_timesteps_state = denoising_steps
        print("set denoising steps to", algo.sampling_timesteps)
        return sampling_timesteps_state

    def update_image_and_log(keys):
        actions = parse_input_to_tensor(keys)
        global input_history
        global memory_curr_frame
        for i in range(len(actions)):
            memory_curr_frame += 1
            new_frame = algo.interactive(memory_frames[0],
                                         actions[i],
                                         None,
                                         memory_curr_frame,
                                         device="cuda:0")

            memory_frames.append(new_frame)

        out_video = torch.stack(memory_frames)
        out_video = out_video.permute(0, 2, 3, 1).numpy()
        out_video = np.clip(out_video, a_min=0.0, a_max=1.0)
        out_video = (out_video * 255).astype(np.uint8)

        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        os.makedirs("outputs_gradio", exist_ok=True)
        filename = f"outputs_gradio/{timestamp}.mp4"
        save_video(out_video, filename)

        input_history += keys
        return out_video[-1], filename, input_history

    def reset():
        global memory_curr_frame
        global input_history
        global memory_frames

        algo.reset()
        memory_frames = []
        memory_frames.append(load_image_as_tensor(DEFAULT_IMAGE))
        memory_curr_frame = 0
        input_history = ""

        _ = algo.interactive(memory_frames[0],
                             actions[0],
                             poses[0],
                             memory_curr_frame,
                             device="cuda:0")
        return input_history, DEFAULT_IMAGE

    def on_image_click(SELECTED_IMAGE):
        global DEFAULT_IMAGE
        DEFAULT_IMAGE = SELECTED_IMAGE
        reset()
        return SELECTED_IMAGE

    css = """
    h1 {
        text-align: center;
        display: block;
    }
    """

    # update_image_and_log("W")
    with gr.Blocks(css=css) as demo:
        gr.Markdown(
            """
            # WORLDMEM: Long-term Consistent World Generation with Memory

            <div style="text-align: center;">
                <!-- Public Website -->
                <a style="display:inline-block" href="https://nirvanalan.github.io/projects/GA/">
                    <img src="https://img.shields.io/badge/public_website-8A2BE2">
                </a>

                <!-- GitHub Stars -->
                <a style="display:inline-block; margin-left: .5em" href="https://github.com/NIRVANALAN/GaussianAnything">
                    <img src="https://img.shields.io/github/stars/NIRVANALAN/GaussianAnything?style=social">
                </a>

                <!-- Project Page -->
                <a style="display:inline-block; margin-left: .5em" href="https://nirvanalan.github.io/projects/GA/">
                    <img src="https://img.shields.io/badge/project_page-blue">
                </a>

                <!-- arXiv Paper -->
                <a style="display:inline-block; margin-left: .5em" href="https://arxiv.org/abs/XXXX.XXXXX">
                    <img src="https://img.shields.io/badge/arXiv-paper-red">
                </a>
            </div>
            """
        )

        with gr.Row(variant="panel"):
            video_display = gr.Video(autoplay=True, loop=True)
            image_display = gr.Image(value=DEFAULT_IMAGE, interactive=False, label="Last Frame")

        with gr.Row(variant="panel"):
            with gr.Column(scale=2):
                input_box = gr.Textbox(label="Action Sequence", placeholder="Enter action sequence here...", lines=1, max_lines=1)
                log_output = gr.Textbox(label="History Log", interactive=False)
            with gr.Column(scale=1):
                slider = gr.Slider(minimum=10, maximum=50, value=algo.sampling_timesteps, step=1, label="Denoising Steps")
                submit_button = gr.Button("Generate")
                reset_btn = gr.Button("Reset")

        sampling_timesteps_state = gr.State(algo.sampling_timesteps)

        example_actions = ["DDDDDDDDEEEEEEEEEESSSAAAAAAAAWWW", "DDDDDDDDDDDDQQQQQQQQQQQQQQQDDDDDDDDDDDD",
                           "DDDDWWWDDDDDDDDDDDDDDDDDDDDSSSAAAAAAAAAAAAAAAAAAAAAAAA", "SSUNNWWEEEEEEEEEAAASSUNNWWEEEEEEEEEAAAAAAAAAAAAAAAAAAAAAA"]

        def set_action(action):
            return action

        gr.Markdown("### Action sequence examples.")
        with gr.Row():
            buttons = []
            for action in example_actions[:2]:
                with gr.Column(scale=len(action)):
                    buttons.append(gr.Button(action))
        with gr.Row():
            for action in example_actions[2:4]:
                with gr.Column(scale=len(action)):
                    buttons.append(gr.Button(action))
        with gr.Row():
            for action in example_actions[4:5]:
                with gr.Column(scale=len(action)):
                    buttons.append(gr.Button(action))

        for button, action in zip(buttons, example_actions):
            button.click(set_action, inputs=[gr.State(value=action)], outputs=input_box)

        gr.Markdown("### Click on the images below to reset the sequence and generate from the new image.")

        with gr.Row():
            image_display_1 = gr.Image(value=SUNFLOWERS_IMAGE, interactive=False, label="Sunflower Plains")
            image_display_2 = gr.Image(value=DESERT_IMAGE, interactive=False, label="Desert")
            image_display_3 = gr.Image(value=SAVANNA_IMAGE, interactive=False, label="Savanna")
            image_display_4 = gr.Image(value=ICE_PLAINS_IMAGE, interactive=False, label="Ice Plains")
            image_display_5 = gr.Image(value=SUNFLOWERS_RAIN_IMAGE, interactive=False, label="Rainy Sunflower Plains")
            image_display_6 = gr.Image(value=PLACE_IMAGE, interactive=False, label="Place")

        gr.Markdown(
            """
            ## Instructions & Notes:

            1. Enter an action sequence in the **"Action Sequence"** text box and click **"Generate"** to begin.
            2. You can continue generation by clicking **"Generate"** again and again. Previous sequences are logged in the history panel.
            3. Click **"Reset"** to clear the current sequence and start fresh.
            4. Action sequences can be composed using the following keys:
                - W: turn up
                - S: turn down
                - A: turn left
                - D: turn right
                - Q: move forward
                - E: move backward
                - N: no-op (do nothing)
                - 1: switch to hotbar 1
                - U: use item
            5. More denoising steps produce more detailed results but take longer. **20 steps** is a good balance between quality and speed.
            6. If you find this project interesting or useful, please consider giving it a ⭐️ on [GitHub]()!
            7. For feedback or suggestions, feel free to open a GitHub issue or contact me directly at **[email protected]**.
            """
        )
        # input_box.submit(update_image_and_log, inputs=[input_box], outputs=[image_display, video_display, log_output])
        submit_button.click(update_image_and_log, inputs=[input_box], outputs=[image_display, video_display, log_output])
        reset_btn.click(reset, outputs=[log_output, image_display])
        image_display_1.select(lambda: on_image_click(SUNFLOWERS_IMAGE), outputs=image_display)
        image_display_2.select(lambda: on_image_click(DESERT_IMAGE), outputs=image_display)
        image_display_3.select(lambda: on_image_click(SAVANNA_IMAGE), outputs=image_display)
        image_display_4.select(lambda: on_image_click(ICE_PLAINS_IMAGE), outputs=image_display)
        image_display_5.select(lambda: on_image_click(SUNFLOWERS_RAIN_IMAGE), outputs=image_display)
        image_display_6.select(lambda: on_image_click(PLACE_IMAGE), outputs=image_display)

        slider.change(fn=set_denoising_steps, inputs=[slider, sampling_timesteps_state], outputs=sampling_timesteps_state)

    # Allow public access
    demo.launch(share=True)
    demo.launch(server_name="0.0.0.0", server_port=30066)


if __name__ == "__main__":
    run()  # pylint: disable=no-value-for-parameter
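For reference, a minimal usage sketch of the key-to-action mapping defined in app.py above. This snippet is illustrative and not part of the commit; it assumes the repository root is on PYTHONPATH so that app.py and its dependencies import cleanly, and the expected outputs follow from the KEY_TO_ACTION and ACTION_KEYS tables.

# illustrative sketch, not part of the commit
from app import ACTION_KEYS, parse_input_to_tensor

actions = parse_input_to_tensor("QQDD")           # forward, forward, turn right, turn right
print(actions.shape)                              # torch.Size([4, 25])
print(ACTION_KEYS[actions[0].argmax().item()])    # "forward"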
app.sh
ADDED
@@ -0,0 +1,50 @@
wandb disabled
# srun -p a6000_xgpan -w MICL-PanXGSvr2 --gres=gpu:1 --ntasks-per-node=1 --cpus-per-task=8 \
export WANDB_API_KEY=a4f0741e80f509317597ad944a7292fabcb68bdf

CHECKPOINT_PATH="checkpoints/diffusion_only.ckpt"

python -m app +name=pumpkin \
    algorithm=df_video_worldmemminecraft \
    +checkpoint_path=$CHECKPOINT_PATH \
    experiment.tasks=[interactive] \
    dataset.validation_multiplier=1 \
    dataset=video_minecraft \
    +customized_load=true \
    +dataset.n_frames_valid=100 \
    +algorithm.n_tokens=8 \
    +load_vae=false \
    +load_t_to_r=false \
    +zero_init_gate=false \
    experiment.validation.batch_size=1 \
    +algorithm.pose_cond_dim=5 \
    +algorithm.condition_similar_length=8 \
    +dataset.condition_similar_length=8 \
    +algorithm.use_plucker=true \
    +dataset.use_plucker=true \
    +dataset.padding_pool=10 \
    +dataset.focal_length=0.35 \
    +algorithm.focal_length=0.35 \
    +only_tune_refer=false \
    +dataset.customized_validation=true \
    +algorithm.customized_validation=true \
    algorithm.context_frames=90 \
    +algorithm.vis_gt=true \
    +algorithm.relative_embedding=true \
    dataset.save_dir=data/test_pumpkin \
    +algorithm.log_video=true \
    experiment.training.data.num_workers=4 \
    experiment.validation.data.num_workers=4 \
    +dataset.angle_range=30 \
    +dataset.pos_range=0.5 \
    +algorithm.cond_only_on_qk=true \
    +algorithm.add_pose_embed=false \
    +algorithm.use_domain_adapter=false \
    +algorithm.use_reference_attention=true \
    +algorithm.add_frame_timestep_embedder=true \
    +dataset.add_frame_timestep_embedder=true \
    experiment.validation.limit_batch=1 \
    algorithm.diffusion.sampling_timesteps=20 \
    +algorithm.is_interactive=true \
    +vae_path=checkpoints/vae_only.ckpt \
    +pose_predictor_path=checkpoints/pose_prediction_model_only.ckpt
configurations/README.md
ADDED
@@ -0,0 +1,7 @@
# configurations

We use [Hydra](https://hydra.cc/docs/intro/) to manage configurations. Change or add YAML files in this folder
to modify the default configuration. You can also override the defaults by
passing command-line arguments.

All configurations are automatically saved to the wandb run.
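For example, overriding defaults from the command line follows the same syntax as app.sh above. A minimal sketch, using only keys that already appear in app.sh: `+key=value` adds a key that is not in the default config, while a bare `key=value` overrides an existing one.

python -m app +name=demo \
    algorithm=df_video_worldmemminecraft \
    dataset=video_minecraft \
    algorithm.diffusion.sampling_timesteps=20 \
    +checkpoint_path=checkpoints/diffusion_only.ckpt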