---
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
- Wan-AI/Wan2.1-T2V-1.3B-Diffusers
library_name: diffusers
pipeline_tag: text-to-video
---

<p align="center">
  <img src="https://github.com/mkturkcan/suturingmodels/blob/main/static/images/title.svg?raw=true" />
</p>

# Towards Suturing World Models (Wan, t2v)

<p align="center">
  <img src="https://github.com/mkturkcan/suturingmodels/blob/main/static/images/lora_sample.jpg?raw=true" />
</p>

This repository hosts the fine-tuned Wan2.1-T2V-1.3B text-to-video (t2v) diffusion model specialized for generating realistic robotic surgical suturing videos. The model captures fine-grained sub-stitch actions, including needle positioning, targeting, driving, and withdrawal, and can differentiate between ideal and non-ideal surgical technique, making it suitable for surgical training, skill evaluation, and the development of autonomous surgical systems.

## Model Details

- **Base Model**: Wan2.1-T2V-1.3B
- **Resolution**: 768×512 pixels (adjustable)
- **Frame Length**: 49 frames per generated video (adjustable)
- **Fine-tuning Method**: Low-Rank Adaptation (LoRA)
- **Data Source**: Annotated laparoscopic surgery exercise videos (~2,000 clips)

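The training prompts follow a compact template that concatenates a sub-stitch action with an ideal/non-ideal label and names the source task, as in the example prompt used below ("A needledrivingnonideal clip, generated from a backhand task."). A small helper can assemble such prompts; note that the action and quality vocabularies here are assumptions inferred from the sub-stitch actions listed above — only `needledriving` + `nonideal` + `backhand` appear verbatim in this card:

```python
# Sketch of a prompt builder for the sub-stitch prompt template.
# The vocabularies below are assumptions inferred from this model card;
# only "needledriving", "nonideal", and "backhand" appear verbatim.
ACTIONS = ["needlepositioning", "needletargeting", "needledriving", "needlewithdrawal"]
QUALITIES = ["ideal", "nonideal"]

def build_prompt(action: str, quality: str, task: str) -> str:
    """Assemble a prompt in the card's action+quality concatenated style."""
    if action not in ACTIONS or quality not in QUALITIES:
        raise ValueError("unknown action or quality label")
    return f"A {action}{quality} clip, generated from a {task} task."

print(build_prompt("needledriving", "nonideal", "backhand"))
# → A needledrivingnonideal clip, generated from a backhand task.
```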
## Usage Example

```python
import torch
from diffsynth import ModelManager, WanVideoPipeline, save_video

# Load the base Wan2.1-T2V-1.3B weights (adjust the paths to your local checkout).
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cpu")
model_manager.load_models([
    "../Wan2.1-T2V-1.3B/diffusion_pytorch_model.safetensors",
    "../Wan2.1-T2V-1.3B/models_t5_umt5-xxl-enc-bf16.pth",
    "../Wan2.1-T2V-1.3B/Wan2.1_VAE.pth",
])

# Apply the suturing LoRA, then build the pipeline on the GPU.
model_manager.load_lora("mehmetkeremturkcan/Suturing-WAN-T2V", lora_alpha=1.0)
pipe = WanVideoPipeline.from_model_manager(model_manager, device="cuda")
pipe.enable_vram_management(num_persistent_param_in_dit=None)

video = pipe(
    prompt="A needledrivingnonideal clip, generated from a backhand task.",
    num_inference_steps=50,
    tiled=True,
)
save_video(video, "video.mp4", fps=30, quality=5)
```

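The default settings above map to a fairly small latent volume. Assuming the base Wan2.1 VAE conventions (4× temporal and 8× per-axis spatial compression, with valid frame counts of the form 4k + 1 — properties of the base model release, not stated in this card), the latent shape can be sanity-checked as follows:

```python
# Latent-shape sanity check for the default generation settings.
# Assumes the base Wan2.1 VAE conventions: 4x temporal and 8x per-axis
# spatial compression, frame counts of the form 4k + 1.
def latent_shape(num_frames: int, height: int, width: int):
    assert (num_frames - 1) % 4 == 0, "num_frames must be of the form 4k + 1"
    assert height % 8 == 0 and width % 8 == 0, "dims must be divisible by 8"
    return ((num_frames - 1) // 4 + 1, height // 8, width // 8)

print(latent_shape(49, 512, 768))  # → (13, 64, 96)
```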
## Applications

- **Surgical Training**: Generate demonstrations of both ideal and non-ideal surgical techniques for training purposes.
- **Skill Evaluation**: Assess surgical skills by comparing actual procedures against model-generated standards.
- **Robotic Automation**: Inform autonomous surgical robotic systems for real-time guidance and procedure automation.

## Quantitative Performance

| Metric                 | Performance            |
|------------------------|------------------------|
| L2 Reconstruction Loss | 0.0667                 |
| Inference Time         | ~360 seconds per video |

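The inference figure puts the current model well short of real time: a 49-frame clip plays back in about 1.6 seconds at 30 fps but takes roughly 360 seconds to generate, i.e. around a 220× slowdown. A quick check from the reported numbers:

```python
# Rough real-time factor from the card's reported numbers:
# a 49-frame clip saved at 30 fps versus ~360 s of inference time.
frames, fps, inference_s = 49, 30, 360
playback_s = frames / fps            # seconds of video produced
slowdown = inference_s / playback_s  # how many times slower than real time
print(round(playback_s, 2), round(slowdown))  # → 1.63 220
```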
## Future Directions

Further improvements will focus on increasing model robustness, expanding dataset diversity, and enhancing real-time applicability to robotic surgical scenarios.