Deep Spatiotemporal Clutter Filtering Network

Architecture of the proposed spatiotemporal clutter filtering network (figure: Clutter_filter). This fully convolutional autoencoder, based on the 3D U-Net, is designed to generate filtered TTE sequences that are coherent in both space and time. An input-output skip connection was incorporated to preserve fine image structures, while attention gate (AG) modules enable the network to focus on clutter zones and leverage contextual information for efficient image reconstruction. The size of the max-pooling window was set to (2x2x1) to preserve the original temporal dimension (i.e., the number of frames) of the input TTE sequences at all levels of the encoding path.
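The attention gating and pooling choices described above can be illustrated with a short PyTorch sketch. The channel sizes, tensor layout (frames on the last axis) and the exact AG formulation below are illustrative assumptions, not the released implementation:

```python
# Minimal sketch of two components described above: an additive 3D attention
# gate and the (2x2x1) max-pooling that keeps the temporal dimension intact.
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Additive attention gate applied to 3D (space + time) feature maps."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, g, x):
        # g: gating signal from the decoder (assumed already upsampled to the
        # resolution of x); x: skip feature map from the encoder.
        att = torch.sigmoid(self.psi(torch.relu(self.w_g(g) + self.w_x(x))))
        return x * att  # attenuate skip features outside the attended regions

# A (2, 2, 1) max-pooling window halves the spatial dimensions but preserves
# the number of frames at every level of the encoding path.
pool = nn.MaxPool3d(kernel_size=(2, 2, 1))
x = torch.randn(1, 16, 128, 128, 20)   # (batch, channels, H, W, frames)
print(pool(x).shape)                   # torch.Size([1, 16, 64, 64, 20])

ag = AttentionGate3D(gate_ch=32, skip_ch=16, inter_ch=16)
g = torch.randn(1, 32, 64, 64, 20)     # decoder feature map
skip = torch.randn(1, 16, 64, 64, 20)  # encoder skip feature map
print(ag(g, skip).shape)               # torch.Size([1, 16, 64, 64, 20])
```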

Results

(Figures: Filtered_eg1 and Filtered_eg2)

(a) Examples of the cluttered test frames and (b)-(h) the clutter-filtered frames of the six vendors generated by the examined deep networks. (b), (c) and (d) The proposed 3D filter trained with L_rec, L_rec&adv and L_rec&prc, respectively. (e), (f) and (g) The 3D benchmark networks. (h) The 2D benchmark network. (i) The clutter-free frames. For each vendor, absolute difference images computed from the clutter-filtered and clutter-free frames are shown in the row below the respective filtered frames.
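In the loss notation above, L_rec denotes the reconstruction loss, while L_rec&adv and L_rec&prc denote the reconstruction loss combined with an adversarial or perceptual term, respectively. A minimal sketch of such a weighted combination is given below; the L1 reconstruction term and the weight value are illustrative assumptions, not the paper's exact settings:

```python
# Hedged sketch of combining a reconstruction loss with an auxiliary term
# (adversarial or perceptual). Weight and loss choices are assumptions.
import torch.nn.functional as F

def combined_loss(filtered, clutter_free, aux_loss=None, aux_weight=0.01):
    l_rec = F.l1_loss(filtered, clutter_free)   # L_rec
    if aux_loss is None:                        # plain L_rec training
        return l_rec
    return l_rec + aux_weight * aux_loss        # L_rec&adv or L_rec&prc
```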

Example video clips of (a) the cluttered and (d) clutter-free TTE sequences, together with the filtering results generated by (b) the proposed 3D filter and (c) the 2D filter (both trained with the in-out skip connection, the AG module and the reconstruction loss), can be found in the Filtering_results_videos/synthetic directory. The row below the filtered frames shows the absolute difference between the clutter-filtered and clutter-free frames.

Example video clips of (a) the in-vivo TTE sequences of four different subjects, which are contaminated by the NF and/or RL clutter patterns, along with the filtering results generated by (b) the proposed 3D filter and (c) the 2D filter, can be found in the Filtering_results_videos/in-vivo directory. Absolute differences between the cluttered and clutter-filtered frames are shown below the filtered frames.
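The difference images shown below the filtered frames are simple per-pixel absolute differences; a minimal sketch is given below (array names and sizes are placeholders):

```python
# Per-pixel absolute difference between a filtered frame and its reference
# (clutter-free for the synthetic data, cluttered for the in-vivo data).
import numpy as np

filtered = np.random.rand(128, 128).astype(np.float32)   # placeholder frame
reference = np.random.rand(128, 128).astype(np.float32)  # placeholder frame
difference_image = np.abs(filtered - reference)
```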

Installation

After cloning the repository, run the following command:

pip install -r requirements.txt

To train each filter using the synthetic data, provide the directory of the data and a directory for saving the results in the config.json of that filter. Then, after changing to that filter's directory, run the following command (e.g., for the 3D filter with the reconstruction loss):

python TrainClutterFilter3D.py --config config.json
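For reference, config.json is expected to point at least to the data directory and a results directory. The following is a hypothetical sketch of preparing such a file; the key names are assumptions and may differ from those actually read by TrainClutterFilter3D.py:

```python
# Hypothetical sketch of writing a minimal config.json; key names are
# assumptions and may not match the keys expected by the training script.
import json

config = {
    "data_dir": "/path/to/synthetic_data",   # directory of the training data
    "results_dir": "/path/to/results",       # directory for saving the results
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```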

Citation

Please cite as:

@misc{tabassian2025deepspatiotemporalclutterfiltering,
      title={Deep Spatiotemporal Clutter Filtering of Transthoracic Echocardiographic Images: Leveraging Contextual Attention and Residual Learning}, 
      author={Mahdi Tabassian and Somayeh Akbari and Sandro Queirós and Jan D'hooge},
      year={2025},
      eprint={2401.13147},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2401.13147}, 
}