aotimme committed (verified)
Commit c4eee08 · Parent(s): 1d69245

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -11,11 +11,11 @@ tags:
   - pathology
 license: apache-2.0
 ---
-# cx-ts:tcga-v1
+# CxTissueSeg

 ## Overview

-The **cx-ts:tcga-v1** model performs binary segmentation of patches of tissue present in [H&E](https://en.wikipedia.org/wiki/H%26E_stain) pathology slides.
+The **CxTissueSeg** model performs binary segmentation of patches of tissue present in [H&E](https://en.wikipedia.org/wiki/H%26E_stain) pathology slides.
 It is architected to run efficiently on resource constrained systems, providing tissue segmentation on a slide in under 1 second on a typical CPU.

 The model is trained on a manually curated set of slides from [our linked dataset](https://huggingface.co/datasets/conflux-xyz/tcga-tissue-segmentation), where it achieves 0.93 mIoU for tissue on the test split.
@@ -36,7 +36,7 @@ For more details on the background of the model, check out the blog post here: h

 ## Usage

-**cx-ts:tgca-v1** was trained on 512 x 512 pixel patches from thumbnail images of whole slides at 40 microns per pixel (MPP) -- a 4x downsample from the images in the dataset.
+**CxTissueSeg** was trained on 512 x 512 pixel patches from thumbnail images of whole slides at 40 microns per pixel (MPP) -- a 4x downsample from the images in the dataset.
 Thus, it is important when running inference with the model to run it on 40 MPP thumbnails and run inference on tiles of the same dimension (512 x 512).
 When padding tiles, pad with pure white: `rgb(255, 255, 255)`.
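
The tiling and padding step described in the Usage section can be sketched as follows. This is a minimal sketch using NumPy: `tile_thumbnail` is a hypothetical helper name (not part of the model's published API), and the actual model inference call on each tile is omitted.

```python
import numpy as np

TILE = 512  # model input size per the README (512 x 512 pixels)


def tile_thumbnail(img: np.ndarray, tile: int = TILE) -> list[np.ndarray]:
    """Split an (H, W, 3) uint8 thumbnail (assumed already at 40 MPP)
    into tile x tile patches, padding edge tiles with pure white
    rgb(255, 255, 255) as the README specifies."""
    h, w = img.shape[:2]
    tiles = []
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            # Start from an all-white patch, then paste the crop into its
            # top-left corner; edge tiles keep white in the uncovered area.
            patch = np.full((tile, tile, 3), 255, dtype=np.uint8)
            crop = img[top:top + tile, left:left + tile]
            patch[:crop.shape[0], :crop.shape[1]] = crop
            tiles.append(patch)
    return tiles
```

Each returned patch is exactly 512 × 512, so every tile matches the training input dimensions regardless of the thumbnail's size.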