erotemic committed
Commit 3e73353 · 1 Parent(s): 80f5063

add metadata
Files changed (2)
  1. DATASHEET.md +336 -0
  2. README.md +21 -10
DATASHEET.md ADDED
@@ -0,0 +1,336 @@
# Datasheet: ShitSpotter

Template: [JRMeyer/markdown-datasheet-for-datasets](https://github.com/JRMeyer/markdown-datasheet-for-datasets), based on [Datasheets for Datasets by Gebru et al.](https://arxiv.org/abs/1803.09010).

Author: Jon Crall

Organization: Kitware


## Motivation

*The questions in this section are primarily intended to encourage dataset creators to clearly articulate their reasons for creating the dataset and to promote transparency about funding interests.*

1. **For what purpose was the dataset created?** Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.

    There are several reasons:

    1. To provide training / validation data for poop detection / segmentation networks.

    2. To experiment with distributing datasets over IPFS.

    3. To provide an interesting, challenging, and humorous problem / benchmark.

2. **Who created this dataset (e.g. which team, research group) and on behalf of which entity (e.g. company, institution, organization)?**

    The primary author - Jon Crall - collected most images.
    Several contributions have been made by other people: friends, family, acquaintances, and colleagues.

    The dataset was not created on behalf of any entity.

3. **What support was needed to make this dataset?** (e.g. who funded the creation of the dataset? If there is an associated grant, provide the name of the grantor and the grant name and number, or if it was supported by a company or government agency, give those details.)

    There is currently no funding. All effort is based on volunteer time.

4. **Any other comments?**

    The main reason Jon Crall started collecting the dataset was that he
    wasn't able to find where his dogs pooped in the fall because of all the
    leaves.

    See the README.

## Composition

*Dataset creators should read through the questions in this section prior to any data collection and then provide answers once collection is complete. Most of these questions are intended to provide dataset consumers with the information they need to make informed decisions about using the dataset for specific tasks. The answers to some of these questions reveal information about compliance with the EU’s General Data Protection Regulation (GDPR) or comparable regulations in other jurisdictions.*

1. **What do the instances that comprise the dataset represent (e.g. documents, photos, people, countries)?** Are there multiple types of instances (e.g. movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.

    The exact answer to this question may change as the dataset grows.

    Mostly dog feces. Feces from other species are represented, but not well
    supported by labels yet. Specifically, there are images of horse and
    raccoon poop. It is possible that some of the images labeled as dog poop
    are actually feces from another species. If another animal pooped in a
    public area and a contributor came across it, it may have been mistaken
    for dog poop. It would be interesting if some of these cases could be
    identified.

2. **How many instances are there in total (of each type, if appropriate)?**

    The answer to this will change as the dataset grows. Thus, it is
    important to answer this question programmatically. We provide the
    command and a recent output with a timestamp of its generation.

    ```
    cd $HOME/code/shitspotter/shitspotter_dvc
    date
    kwcoco stats data.kwcoco.json

    n_anns  n_cats  n_imgs  n_tracks  n_videos
      4386       3    6648         0         0
    ```

    1986 Poop Annotations. @ Sun Dec 10 07:20:05 PM EST 2023

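    For consumers who prefer the Python API over the CLI, a minimal sketch
    that computes the same counts (assuming `kwcoco` is installed and the
    manifest path matches the command above):

    ```python
    import kwcoco

    # Load the kwcoco manifest for the dataset.
    dset = kwcoco.CocoDataset('data.kwcoco.json')

    # basic_stats() reports counts such as n_anns, n_imgs, and n_cats.
    print(dset.basic_stats())
    ```
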
3. **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g. geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g. to cover a more diverse range of instances, because instances were withheld or unavailable).

    No, it is a sample of stool, mostly from Albany, NY.

4. **What data does each instance consist of?** "Raw" data (e.g. unprocessed text or images) or features? In either case, please provide a description.

    Each instance is a "raw" phone image, and each annotation is a polygon.

5. **Is there a label or target associated with each instance?** If so, please provide a description.

    Currently, polygons are only labeled as poop. This may change in the future.

6. **Is any information missing from individual instances?** If so, please provide a description, explaining why this information is missing (e.g. because it was unavailable). This does not include intentionally removed information, but might include, e.g. redacted text.

    Yes, the identity of the dog that pooped was not recorded - and sometimes
    is unavailable.

7. **Are relationships between individual instances made explicit (e.g. users' movie ratings, social network links)?** If so, please describe how these relationships are made explicit.

    No. I don't think that applies here.

8. **Are there recommended data splits (e.g. training, development/validation, testing)?** If so, please provide a description of these splits, explaining the rationale behind them.

    Yes. We currently suggest that data from 2021, 2022, and 2023 go in the
    training set and that data from 2020 be used for validation. Data from
    2024 is split between training and validation: on the nth day of the
    year, images go to the validation set if n % 3 == 0 and to the training
    set otherwise, as sketched below.

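    A minimal sketch of this split rule (illustrative; the function name is
    ours, not part of the project's API):

    ```python
    import datetime

    def assign_split(date: datetime.date) -> str:
        """Assign an image to train or validation based on its capture date."""
        if date.year in (2021, 2022, 2023):
            return 'train'
        if date.year == 2020:
            return 'validation'
        if date.year == 2024:
            # nth day of the year: every third day goes to validation.
            n = date.timetuple().tm_yday
            return 'validation' if n % 3 == 0 else 'train'
        raise ValueError(f'no split rule for year {date.year}')

    assert assign_split(datetime.date(2024, 1, 3)) == 'validation'  # day 3
    assert assign_split(datetime.date(2024, 1, 4)) == 'train'       # day 4
    ```
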
9. **Are there any errors, sources of noise, or redundancies in the dataset?** If so, please provide a description.

    Yes, some images were not taken according to the "before/after/negative" protocol.

10. **Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g. websites, tweets, other datasets)?** If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g. licenses, fees) associated with any of the external resources that might apply to a future user? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.

    Yes, it is a self-contained dataset.

11. **Does the dataset contain data that might be considered confidential (e.g. data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)?** If so, please provide a description.

    No, I don't think so.

12. **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** If so, please describe why.

    Yes. Some people might find poop offensive, and viewing it may cause anxiety.

13. **Does the dataset relate to people?** If not, you may skip the remaining questions in this section.

    Mostly no. Sometimes people do appear in photos incidentally.

14. **Does the dataset identify any subpopulations (e.g. by age, gender)?** If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.

    No.

15. **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** If so, please describe how.

    Possibly, but it would be very difficult.

16. **Does the dataset contain data that might be considered sensitive in any way (e.g. data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** If so, please provide a description.

    No. I don't think that images of poop qualify as sensitive.

17. **Any other comments?**

    Everybody poops.

## Collection

*As with the previous section, dataset creators should read through these questions prior to any data collection to flag potential issues and then provide answers once collection is complete. In addition to the goals of the prior section, the answers to questions here may provide information that allow others to reconstruct the dataset without access to it.*

1. **How was the data associated with each instance acquired?** Was the data directly observable (e.g. raw text, movie ratings), reported by subjects (e.g. survey responses), or indirectly inferred/derived from other data (e.g. part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.

    Images were labeled with both manual and AI-assisted (Segment Anything Model) polygons, as sketched below.

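    To illustrate what AI-assisted polygon labeling can look like, a rough
    sketch using the public `segment_anything` package (the checkpoint path,
    image, and click coordinates are placeholders; this is not necessarily
    the project's exact pipeline):

    ```python
    import numpy as np
    from segment_anything import SamPredictor, sam_model_registry

    # Load a SAM checkpoint (placeholder path).
    sam = sam_model_registry['vit_b'](checkpoint='sam_vit_b.pth')
    predictor = SamPredictor(sam)

    image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a phone photo
    predictor.set_image(image)

    # One foreground click on the object of interest (placeholder coordinates).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]),
        point_labels=np.array([1]),
    )
    # The best-scoring binary mask can then be traced into a polygon annotation.
    best_mask = masks[scores.argmax()]
    ```
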
2. **What mechanisms or procedures were used to collect the data (e.g. hardware apparatus or sensor, manual human curation, software program, software API)?** How were these mechanisms or procedures validated?

    A phone camera (details are recorded in metadata) and the LabelMe segmentation tool.

3. **If the dataset is a sample from a larger set, what was the sampling strategy (e.g. deterministic, probabilistic with specific sampling probabilities)?**

    It is a subset of the set of all possible images of dog poop. It is not a subset of, or generated from, some other dataset; these are all original phone images taken for the purpose of constructing this dataset.

4. **Who was involved in the data collection process (e.g. students, crowdworkers, contractors) and how were they compensated (e.g. how much were crowdworkers paid)?**

    All work was volunteer work.

5. **Over what timeframe was the data collected?** Does this timeframe match the creation timeframe of the data associated with the instances (e.g. recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. Finally, list when the dataset was first published.

    Work started on 2020-11-12. Collection is ongoing. The git repo was launched on 2021-11-11.

6. **Were any ethical review processes conducted (e.g. by an institutional review board)?** If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.

    No. This project started organically.

7. **Does the dataset relate to people?** If not, you may skip the remainder of the questions in this section.

    Mostly no. Some people do appear in the images.

8. **Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g. websites)?**

    No.

9. **Were the individuals in question notified about the data collection?** If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.

    No.

10. **Did the individuals in question consent to the collection and use of their data?** If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented.

    No.

11. **If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?** If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).

    N/A.

12. **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g. a data protection impact analysis) been conducted?** If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.

    No.

13. **Any other comments?**

    No.


## Preprocessing / Cleaning / Labeling

*Dataset creators should read through these questions prior to any pre-processing, cleaning, or labeling and then provide answers once these tasks are complete. The questions in this section are intended to provide dataset consumers with the information they need to determine whether the “raw” data has been processed in ways that are compatible with their chosen tasks. For example, text that has been converted into a “bag-of-words” is not suitable for tasks involving word order.*

1. **Was any preprocessing/cleaning/labeling of the data done (e.g. discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** If so, please provide a description. If not, you may skip the remainder of the questions in this section.

    All data is provided as received.

2. **Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g. to support unanticipated future uses)?** If so, please provide a link or other access point to the "raw" data.

    Yes, the data is stored in its original form as given by the phone.

3. **Is the software used to preprocess/clean/label the instances available?** If so, please provide a link or other access point.

    Software on the phone may include post-processing. I'm unaware of what these methods are.

4. **Any other comments?**

    Several attributes, like precomputed homographies between the
    "before/after" images, are provided in the IPFS distribution as a
    lightweight cache, and the shitspotter codebase contains the code used
    to compute them. A generic sketch of this kind of alignment follows.

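    For readers who want to recompute such an alignment themselves, a rough
    sketch with OpenCV (ORB feature matching plus RANSAC; this illustrates
    the general technique, not necessarily the exact method used for the
    cached homographies):

    ```python
    import cv2
    import numpy as np

    def estimate_homography(img_before, img_after):
        """Estimate a 3x3 homography mapping img_before onto img_after."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img_before, None)
        kp2, des2 = orb.detectAndCompute(img_after, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects outlier correspondences between the two photos.
        H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H
    ```
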
## Uses

*These questions are intended to encourage dataset creators to reflect on the tasks for which the dataset should and should not be used. By explicitly highlighting these tasks, dataset creators can help dataset consumers to make informed decisions, thereby avoiding potential risks or harms.*

1. **Has the dataset been used for any tasks already?** If so, please provide a description.

    No.

2. **Is there a repository that links to any or all papers or systems that use the dataset?** If so, please provide a link or other access point.

    Currently there are none that I know of, but the main README will be
    updated with this information: https://github.com/Erotemic/shitspotter

3. **What (other) tasks could the dataset be used for?**

    There is potential to classify different types of poop from different
    species. Images contain other content such as local park scenery, grass,
    and leaves. Additional annotations could be placed on those objects for
    other tasks.

4. **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g. stereotyping, quality of service issues) or other undesirable harms (e.g. financial harms, legal risks)? If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms?

    There is a bias towards poops from certain individuals. There is a
    long-tailed distribution over the identity of the pooper. Some poops are
    older than others; that distribution is unlabeled, but a human annotator
    may be able to guess the ages (or the individual animal in some cases).
    It's also not 100% clear that all images are dog poop. Most certainly
    are, but some may not be.

5. **Are there tasks for which the dataset should not be used?** If so, please provide a description.

    Nothing comes to mind.

6. **Any other comments?**

    The motivating use case is to build a phone application that can help dog
    owners find lost poops. There are several other use cases I can imagine,
    some more elaborate than others:

    * Automatic waste cleanup robots
    * "Smart glasses" augmented reality to warn you before you step in poop.
    * As supplemental data for species identification from images of feces.


## Distribution

*Dataset creators should provide answers to these questions prior to distributing the dataset either internally within the entity on behalf of which the dataset was created or externally to third parties.*

1. **Will the dataset be distributed to third parties outside of the entity (e.g. company, institution, organization) on behalf of which the dataset was created?** If so, please provide a description.

    It will be freely available as long as someone is willing to host it.

2. **How will the dataset be distributed (e.g. tarball on website, API, GitHub)?** Does the dataset have a digital object identifier (DOI)?

    No DOI yet. It is being made available via IPFS, BitTorrent, and
    centralized means.

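    As an illustration, content-addressed data can be fetched through any
    public IPFS HTTP gateway. The gateway choice is arbitrary, the CID below
    is a placeholder for the current root CID announced in the README, and
    the manifest path at the root is an assumption:

    ```python
    import requests

    CID = 'bafy...'  # placeholder: substitute the current root CID

    # Any public IPFS HTTP gateway can serve content-addressed data.
    url = f'https://ipfs.io/ipfs/{CID}/data.kwcoco.json'
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    print(len(resp.content), 'bytes fetched')
    ```
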
3. **When will the dataset be distributed?**

    I update about once a month.

4. **Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?** If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.

    All data is free to use under "Creative Commons Attribution 4.0 International".

5. **Have any third parties imposed IP-based or other restrictions on the data associated with the instances?** If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.

    No.

6. **Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?** If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.

    No.

7. **Any other comments?**

    No.


## Maintenance

*As with the previous section, dataset creators should provide answers to these questions prior to distributing the dataset. These questions are intended to encourage dataset creators to plan for dataset maintenance and communicate this plan with dataset consumers.*

1. **Who is supporting/hosting/maintaining the dataset?**

    Currently, the main author hosts an IPFS server. Their employer's IPFS
    server also pins the information, and other entities may be pinning it.

2. **How can the owner/curator/manager of the dataset be contacted (e.g. email address)?**

    GitHub Issues for this project.

3. **Is there an erratum?** If so, please provide a link or other access point.

    No.

4. **Will the dataset be updated (e.g. to correct labeling errors, add new instances, delete instances)?** If so, please describe how often, by whom, and how updates will be communicated to users (e.g. mailing list, GitHub)?

    Roughly monthly, with a push to GitHub.

5. **If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g. were individuals in question told that their data would be retained for a fixed period of time and then deleted)?** If so, please describe these limits and explain how they will be enforced.

    N/A.

6. **Will older versions of the dataset continue to be supported/hosted/maintained?** If so, please describe how. If not, please describe how its obsolescence will be communicated to users.

    Possibly, as long as the IPFS CIDs remain alive. At the time of writing,
    all versions of the dataset should still be available.

7. **If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?** If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to other users? If so, please provide a description.

    Yes. Take it and do what you want: ideally something cool and good. It
    would be nice to throw us a citation though.

8. **Any other comments?**

    No.
README.md CHANGED
@@ -38,7 +38,7 @@ a centralized Girder server, and HuggingFace.
 - **License:**, Code: Apache 2.0, Data: Creative Commons Attribution 4.0 International (CC BY 4.0)
 
 
-### Dataset Sources [optional]
+### Dataset Sources
 
 - **Repository:** https://github.com/Erotemic/shitspotter
 - **Paper:** Preprint: https://www.arxiv.org/abs/2412.16473
@@ -144,31 +144,32 @@ The dataset creator; contributors may annotate in future.
 
 #### Personal and Sensitive Information
 
-Some EXIF location metadata is stripped.
+Some EXIF location metadata is stripped, but many images are unmodified.
+Consent was received to publish EXIF metadata along with these images.
 
 ## Bias, Risks, and Limitations
 
-Geographic bias: Mostly Upstate NY, (one photo from Greece).
+Geographic bias: Mostly Upstate NY.
 
 Sensor bias: Primarily Google Pixel 5
 
 Collector/dog bias: Same person and 3 dogs over time
 
-Bias in freshness and lighting: More recent, fresh, daytime images dominate.
+Bias in freshness, specific dogs, and lighting: More recent, fresh, daytime
+images with poops from the author's dogs dominate.
 
 
 ### Recommendations
 
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
 When applying models trained on this dataset to new domains (e.g., different locations, animals, lighting),
 expect a performance gap unless domain adaptation is applied.
 Future work may include generalizing across feces types.
 
 
-## Citation [optional]
+## Citation
 
-<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+The current paper is not peer-reviewed. When a peer-reviewed version of the
+paper is published we will update the citation.
 
 **BibTeX:**
@@ -186,7 +187,7 @@
 Crall, J. (2024). ScatSpotter 2024 — A Distributed Dog Poop Detection Dataset. arXiv:2412.16473.
 
 
-## Glossary [optional]
+## Glossary
 
 B/A/N Protocol: Before/After/Negative triplets for contrastive learning
@@ -195,12 +196,22 @@ IPFS: InterPlanetary File System, a decentralized, content-addressable data stor
 Polygon annotation: High-precision mask of object shape
 
 
-## More Information [optional]
+## More Information
 
 Dataset access via: IPFS, BitTorrent, and Girder
 
 Datasheet: Available in repo
 
+2024 Paper Preprint: https://www.arxiv.org/abs/2412.16473
+
+Torrent: https://academictorrents.com/details/ee8d2c87a39ea9bfe48bef7eb4ca12eb68852c49
+
+IPNS address: /ipns/k51qzi5uqu5dje1ees96dtsoslauh124drt5ajrtr85j12ae7fwsfhxb07shit
+
+Recent IPFS CID: /ipfs/bafybeihsd6rwjha4kbeluwdjzizxshrkcsynkwgjx7fipm5pual6eexax4
+
+Girder Mirror: https://data.kitware.com/#user/598a19658d777f7d33e9c18b/folder/66b6bc34f87a980650f41f90
+
 ## Dataset Card Contact
 
 GitHub Issues: https://github.com/Erotemic/shitspotter/issues