Update README.md
README.md (changed)
@@ -26,7 +26,7 @@ Of course, I recommend conducting your own independent analysis of content and c

* [DrawThings.ai](https://drawthings.ai) have uploaded [`megalith-10m-sharecap`](https://huggingface.co/datasets/drawthingsai/megalith-10m-sharecap) (captions made with [ShareCaptioner](https://huggingface.co/Lin-Chen/ShareCaptioner)) <br/><a href="https://huggingface.co/datasets/drawthingsai/megalith-10m-sharecap"><img src='https://cdn-uploads.huggingface.co/production/uploads/630447d40547362a22a969a2/vXM-x4TNfRn3AQTRGveLn.png' width=720px/></a>
* [AI Picasso](https://aipicasso.app) have uploaded [`megalith-10m-florence2`](https://huggingface.co/datasets/aipicasso/megalith-10m-florence2) (captions made with [Florence 2](https://huggingface.co/microsoft/Florence-2-large)) <br/><a href="https://huggingface.co/datasets/aipicasso/megalith-10m-florence2"><img src='https://cdn-uploads.huggingface.co/production/uploads/630447d40547362a22a969a2/RVHZluYqq4-pB1mFpq5Qj.png' width=720px/></a>
* [CaptionEmporium](https://huggingface.co/CaptionEmporium) have uploaded [`flickr-megalith-10m-internvl2-multi-caption`](https://huggingface.co/datasets/CaptionEmporium/flickr-megalith-10m-internvl2-multi-caption) (captions made with [InternVL2-8B](https://huggingface.co/datasets/CaptionEmporium/flickr-megalith-10m-internvl2-multi-caption/blob/main/OpenGVLab/InternVL2-8B) as well as shorter single-sentence captions made by summarizing the InternVL2/Florence2/ShareCaptioner results with [Llama3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)) <br/><a href="https://huggingface.co/datasets/CaptionEmporium/flickr-megalith-10m-internvl2-multi-caption"><img src='https://cdn-uploads.huggingface.co/production/uploads/630447d40547362a22a969a2/BObamPthy8kiQICGjCQ4f.png' width=720px/></a>
-* [
+* [Moondream](https://moondream.ai) have uploaded [`megalith-mdqa`](https://huggingface.co/datasets/moondream/megalith-mdqa), captions and Q&A pairs made with Moondream

### How can I efficiently download the images referenced by Megalith-10m?
@@ -78,4 +78,5 @@ Of course, many parts of the world aren't well-represented in Megalith-10m, so y

1. AI Picasso have successfully trained a full text-to-image model [CommonArt β](https://huggingface.co/aipicasso/commonart-beta) on Megalith-10m (and other open datasets).
2. I've successfully trained small [text-to-image models](https://x.com/madebyollin/status/1788282620981497981) on Megalith-10m for my own education.
-3. Megalith-10m was among the datasets used to train [Janus](https://github.com/deepseek-ai/Janus)
+3. Megalith-10m was among the datasets used to train DeepSeek's [Janus](https://github.com/deepseek-ai/Janus) and [DeepSeek-VL2](https://arxiv.org/abs/2412.10302) models
+4. Megalith-10m was used to train [Bokeh Diffusion](https://atfortes.github.io/projects/bokeh-diffusion/) which adds bokeh control to T2I models
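Since every caption set listed in the first hunk is published as a Hugging Face dataset, a minimal sketch of inspecting one of them with the standard `datasets` library is shown below. The split name and column names here are assumptions, not guarantees; check each dataset card for the actual schema.

```python
# Minimal sketch: stream one of the Megalith-10m caption datasets referenced above
# and inspect its schema. The split and column names are assumptions -- consult
# the dataset card on Hugging Face for the real layout.
from datasets import load_dataset

ds = load_dataset(
    "drawthingsai/megalith-10m-sharecap",  # any of the caption datasets above
    split="train",    # assumed split name
    streaming=True,   # avoid downloading all ~10M rows up front
)

row = next(iter(ds))
print(sorted(row.keys()))  # dataset-specific columns, e.g. image URL + caption text
```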