---
base_model:
- allura-org/Gemma-3-Glitter-4B
- soob3123/Veiled-Calla-4B
- SicariusSicariiStuff/X-Ray_Alpha
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- creative
language:
- en
- ru
---
# merge

This is a merge of pre-trained Gemma 3 language models, created with [mergekit](https://github.com/arcee-ai/mergekit).

Certain circumstances reminded me what a truly low-end computer is, and so the idea for this merge was born.

The goal of this merge is an uncensored, reasonably smart all-round model that is small enough to run on genuinely ancient hardware.
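For readers unfamiliar with mergekit, a recipe combining these three models might look like the sketch below. The merge method, weights, and dtype are illustrative assumptions, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit config (illustrative only): a simple linear
# weighted average of the three component models.
models:
  - model: allura-org/Gemma-3-Glitter-4B
    parameters:
      weight: 0.4
  - model: soob3123/Veiled-Calla-4B
    parameters:
      weight: 0.3
  - model: SicariusSicariiStuff/X-Ray_Alpha
    parameters:
      weight: 0.3
merge_method: linear
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./output-model`.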
This model inherits the main strengths of its components: it is largely uncensored (within reason; extreme content was not tested), creative, multilingual, AND has fairly good uncensored vision.

The model was tested on a Core 2 Duo E8400 with 8 GB of DDR2 RAM, partially offloaded to a 2 GB GT 630. It was more of an experiment, but the speed was bearable. On normal hardware, the model is FAST.

Of course, it is a 4B model after all, so don't expect 24B or 32B performance, but for its size it is really good. For vision, I used the mmproj from SicariusSicariiStuff/X-Ray_Alpha.
I tested both the Q5_K_M and Q8_0 quantizations; both are stable, but I recommend Q8_0 if possible.
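Some back-of-the-envelope arithmetic shows why even Q8_0 fits comfortably in 8 GB of RAM. The bits-per-weight figures below are rough averages for these GGUF quant types, not exact values; real file sizes vary with tensor mix and metadata:

```python
# Approximate weight sizes for a ~4B-parameter model under different
# GGUF quantizations. Bits-per-weight values are rough averages.

PARAMS = 4e9  # ~4 billion parameters

BITS_PER_WEIGHT = {
    "Q5_K_M": 5.5,  # approximate effective bits per weight
    "Q8_0": 8.5,
}

def approx_size_gib(params: float, bits_per_weight: float) -> float:
    """Approximate size of the quantized weights in GiB."""
    return params * bits_per_weight / 8 / 1024**3

for quant, bits in BITS_PER_WEIGHT.items():
    print(f"{quant}: ~{approx_size_gib(PARAMS, bits):.1f} GiB")
```

Both quantizations land well under 8 GiB of weights, leaving room for the OS and the KV cache, which is consistent with the test setup described above.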