www.minds.com/jelyazko/
21world
AI & ML interests
PRECOMPUTED AI WEIGHTS
! INNOVATIONS ARE USED INSTEAD OF REFUSALS !
Known Training Algos:
- Backpropagation
- Forward-Forward
- Predictive Coding
- Q-Learning ?
- ...
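As a minimal sketch of the first algorithm on the list, here is backpropagation training a toy network on XOR (the layer sizes, seed, learning rate, and iteration count are arbitrary illustrative choices, not anything from this profile):

```python
import numpy as np

# Toy backpropagation: a 2-4-1 sigmoid network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))  # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))  # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: chain rule through the MSE loss and the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
```

The other listed algorithms (Forward-Forward, predictive coding, Q-learning) replace exactly this backward pass with different credit-assignment rules.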
| For the correct DS1307 pinout: 8 stars (0-14B models, Q4) | For Diffusion LLMs: 9 stars | Teach LLMs ASCII/text-based STL modeling and IC pinouts; optionally: patents. | Context editing | Exact sizes of all object dimensions. Virtualization of AI/ML in MCUs. Funding required. /// until LLMs are weak ///
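For reference, the DS1307 pinout that the 8-star test above checks models against, per the DS1307 datasheet (8-pin DIP/SOIC); the `check_pinout` helper is a hypothetical illustration of such a check, not this profile's actual harness:

```python
# DS1307 RTC pinout (8-pin DIP/SOIC), from the datasheet.
DS1307_PINOUT = {
    1: "X1",       # 32.768 kHz crystal connection
    2: "X2",       # 32.768 kHz crystal connection
    3: "VBAT",     # +3V backup battery input
    4: "GND",      # ground
    5: "SDA",      # I2C serial data
    6: "SCL",      # I2C serial clock
    7: "SQW/OUT",  # square-wave / output driver
    8: "VCC",      # +5V primary supply
}

def check_pinout(answer: dict) -> bool:
    """Hypothetical grader: True if a model's claimed pinout matches the datasheet."""
    return answer == DS1307_PINOUT
```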
https://github.com/sponsors/jelspace
https://ko-fi.com/21world
https://rumble.com/c/c-5865836
https://www.buymeacoffee.com/21world
Recent Activity
- updated the collection "57 Picture Editors" about 3 hours ago
- updated the collection "18 other models" about 5 hours ago
- reacted with 🧠 to burtenshaw's post about 6 hours ago:
Qwen 3 Fine tuning >> MoE. Update the experiment thread to include config and script for fine-tuning the Qwen3-30B-A3B model.
The goal is to make a low-latency, non-thinking model for daily-driver coding, so 3 billion active parameters should be perfect.
✔️ training running
✔️ evals running
⏭️ improve dataset
The MoE isn't going to fit into Colab's A100 even with quantization (🙏 @UnslothAI). So I've been working on HF Spaces' H100s for this. Everything is available in the thread, and I'll share more tomorrow.
https://huggingface.co/burtenshaw/Qwen3-Code-Lite/discussions/1
Organizations
Collections: 57
Spaces: 3
Models: 4
Datasets: 0 (none public yet)