# SymbioticLM-1B

- **Model Type:** Hybrid Symbolic–Transformer
- **Base Model:** Qwen-1B
- **Framework:** PyTorch + HuggingFace Transformers
- **Purpose:** Lightweight, memory-augmented reasoning model for CPU and embedded inference
## Overview
SymbioticLM-1B is the compact version of the SymbioticAI architecture. It fuses Qwen’s rotary transformer design with a symbolic processing pipeline and a persistent episodic memory. Though smaller in parameter count, it retains the full cognitive engine: symbolic memory, dynamic thought evolution, and entropy-gated control.
This model is well suited to symbolic reasoning in constrained environments such as research agents, lightweight assistants, and memory-efficient logical processing.
## Architecture Highlights
- **Backbone:** Qwen-1B rotary transformer
- **Symbolic Dim:** 1024
- **Symbolic Modules:**
  - ThoughtDynamicsLNN
  - CrystallineProcessor (DNAConv GNN)
  - LiquidThoughtProcessor
  - HelicalDNAProcessor
- **Memory:** 2048 symbolic vectors with entropic and contextual retrieval (see the retrieval sketch below)
- **Dream Mode:** Symbolic simulation with ThoughtGenerator
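The snippet below is a minimal illustrative sketch of how entropic and contextual scoring over the 2048-slot memory bank could be combined. The function name `retrieve`, the cosine/entropy scoring rule, and the 0.5 mixing weight are assumptions made for illustration, not the shipped retrieval algorithm.

```python
import torch
import torch.nn.functional as F

SYMBOLIC_DIM = 1024   # "Symbolic Dim" from the card
MEMORY_SLOTS = 2048   # "2048 symbolic vectors" from the card

def retrieve(memory: torch.Tensor, query: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Return the top-k memory vectors for a query.

    Contextual term: cosine similarity between the query and each stored vector.
    Entropic term: vectors whose normalized activation profile has low entropy
    are weighted up. Both terms and the mixing weight are illustrative assumptions.
    """
    context_score = F.cosine_similarity(memory, query.unsqueeze(0), dim=-1)   # (MEMORY_SLOTS,)
    probs = F.softmax(memory, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)             # (MEMORY_SLOTS,)
    entropic_score = 1.0 - entropy / torch.log(torch.tensor(float(SYMBOLIC_DIM)))
    score = context_score + 0.5 * entropic_score
    top = score.topk(k).indices
    return memory[top]

# Example with a random memory bank standing in for memory.pt
memory_bank = torch.randn(MEMORY_SLOTS, SYMBOLIC_DIM)
query_vec = torch.randn(SYMBOLIC_DIM)
print(retrieve(memory_bank, query_vec, k=4).shape)  # torch.Size([4, 1024])
```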
## Files Included
| File | Description |
|---|---|
| `model.bin` | PyTorch model weights |
| `model.safetensors` | SafeTensor weights |
| `memory.pt` | Serialized symbolic memory vectors |
| `config.json` | Model architecture config |
| `generation_config.json` | Generation strategy configuration |
| `tokenizer.json` | Tokenizer including custom symbolic tags |
| `added_tokens.json` | Special tokens such as `<THM>`, `<LEM>`, `<D_IF>` |
| `special_tokens_map.json` | Tokenizer-to-logic mappings |
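A minimal loading sketch with HuggingFace Transformers follows. It assumes the custom architecture is loadable via `trust_remote_code` and uses a placeholder repo id `SymbioticAI/SymbioticLM-1B` (substitute the real repo id or a local directory); how the loaded `memory.pt` tensor is attached to the model is not shown here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SymbioticAI/SymbioticLM-1B"  # assumption: replace with the actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,
    torch_dtype=torch.float32,  # CPU-friendly default, per the card's CPU focus
)
model.eval()

# Persistent symbolic memory shipped alongside the weights
memory = torch.load("memory.pt", map_location="cpu")
```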
## Intended Uses
- CPU-optimized symbolic inference
- Educational agents with memory
- Graph-based explanation generation
- Procedural planning, math modeling, and small-scale code generation
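Continuing from the loading sketch above, the snippet below illustrates a theorem-style prompt. It assumes the symbolic tags listed in `added_tokens.json` (`<THM>`, `<LEM>`, `<D_IF>`) are used as plain prompt delimiters, which is an assumption rather than documented behavior.

```python
# Illustrative prompt; tag usage is an assumption, not documented behavior.
prompt = "<THM> Every finite group of prime order is cyclic. <LEM>"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```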
## Limitations
- Less fluent in free-form language than larger variants
- Symbolic accuracy depends on careful memory curation
- Dream Mode requires warm-up or symbolic seeding for complex queries
## Citations
Symbolic components are rooted in cognitive modeling and discrepancy calculus research.