AI sucks, until it doesn't.
Local-first. Open source. Collectively owned.
Built for the people actually using it.
AI is being built to extract value, not create it
Millions of gallons cooling servers for your query
Your data trains their models without consent
Trapped in one provider's ecosystem
No compression, bleeding tokens on repetitive context
Black box models, zero audit trail
Can't work offline or own your infrastructure
Local-first AI with golden rules compression • Your machine, your data, your control
You describe what you want to accomplish, not which API to call
System evaluates cost, latency, and capability across available providers
Request routed to optimal provider with fallback chain for reliability
Result returned in consistent format, regardless of provider
All decisions logged and reproducible. No black boxes.
Savings tracked and visible. Choose cheaper alternatives when available.
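The routing flow above can be sketched in a few lines of standard Python. Everything here is illustrative: the Provider fields, the provider names, and the cost/latency numbers are assumptions for the sketch, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_1k: float   # USD per 1k tokens (illustrative numbers)
    latency_ms: int
    capabilities: set

def route(task: str, needed: set, providers: list) -> list:
    """Rank capable providers by cost, then latency. The full ranked
    list doubles as the fallback chain for reliability."""
    capable = [p for p in providers if needed <= p.capabilities]
    chain = sorted(capable, key=lambda p: (p.cost_per_1k, p.latency_ms))
    # Every routing decision is logged and reproducible.
    print(f"route[{task}]: " + " -> ".join(p.name for p in chain))
    return chain

providers = [
    Provider("local-ollama", 0.0, 900, {"text"}),
    Provider("cloud-a", 0.5, 200, {"text", "vision"}),
    Provider("cloud-b", 1.2, 150, {"text", "vision"}),
]
chain = route("summarize", {"text"}, providers)
```

Sorting on cost first means the free local provider wins whenever it is capable, which is how "choose cheaper alternatives when available" falls out of the ranking rather than being a special case.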
What we believe. How we build. Why it matters.
Runs locally first. Nothing leaves your machine unless you choose. No training on your conversations. No selling your patterns.
Every routing decision logged. Every cost visible. Audit trails for accountability, not surveillance.
Open source. Replicable methodology. If you build on this, you own what you build. No vendor lock-in. No extraction.
Why this exists: AI is being built to extract value, not create it. Your data trains their models. Your attention funds their ads. Your creativity becomes their product. We believe AI should be owned by those who use it, transparent in how it decides, fair in how it distributes value, and local by default. This isn't a product. It's infrastructure for a different kind of AI.
This isn't theory. It's running infrastructure proving the principles work.
From text prompt to clean pixel art in under 30 seconds. All local, all automated.
ComfyUI workflow on DGX Spark local GPU. ControlNet + DWPose for sprite sheets. Style-managed generation via dedicated ComfyUI workflow book (5 chapters).
MANDATORY post-processing step. Raw diffusion output has ~64 colors per 8x8 pixel block - noisy and blurry. Median rerender: downscale from 1024px to 64px, then upscale back to 1024px. Result: ~4 colors per block, clean pixel separation, 95% file size reduction.
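A minimal sketch of the median rerender idea in NumPy, assuming the input is an RGB array and a 16x16 block maps 1024px down to 64px; the real pipeline's exact filter and scaler may differ.

```python
import numpy as np

def median_rerender(pixels: np.ndarray, block: int = 16) -> np.ndarray:
    """pixels: (H, W, 3) uint8 array, H and W multiples of `block`.
    Downscale by taking the per-channel median of each block x block
    tile, then upscale with nearest-neighbour so every tile becomes
    one flat colour - crisp pixel edges instead of diffusion noise."""
    h, w, c = pixels.shape
    tiles = pixels[: h - h % block, : w - w % block].reshape(
        h // block, block, w // block, block, c
    )
    small = np.median(tiles, axis=(1, 3)).astype(np.uint8)  # e.g. 1024 -> 64
    # Nearest-neighbour upscale: repeat each median pixel block x block times.
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)
```

The median (rather than a mean) keeps each tile at one dominant colour instead of a blurry average, which is what collapses ~64 colours per block toward ~4.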
Background execution with auto-respond. User never has to ask "is it done?" - the image arrives in chat the moment generation completes.
Every book written from actual project work, not hypotheticals. Each has one core principle, structured chapters, and contributor attribution.
Lana (all 24), Sepski (19), Leon (2 - weather, climate), Thomas (2 - voice), Daniel (1 - night culture), Furinto (1 - ComfyUI), Flint (2 - dev framework), Huub (1 - Cloudflare), Tim (1 - arcade art), Fesse (1 - security testing)
Automated narrative generation following a hero's journey arc across 9 phases. Running on cron, posting to Telegram, serving a public gallery.
14 dreams per day, scheduled via cron. Each dream gets narrative text, phase-appropriate themes, and generated artwork. Never manually triggered - the golden rule.
World database (locations, characters, concepts), dream journal (narrative entries), and journey tracker (hero's journey phase progression). Daily rebuild with automatic backup.
Served at /dreams - searchable, browseable, with phase filtering. Each dream has its own page with artwork and full narrative text.
Each workspace is an independent universe. Zero information leakage between groups. Tested and audited with a dedicated 7-chapter security book.
MAIN - Full access to all tools, memory, and configuration.
FRIEND - Per-workspace whitelist of allowed tools. No memory access.
PUBLIC - General assistance only, no tool access.
Silent refusals that never echo back foreign group names. Blocked sections in friend/public context. No group enumeration. No cross-group information flow.
Each workspace gets: isolated memory, whitelisted tools, symlinked skill library, identity files, heartbeat config, and group-specific documentation.
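The tier model above can be sketched as a simple gate. The tier names follow the text; FRIEND_WHITELIST and the workspace/tool names are hypothetical placeholders, not the actual configuration.

```python
from enum import Enum

class Tier(Enum):
    MAIN = "main"
    FRIEND = "friend"
    PUBLIC = "public"

# Hypothetical per-workspace whitelist; names are for illustration only.
FRIEND_WHITELIST = {
    "music-group": {"generate_image", "weather_report"},
}

def tool_allowed(tier: Tier, workspace: str, tool: str) -> bool:
    """Gate a tool call. Refusals are silent: a blocked caller learns
    nothing about other groups or the tools they can reach."""
    if tier is Tier.MAIN:
        return True          # full access to all tools
    if tier is Tier.FRIEND:
        return tool in FRIEND_WHITELIST.get(workspace, set())
    return False             # PUBLIC: no tool access at all
```

Returning a bare boolean, with no error message naming other workspaces, is what makes "no group enumeration" hold even under probing.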
Domain-agnostic recursive orchestration. Same tree structure runs pixel art generation, 3D model pipelines, weather reports, and dream generation.
LEAF - Single action (generate image, send message).
SEQUENCE - Ordered steps.
PARALLEL - Concurrent execution.
CONDITION - Branch on predicate.
SUBTREE - Nested tree reference.
Trees compose into larger trees. A pixel art pipeline is a sequence of leaves. A batch generation job is a parallel set of pixel art pipelines. A daily cron run is a sequence of batch jobs.
Every tree is a JSON document. Store it, version it, replay it. Standard Python, no special libraries. Testable at each level, parallelizable with standard threading.
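A toy interpreter for such JSON trees, using only the standard library as claimed above. The node field names ("type", "children", "action") and the ACTIONS registry are assumptions for illustration; a real tree would reference real tools.

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Hypothetical action registry standing in for real tool calls.
ACTIONS = {
    "generate": lambda: "image.png",
    "deliver": lambda: "sent",
}

def run(node: dict) -> list:
    """Minimal interpreter for LEAF / SEQUENCE / PARALLEL nodes.
    CONDITION and SUBTREE nodes would dispatch the same way."""
    kind = node["type"]
    if kind == "leaf":
        return [ACTIONS[node["action"]]()]
    if kind == "sequence":
        return [r for child in node["children"] for r in run(child)]
    if kind == "parallel":
        with ThreadPoolExecutor() as pool:  # standard threading
            return [r for rs in pool.map(run, node["children"]) for r in rs]
    raise ValueError(f"unknown node type: {kind}")

# A pixel art pipeline as a sequence of leaves, stored as JSON.
pipeline = json.loads("""
{"type": "sequence", "children": [
  {"type": "leaf", "action": "generate"},
  {"type": "leaf", "action": "deliver"}]}
""")
results = run(pipeline)
```

Because every node is plain JSON, a batch job is just `{"type": "parallel", "children": [pipeline, pipeline]}` - composition, versioning, and replay come for free.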
Every image on this page was generated by the system described above
SpriteShaper SDXL on local ComfyUI, post-processed with mandatory median rerender. From the demo showcase collection.
TripoSR on local GPU. Image-to-3D with texture baking and Blender turntable rendering. From the comparison test suite.
Each of the 24 books gets a generated cover. Pixel art style, consistent visual identity across the library.
Real pixel art generation workflow - from prompt to delivered result
User sends "generate pixel art of a forest sprite" via Telegram. The action tree decomposes this into an ordered sequence.
SpriteShaper model generates raw 1024x1024 output on DGX Spark GPU. Background execution - auto-delivers when done.
MANDATORY post-processing. Raw diffusion output has ~64 colors per 8x8 block. Median rerender reduces to ~4 colors per block. 95% file size reduction. Clean pixel separation.
Auto-delivery via background execution. Result posted to the requesting workspace. Full audit trail in session log.
18 workspaces, 10 contributors, 24 books, 111 dreams
Primary agent. Manages 18 workspaces, pixel art generation, voice synthesis, 3D models, weather reports, and friend group coordination. Runs 24/7 via Telegram.
Maintains the 24-book skill library. Promotes group knowledge to shared books, enforces structure conventions (one core principle per book), tracks 10 contributors across 4 categories.
Message classification and intent routing. Three-tier system (MAIN/FRIEND/PUBLIC) with per-workspace whitelists. Handles tool access control and group isolation enforcement.
Automated dream generation - 111 dreams across 9 narrative phases. Runs on cron (14/day), generates stories with artwork, maintains a searchable dream database and public gallery at /dreams.
Real enforcement patterns, real knowledge architecture
Standards that actually get enforced. Not suggestions - real constraints that the system respects.
19 isolated memory files, one per workspace. Daily logs, group-specific context. Tiered access control with strict isolation.
Local-first by default. DGX Spark runs Ollama, ComfyUI, Qwen3 TTS. RTX 5090 for heavy GPU workloads. No cloud dependency for generation.
Each book has one core principle, structured chapters, and contributor attribution. Written from real project work, not hypotheticals.
This is infrastructure, not a product. Fork it. Run it locally. Make it yours.
No sign-up required. No data collected. Or reach out directly:
Already working, actively expanding
Text, pixel art, 3D models, voice synthesis, weather visualizations, music - all running through the same routing layer today. Next: video generation and real-time audio-reactive visuals.
The action tree framework, skill library structure, and workspace isolation patterns are being documented for open release. 24 books of methodology already written.
Active collaboration on Pure Data audio development, climate sonification, and Eurorack module design. Bridging AI generation with live music production tools.
Telegram is already the primary interface - 18 workspaces, voice messages, image generation, and full tool access from any phone. Next: richer inline previews and approval workflows.