uncloseai-cli

Three Tools for Growing Machine Learning from Seed

The uncloseai-cli repository contains three tools that form a complete local ML pipeline: agent harness, model training, and dataset curation. Zero external dependencies. Pure Python. Public domain.

Install

git clone https://git.unturf.com/engineering/unturf/uncloseai-cli.git
cd uncloseai-cli
make install

This creates the uncloseai-cli, unclose, u, microgpt-cli, and microgpt commands in /usr/local/bin.

Source: git.unturf.com/engineering/unturf/uncloseai-cli


uncloseai-cli: Local LLM Agent

A minimal "Claude Code" style tool-calling agent powered by a local 8B parameter LLM. Every request flows through a todo system:

  1. Plan: LLM breaks request into numbered tasks
  2. Trim: Python removes fluff tasks (open/read, report/inform) and caps the list at 5 (sketched below)
  3. Execute: each task runs through a ReAct loop with tool access
  4. Forward: prior task results flow to later tasks as context
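
The trim phase is plain Python. A minimal sketch of the idea (the fluff verbs and the cap of 5 match the list above; the function name and example tasks are illustrative):

FLUFF_VERBS = ("open", "read", "report", "inform")

def trim_tasks(tasks, cap=5):
    """Drop prep/fluff tasks and cap the plan at `cap` entries."""
    kept = [t for t in tasks if not t.strip().lower().startswith(FLUFF_VERBS)]
    return kept[:cap]

trim_tasks(["Open the README", "Run git pull", "Report results to the user"])
# -> ['Run git pull']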

Usage

# Run a task
unclose "pull and sync"

# Multi-step: agent plans and executes each step
unclose "what time is it in EST? also run ddate"

# Print-only mode (minimal output, just final answer)
unclose -p "summarize SYSTEM-PROMPT.md"

# Verbose mode (show turn numbers)
unclose -v "deploy"

# Interactive REPL
unclose -i

Configuration

All configuration via environment variables. No config files needed:

Variable            Default                          Description
UNCLOSE_BASE        https://hermes.ai.unturf.com/v1  OpenAI-compatible API base URL
UNCLOSE_MODEL       (auto-detected)                  Model name from /v1/models
UNCLOSE_KEY         permacomputer                    API key
UNCLOSE_MAX_TURNS   15                               Max ReAct turns per task
UNCLOSE_MAX_RESULT  12000                            Max chars per tool result

Point it at any OpenAI-compatible endpoint: vLLM, Ollama, or the free uncloseai inference endpoints. By default it connects to hermes.ai.unturf.com. See Model Discovery for all available endpoints and how to query current model IDs.
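
A standard-library sketch of how that configuration can be resolved, including auto-detecting the model from the endpoint's /models route (the variable names and defaults come from the table above; the code itself is illustrative, not the agent's source):

import json
import os
import urllib.request

# Defaults mirror the configuration table; everything is overridable via the environment.
BASE = os.environ.get("UNCLOSE_BASE", "https://hermes.ai.unturf.com/v1")
KEY = os.environ.get("UNCLOSE_KEY", "permacomputer")

def detect_model():
    """Ask the OpenAI-compatible /models route for the first available model ID."""
    req = urllib.request.Request(f"{BASE}/models",
                                 headers={"Authorization": f"Bearer {KEY}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["data"][0]["id"]

MODEL = os.environ.get("UNCLOSE_MODEL") or detect_model()

For a local Ollama server, for example, set UNCLOSE_BASE=http://localhost:11434/v1 (Ollama's OpenAI-compatible base URL) before running unclose.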

Tools

The agent has access to 8 built-in tools (the bash tool is sketched below the table):

Tool      Args                  Description
bash      command               Run a shell command (60s timeout, dangerous commands blocked)
read      path                  Read a file (truncated at 12K chars)
write     path, content         Create or overwrite a file
edit      path, old, new        Replace exact string in file
glob      pattern, path         Find files by glob pattern
grep      pattern, path         Search file contents with regex
fetch     url, depth, keywords  Fetch web page, extract text (async crawler)
todo_add  content, activeForm   Add task to live todo list during execution
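
A sketch of the bash tool's contract, assuming a simple substring denylist for the dangerous-command check (the real agent's list and checks may differ):

import subprocess

BLOCKED = ("rm -rf /", "mkfs", "dd if=", ":(){", "shutdown", "reboot")  # illustrative denylist

def bash(command):
    """Run a shell command with a 60s timeout, refusing obviously dangerous ones."""
    if any(pattern in command for pattern in BLOCKED):
        return "error: command blocked by safety filter"
    try:
        proc = subprocess.run(command, shell=True, capture_output=True,
                              text=True, timeout=60)
    except subprocess.TimeoutExpired:
        return "error: command timed out after 60s"
    return (proc.stdout + proc.stderr)[:12000]  # UNCLOSE_MAX_RESULT default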

Web Fetching

The fetch tool wraps a production-grade async web crawler: it downloads a page, extracts the readable text, and can follow links to a given depth, filtering the result by keywords.
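
The snippet below is not the project's crawler, just a minimal standard-library sketch of the same shape: fetch a URL off the event loop, strip tags, and optionally keep only sentences matching keywords (link-following to a given depth is omitted for brevity):

import asyncio
import re
import urllib.request

def _download(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

async def fetch(url, keywords=None):
    """Fetch a page without blocking the event loop, then reduce it to plain text."""
    html = await asyncio.to_thread(_download, url)
    text = re.sub(r"<[^>]+>", " ", html)        # crude tag stripping
    text = re.sub(r"\s+", " ", text).strip()
    if keywords:
        sentences = [s for s in text.split(". ")
                     if any(k.lower() in s.lower() for k in keywords)]
        text = ". ".join(sentences)
    return text[:12000]

# asyncio.run(fetch("https://example.com", keywords=["domain"]))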

Session Logging

Every session is logged as JSONL to ~/.uncloseai/sessions/. Use unclose-snoop to parse session logs into a readable feed:

cat ~/.uncloseai/sessions/*.jsonl | unclose-snoop
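
The logs are also easy to post-process directly; a small sketch that walks every session file without assuming a particular record schema:

import json
from pathlib import Path

# Print the keys of every logged record, one session file at a time.
for log in sorted(Path.home().glob(".uncloseai/sessions/*.jsonl")):
    for line in log.read_text().splitlines():
        if line.strip():
            record = json.loads(line)
            print(log.name, sorted(record))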

microgpt: Pure-Python GPT

Minimal GPT training & inference with zero external dependencies: only os, math, random, json, argparse. Based on Karpathy's microgpt.

Implements scalar autograd (a Value class), multi-head attention, RMSNorm, an MLP block, an Adam optimizer with linear LR decay, and temperature-controlled sampling. Models persist to JSON with full metadata.
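
To give a feel for what scalar autograd means here, a heavily stripped-down, illustrative sketch of a micrograd-style Value class (microgpt's own Value also implements the other operations the model needs):

import math

class Value:
    """A scalar that remembers how it was computed, so gradients can flow backwards."""
    def __init__(self, data, children=()):
        self.data, self.grad = data, 0.0
        self._children, self._grad_fn = children, None

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def grad_fn():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._grad_fn = grad_fn
        return out

    def tanh(self):
        out = Value(math.tanh(self.data), (self,))
        def grad_fn():
            self.grad += (1 - out.data ** 2) * out.grad
        out._grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule from the output back.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for child in v._children:
                    visit(child)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v._grad_fn:
                v._grad_fn()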

Usage

# Train on a text dataset (one document per line)
microgpt train --dataset names.txt --steps 1024 --save model.json

# Generate samples from a trained model
microgpt generate --load model.json --samples 10 --temperature 0.5

# Train & generate in one shot
microgpt run --dataset names.txt --steps 1024 --samples 10

# Inspect a saved model
microgpt info model.json

Flags

Flag                 Default                  Subcommands
--dataset PATH       auto-download names.txt  train, run
--save PATH          microgpt-model.json      train, run
--load PATH          (required)               generate
--steps N            1000                     train, run
--lr FLOAT           0.01                     train, run
--n-embd N           16                       train, run
--n-head N           4                        train, run
--n-layer N          1                        train, run
--block-size N       16                       train, run
--samples N          20                       generate, run
--temperature FLOAT  0.5                      generate, run
--seed N             42                       all
--quiet              false                    train, run

Also ships as microgpt.h, a single-header C library for model inference.


garden.mk: Smol Model Garden

Pull 15 curated character-level datasets, grow GPT models at 3 sizes. Every dataset tracks its upstream origin. Mirrored to HuggingFace: russellbal/smol-seeds.

Datasets

Seed            Lines  Origin                                           Freshness
names           32K    karpathy/makemore                                Dormant (2022)
words           97K    dwyl/english-words                               Stable (2025)
pokemon         1K     sindresorhus/pokemon                             Stable (2024)
dinosaurs       1.5K   brunoklein99/deep-learning-notes                 Fossil (2018)
hex-colors      32K    xkcd + meodai/color-names                        Active
color-names     31K    meodai/color-names                               Active
chords          1K     tombatossals/chords-db                           Quarterly
json-keys       4K     GitHub OpenAPI spec                              Active
css-classes     2K     twbs/bootstrap                                   Quarterly
make-targets    291    scraped from major repos                         Active
commit-msgs     20K    angular/angular                                  Active
haiku           143K   docmarionum1/haikurnn                            Fossil (2018)
variable-names  3.8K   GitHub repo trees                                Active
last-names      225K   sacrificialpancakes/synthetic_demographics_seed  Stable
occupations     831    sacrificialpancakes/synthetic_demographics_seed  Stable

Usage

# Download all datasets
make -f garden.mk pull

# Train all models at embedding size 64
make -f garden.mk grow-64

# Train all at all sizes (64, 128, 512)
make -f garden.mk grow

# Pull + grow one model family
make -f garden.mk names

# List upstream sources
make -f garden.mk origins

# Check upstream freshness
make -f garden.mk freshness

# Sample from a trained model
make -f garden.mk sample MODEL=pokemon-64 N=10

# Show trained model inventory
make -f garden.mk inventory

# Push datasets to HuggingFace mirror
make -f garden.mk mirror

Model Sizes

Size  n_embd  n_head  n_layer  Use
64    64      4       1        Fast, good for testing
128   128     8       2        Balanced
512   512     8       4        Slow in pure Python, best quality

Architecture

The agent orchestration follows a plan-execute-forward pattern:

User Message
    ↓
Plan Phase (LLM call with planning-only prompt)
    ↓
JSON task array
    ↓
Trim Phase (Python, no LLM)
  • drop fluff (report/inform)
  • drop prep (open/read)
    ↓
Save to ~/.uncloseai/todos/{session}.json
    ↓
┌─→ Any pending tasks? ─── no → Display final todo list ✓
│       ↓ yes
│   Mark in_progress
│       ↓
│   ReAct Loop (tool calls until done or stuck)
│       ↓
│   Mark completed → Save → Append result (≤500 chars)
└───────┘

Each task in the ReAct loop can call tools, receive results, and iterate up to UNCLOSE_MAX_TURNS times. Stuck-loop detection bails out after 2 identical consecutive tool calls.

Prior task results are forwarded as context to later tasks, so multi-step requests build on earlier work without re-executing commands.
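
Put together, the orchestration is roughly this shape; a sketch of the pattern above, not the agent's actual code, where plan, react_loop, and save_todos stand in for the real internals (trim_tasks is the trim sketch from earlier):

def run_request(message, session):
    # Plan phase: one LLM call returns a JSON task array, then the Python-only trim.
    todos = [{"content": t, "status": "pending", "result": ""}
             for t in trim_tasks(plan(message))]
    save_todos(session, todos)

    context = []                      # results forwarded to later tasks
    for todo in todos:
        todo["status"] = "in_progress"
        save_todos(session, todos)
        # ReAct loop: tool calls until done, stuck, or UNCLOSE_MAX_TURNS is hit.
        result = react_loop(todo["content"], context, max_turns=15)
        todo["status"], todo["result"] = "completed", result[:500]
        context.append(result[:500])  # append result, capped at 500 chars
        save_todos(session, todos)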

Read-Only Mode

When the agent detects a read-only intent (questions, searches, inspections), it automatically disables write and edit tools. This prevents accidental file modifications on information-gathering requests.
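
A sketch of the idea, assuming a simple keyword heuristic for intent detection (the agent's real check may be more involved):

READ_ONLY_HINTS = ("what", "which", "where", "show", "list", "find", "search", "summarize")

def tools_for(request, all_tools):
    """Drop write/edit when the request looks like pure information gathering."""
    words = request.strip().lower().split()
    read_only = bool(words) and (words[0] in READ_ONLY_HINTS or request.rstrip().endswith("?"))
    if read_only:
        return {name: fn for name, fn in all_tools.items() if name not in ("write", "edit")}
    return all_tools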


License

Public domain. Knowledge unbound by gatekeepers.