uncloseai.

Piper TTS

Self-host — Not on the public endpoint. Clone the repo to use this engine.

What It Does

Choose this engine when you need speed over everything else. It runs entirely on CPU, with no GPU required, and produces speech in real time. It's the fastest engine in the raccoon dumpster, built on the Piper project, which we rescued and maintain.

Over 100 English voices are available through LibriTTS voice models. The output is clean 22.05kHz audio, synthesized through an ONNX runtime that runs on virtually any hardware.

If you're building a high-volume batch pipeline, a real-time assistant, or anything where latency matters more than naturalness, this is your engine.

Example

Once self-hosted and enabled, it works through the same OpenAI-compatible API:

from openai import OpenAI

client = OpenAI(
    api_key="not-needed",
    base_url="http://localhost:8000/v1"
)

# Piper voices use the tts-1 model.
# Stream the audio straight to a file (the non-streaming
# response's stream_to_file() is deprecated in openai-python).
with client.audio.speech.with_streaming_response.create(
    model="tts-1",
    voice="alloy",
    input="Rescued from the dumpster, running at full speed. No GPU needed, just pure CPU synthesis."
) as response:
    response.stream_to_file("piper.mp3")
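Because the server speaks the OpenAI audio API, the SDK is optional; the same request can be made as a raw HTTP call. This sketch follows the standard /v1/audio/speech request schema (no Authorization header is needed against a local self-hosted endpoint):

```shell
# POST a JSON body with model, voice, and input text;
# the response body is the synthesized audio.
curl http://localhost:8000/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "voice": "alloy",
    "input": "Rescued from the dumpster, running at full speed."
  }' \
  --output piper.mp3
```

This is handy for smoke-testing a fresh deployment before wiring up any client code.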

Technical Details

Self-Hosting

Enable Piper by downloading voice models and adding them to your config:

git clone https://git.unturf.com/engineering/unturf/uncloseai-speech.git
cd uncloseai-speech
make deploy
make voices-piper

Then uncomment the tts-1 section in voice_to_speaker.default.yaml and restart the service.
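As a rough sketch, the tts-1 section maps each OpenAI-style voice name to a Piper ONNX voice model. The model filename and speaker ID below are illustrative, not authoritative; use the values that ship with your checkout after make voices-piper:

```yaml
tts-1:
  alloy:
    model: voices/en_US-libritts_r-medium.onnx  # illustrative model path
    speaker: 79  # illustrative LibriTTS speaker ID
```

Multi-speaker LibriTTS models expose many speakers from one .onnx file, which is how a single download can back the full voice list.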
