Optimized for AI character chat

Reduce LLM inference costs for AI character chat

TypoMonster Chat is an LLM orchestration layer built for AI character chat. Cut inference costs with intelligent routing, caching, and analytics — all through a drop-in SDK replacement.
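Response caching is one of the cost levers mentioned above: an identical (model, prompt) pair can be served from cache instead of billing tokens again. As a rough illustration only (the class and method names here are hypothetical, not TypoMonster's actual implementation), a prompt-level cache might look like:

```typescript
// Hypothetical sketch of prompt-level response caching.
// Names are illustrative, not part of the @ai-proxy SDK surface.
class ResponseCache {
  private store = new Map<string, string>();
  hits = 0;
  misses = 0;

  // Key combines model and prompt so different models never collide.
  private key(model: string, prompt: string): string {
    return `${model}\u0000${prompt}`;
  }

  // Returns the cached completion for an identical (model, prompt) pair,
  // or invokes `generate` (the real LLM call) once and stores the result.
  async getOrGenerate(
    model: string,
    prompt: string,
    generate: () => Promise<string>,
  ): Promise<string> {
    const k = this.key(model, prompt);
    const cached = this.store.get(k);
    if (cached !== undefined) {
      this.hits++;
      return cached; // repeated prompt: no new tokens billed
    }
    this.misses++;
    const text = await generate();
    this.store.set(k, text);
    return text;
  }
}
```

In character chat, greetings and common opener prompts repeat constantly across users, which is why this kind of dedup can move the cost needle.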

Get started in two steps

1. Issue an API key

Create a project and generate an API key from the dashboard. Each key tracks usage and costs independently.

2. Drop in the ai-proxy SDK

Replace your existing AI SDK with @ai-proxy/core. Same interface, lower costs — no code rewrite needed.

app.ts (2 lines changed)

Before:

import { generateText } from "ai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
});

const { text } = await generateText({
  model: google("gemini-3.1-pro"),
  prompt: "Hello!",
});

After:

import { generateText } from "ai";
import { createProxyGoogle } from "@ai-proxy/google";

const google = createProxyGoogle({
  apiKey: process.env.TYPOMONSTER_API_KEY,
});

const { text } = await generateText({
  model: google("gemini-3.1-pro"),
  prompt: "Hello!",
});

Features

Realtime Analytics

Monitor token usage, latency, and costs across all providers in real time. Spot anomalies and optimize spend from a single dashboard.

Token Usage & Cost

[Chart: after adopting TypoMonster Chat, daily spend fell from $12.73/day to $8.21/day (35% cost saved) while token volume held steady at ~2M tokens/day over a two-week window (Day 1 to Day 13).]

Playground

Try it without writing any code. Test prompts against multiple models side-by-side, tweak parameters, and see how our system works before integrating.

[Playground demo: gemini-3.1-pro, streaming; temperature 0.9, top_p 0.95, max tokens 2048]

User: Hey Luna, what's your favorite season?
Luna: Oh, definitely autumn! There's something magical about the way leaves change color... it reminds me of how stories transform as you tell them.
User: That's poetic. Do you write?

Developer Friendly

Works with the tools you already use. Drop in our proxy SDK, use the OpenAI-compatible API, call via cURL, or integrate with the Vercel AI SDK — your choice.

import { generateText } from "ai";
import { createProxyGoogle } from "@ai-proxy/google";

const google = createProxyGoogle({
  apiKey: process.env.TYPOMONSTER_API_KEY,
});

const { text } = await generateText({
  model: google("gemini-3.1-pro"),
  prompt: "Hello!",
});