Getting Started

TypoMonster Chat is an LLM orchestration layer that routes your AI requests through a centralized proxy. Use a single API key to access multiple providers, track usage, and manage rate limits — all without changing your existing code.

How It Works

  1. Create an API key on the API Keys page
  2. Install the SDK for your provider
  3. Swap the import — your existing code stays the same

The proxy sits between your app and the AI provider. It authenticates requests with your TypoMonster key, forwards them to the upstream provider, and logs usage to your analytics dashboard.
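
The flow above can be sketched in a few lines. This is a conceptual illustration only, not the actual proxy implementation; the types, header shapes, and function names here are hypothetical:

```typescript
// Hypothetical sketch of the proxy's request handling. The real service
// does this server-side; shapes and names are illustrative only.
type ProxyRequest = {
  headers: Record<string, string>;
  body: string;
};

// 1) Authenticate the TypoMonster key, 2) log usage for the dashboard,
// 3) forward the request upstream with the provider's own key.
function forwardRequest(
  incoming: ProxyRequest,
  upstreamKey: string,
  logUsage: (entry: { bytes: number }) => void,
): ProxyRequest {
  const auth = incoming.headers["authorization"] ?? "";
  if (!auth.startsWith("Bearer tmk_")) {
    throw new Error("Invalid TypoMonster API key");
  }
  logUsage({ bytes: incoming.body.length }); // analytics logging
  return {
    ...incoming,
    headers: { ...incoming.headers, authorization: `Bearer ${upstreamKey}` },
  };
}
```

Because the swap happens at the transport layer, your application code never sees the upstream provider's key.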

Quick Start (Google Gemini)

Google Gemini is fully supported today. Install the proxy SDK and the Vercel AI SDK:

bash
npm install @ai-proxy/google ai

Then use it just like @ai-sdk/google — only the import and config change:

typescript
import { createProxyGoogle } from "@ai-proxy/google";
import { generateText } from "ai";
 
const google = createProxyGoogle({
  apiKey: "tmk_your_api_key_here",
});
 
const { text } = await generateText({
  model: google("gemini-2.5-flash"),
  prompt: "Explain quantum computing in one paragraph.",
});
 
console.log(text);

That's it. Streaming, function calling, and all other Vercel AI SDK features work out of the box.
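
To make the function-calling claim concrete, here is a dependency-free sketch of what a tool-call round trip looks like under the hood. The SDK does all of this for you; the tool name and types below are hypothetical, not part of any API:

```typescript
// Conceptual sketch of a function-calling round trip. The Vercel AI SDK
// automates this loop; names and shapes here are illustrative only.
type ToolCall = { name: string; args: Record<string, unknown> };

// Local functions the model is allowed to invoke (hypothetical example tool).
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  getWeather: (args) => `Sunny in ${args.city}`,
};

// When the model responds with a tool call instead of text, run the
// matching local function and hand its result back to the model.
function runToolCall(call: ToolCall): string {
  const fn = tools[call.name];
  if (!fn) throw new Error(`Unknown tool: ${call.name}`);
  return fn(call.args);
}
```

With the proxy in place, the tool-call messages travel through the same authenticated channel as plain text requests, so no extra configuration is needed.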

Streaming Example

typescript
import { createProxyGoogle } from "@ai-proxy/google";
import { streamText } from "ai";
 
const google = createProxyGoogle({
  apiKey: "tmk_your_api_key_here",
});
 
const result = streamText({
  model: google("gemini-2.5-flash"),
  prompt: "Write a short poem about coding.",
});
 
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

Try It Out

Head to the Playground to test your API key with a live chat interface — no code required.

What's Next