Getting Started
TypoMonster Chat is an LLM orchestration layer that routes your AI requests through a centralized proxy. Use a single API key to access multiple providers, track usage, and manage rate limits — all without changing your existing code.
How It Works
1. Create an API key on the API Keys page
2. Install the SDK for your provider
3. Swap the import — your existing code stays the same
The proxy sits between your app and the AI provider. It authenticates requests with your TypoMonster key, forwards them to the upstream provider, and logs usage to your analytics dashboard.
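Conceptually, the proxy does three things per request: authenticate, forward, log. The sketch below is a hypothetical illustration of that flow, not the actual proxy implementation — the function name, the `tmk_` prefix check, and the log format are assumptions for illustration (the `x-goog-api-key` header is the real Gemini auth header):

```typescript
// Hypothetical sketch of the proxy's per-request flow (not the real implementation).
type ProxyResult = { upstreamHeaders: Record<string, string>; logEntry: string };

function routeRequest(
  typoMonsterKey: string,
  upstreamApiKey: string,
  model: string,
): ProxyResult {
  // 1. Authenticate the request with the TypoMonster key
  //    (keys in this guide use a "tmk_" prefix).
  if (!typoMonsterKey.startsWith("tmk_")) {
    throw new Error("Invalid TypoMonster API key");
  }
  // 2. Forward: swap in the real provider credential for the upstream call.
  const upstreamHeaders = { "x-goog-api-key": upstreamApiKey };
  // 3. Log usage for the analytics dashboard.
  const logEntry = `request model=${model}`;
  return { upstreamHeaders, logEntry };
}
```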
Quick Start (Google Gemini)
Google Gemini is fully supported today. Install the proxy SDK and the Vercel AI SDK:
```shell
npm install @ai-proxy/google ai
```

Then use it just like @ai-sdk/google — only the import and config change:

```typescript
import { createProxyGoogle } from "@ai-proxy/google";
import { generateText } from "ai";

const google = createProxyGoogle({
  apiKey: "tmk_your_api_key_here",
});

const { text } = await generateText({
  model: google("gemini-2.5-flash"),
  prompt: "Explain quantum computing in one paragraph.",
});

console.log(text);
```

That's it. Streaming, function calling, and all other Vercel AI SDK features work out of the box.
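In real projects you'll likely want to load the key from the environment rather than hardcoding it. A minimal sketch, assuming an environment variable named `TYPOMONSTER_API_KEY` (the variable name is our convention, not one mandated by the SDK):

```typescript
// Hedged sketch: read the TypoMonster key from the environment instead of
// hardcoding it. TYPOMONSTER_API_KEY is an assumed variable name.
function loadTypoMonsterKey(env: Record<string, string | undefined>): string {
  const key = env.TYPOMONSTER_API_KEY;
  // Keys in this guide use a "tmk_" prefix, so fail fast on anything else.
  if (!key || !key.startsWith("tmk_")) {
    throw new Error("Set TYPOMONSTER_API_KEY to a key from the API Keys page");
  }
  return key;
}
```

You would then pass the result to the SDK, e.g. `createProxyGoogle({ apiKey: loadTypoMonsterKey(process.env) })`.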
Streaming Example
```typescript
import { createProxyGoogle } from "@ai-proxy/google";
import { streamText } from "ai";

const google = createProxyGoogle({
  apiKey: "tmk_your_api_key_here",
});

const result = streamText({
  model: google("gemini-2.5-flash"),
  prompt: "Write a short poem about coding.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

Try It Out
Head to the Playground to test your API key with a live chat interface — no code required.
What's Next
- Google Gemini provider — full reference with all models, streaming, reasoning, and structured output
- API Reference — proxy SDK configuration and options
- OpenAI and Anthropic support are coming soon