Case Study · shipped

Better Chat

One interface for every major AI model.

A modern AI chat application supporting OpenAI, Anthropic Claude, Google Gemini, and Groq — with real-time streaming, file uploads, vision models, and encrypted API key management.

Next.js · TypeScript · NextAuth.js · PostgreSQL · Multi-provider AI
[Screenshot: Better Chat interface]

The Problem

Every major AI lab ships its own chat interface. Switching between GPT-4o, Claude, and Gemini means logging into three separate products, each with different UX conventions, no shared history, and no way to compare responses side by side.

For developers who want to test different models — or who just want one consistent interface they can actually configure — the fragmented ecosystem is friction that shouldn’t exist.

Free models from Groq (DeepSeek R1, Llama 4) are genuinely capable, but they’re buried inside a dashboard not designed for casual conversation. Better Chat surfaces them alongside paid models with zero extra setup.

The Solution

Better Chat is a single interface for all of them. OpenAI, Anthropic, Google, and Groq plug in through their respective SDKs. Users bring their own API keys — stored encrypted in the database — or use the free Groq models without any key at all.

Real-time streaming, file uploads with vision support, markdown and LaTeX rendering, and auto-generated chat titles make it feel like a polished product rather than a personal script.

Supported Models

| Provider  | Models                                     |
| --------- | ------------------------------------------ |
| OpenAI    | GPT-4o, GPT-4o Mini                        |
| Anthropic | Claude Opus 4, Claude Sonnet 4             |
| Google    | Gemini 2.5 Pro, Gemini 2.5 Flash           |
| Groq      | DeepSeek R1, Llama 4 (free, no key needed) |
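A setup like this is naturally driven by a model registry. The sketch below is hypothetical (the type and field names are illustrative, not Better Chat's actual code), but it shows how the table above could be encoded so the UI knows which models need a key and which support vision:

```typescript
// Hypothetical model registry. Provider and model ids are illustrative;
// the real app's identifiers may differ.
type Provider = "openai" | "anthropic" | "google" | "groq";

interface ModelInfo {
  id: string;           // identifier sent to the provider's API
  label: string;        // name shown in the model dropdown
  requiresKey: boolean; // does the user need to configure an API key?
  supportsVision: boolean;
}

const MODELS: Record<Provider, ModelInfo[]> = {
  openai: [
    { id: "gpt-4o", label: "GPT-4o", requiresKey: true, supportsVision: true },
    { id: "gpt-4o-mini", label: "GPT-4o Mini", requiresKey: true, supportsVision: true },
  ],
  anthropic: [
    { id: "claude-opus-4", label: "Claude Opus 4", requiresKey: true, supportsVision: true },
    { id: "claude-sonnet-4", label: "Claude Sonnet 4", requiresKey: true, supportsVision: true },
  ],
  google: [
    { id: "gemini-2.5-pro", label: "Gemini 2.5 Pro", requiresKey: true, supportsVision: true },
    { id: "gemini-2.5-flash", label: "Gemini 2.5 Flash", requiresKey: true, supportsVision: true },
  ],
  groq: [
    { id: "deepseek-r1", label: "DeepSeek R1", requiresKey: false, supportsVision: false },
    { id: "llama-4", label: "Llama 4", requiresKey: false, supportsVision: false },
  ],
};

// Models usable with zero setup (Groq's free tier).
const freeModels = Object.values(MODELS).flat().filter((m) => !m.requiresKey);
```

Keeping this in one place means the model dropdown, the key-requirement check, and the vision gating all read from the same source of truth.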

Key Features

Streaming responses — AI output is streamed token by token so the interface feels responsive even on slow connections or long generations.
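The mechanics are roughly this: the provider SDK yields text chunks, and the route handler wraps them in a `ReadableStream` the browser can consume incrementally. A minimal sketch, with a stand-in generator where the real code would call a vendor SDK:

```typescript
// Stand-in for a provider SDK's streaming call
// (e.g. `openai.chat.completions.create({ stream: true, ... })`).
async function* fakeProviderStream(): AsyncGenerator<string> {
  for (const token of ["Hello", ", ", "world", "!"]) yield token;
}

// Wrap an async iterable of text chunks in a web ReadableStream.
// In a Next.js route handler you would `return new Response(stream)`.
function toReadableStream(chunks: AsyncGenerator<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await chunks.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(value));
    },
  });
}

// Client side: read chunks as they arrive and append to the UI.
async function collect(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let out = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    out += decoder.decode(value, { stream: true });
  }
  return out;
}
```

The pull-based source means backpressure is handled for free: tokens are only requested from the provider as fast as the client consumes them.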

File uploads — Images, PDFs, text files, and code can be attached to any message. Vision-capable models receive the file as context automatically. UploadThing handles storage.
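For vision models, "receive the file as context" typically means attaching the uploaded file's URL as a content part on the user message. A hedged sketch using OpenAI's multi-part message shape (the helper name is invented; the URL would come back from UploadThing after storage):

```typescript
// OpenAI-style multi-part user message: text plus zero or more image parts.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

interface VisionMessage {
  role: "user";
  content: ContentPart[];
}

// Hypothetical helper: combine the user's text with uploaded file URLs.
function buildVisionMessage(text: string, fileUrls: string[]): VisionMessage {
  return {
    role: "user",
    content: [
      { type: "text", text },
      ...fileUrls.map((url): ContentPart => ({ type: "image_url", image_url: { url } })),
    ],
  };
}
```

Anthropic and Google use different part shapes for images, so in practice each provider module would translate this into its own format.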

Thinking models — Reasoning-capable models that expose their chain of thought are supported and rendered distinctly from the final response.

Encrypted API key storage — User API keys are encrypted at rest with a server-side key before being written to the database. They’re decrypted only at request time, never returned to the client.
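One standard way to implement this in Node is AES-256-GCM with a per-record nonce, the server secret coming from an environment variable. The sketch below shows that shape; Better Chat's exact scheme may differ:

```typescript
// Sketch of encrypt-on-write / decrypt-at-request-time for API keys,
// using AES-256-GCM from Node's crypto module. Assumes a 32-byte
// server-side secret (in production: derived from an env var, never committed).
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const SECRET = randomBytes(32); // stand-in for process.env.ENCRYPTION_KEY

function encryptApiKey(plaintext: string): string {
  const iv = randomBytes(12); // unique nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", SECRET, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Pack iv (12 B) + auth tag (16 B) + ciphertext into one base64 column.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString("base64");
}

function decryptApiKey(stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", SECRET, iv);
  decipher.setAuthTag(tag); // GCM authenticates: tampering throws on final()
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM's auth tag gives integrity for free: a leaked database dump can't be tampered with or decrypted without the server-side secret.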

Chat management — Chats are auto-titled based on the first message, can be starred, renamed, and deleted. Usage stats are tracked per-provider.
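Auto-titling can be done by asking a cheap model to summarize the first message, with a deterministic truncation as the fallback. The fallback half is simple enough to sketch (the function name is hypothetical):

```typescript
// Hypothetical fallback titler: collapse whitespace and truncate the first
// message. A real implementation would likely prefer a model-generated summary.
function deriveChatTitle(firstMessage: string, maxLen = 40): string {
  const oneLine = firstMessage.replace(/\s+/g, " ").trim();
  if (oneLine.length <= maxLen) return oneLine;
  return oneLine.slice(0, maxLen - 1).trimEnd() + "…";
}
```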

Architecture

better-chat/
├── app/
│   ├── (main)/
│   │   ├── chat/         # Chat interface
│   │   └── settings/     # API key and profile management
│   └── api/
│       ├── auth/         # NextAuth.js endpoints
│       ├── chats/        # Chat CRUD
│       ├── keys/         # Encrypted key management
│       └── files/        # UploadThing integration
├── lib/
│   ├── ai/               # Provider integrations (OpenAI, Anthropic, Google, Groq)
│   └── services/         # Business logic
└── prisma/               # Schema + migrations

Each AI provider lives in its own module under lib/ai/, exporting a common streaming interface. Adding a new provider is a self-contained change.
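The source doesn't show the interface itself, but a common streaming contract for provider modules plausibly looks like this (names are illustrative):

```typescript
// Sketch of the shared contract each module under lib/ai/ might export.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface ChatProvider {
  id: string;
  // Stream the assistant reply as text chunks.
  stream(messages: ChatMessage[], model: string, apiKey?: string): AsyncIterable<string>;
}

// Stub provider showing the contract; a real module would call a vendor SDK.
const echoProvider: ChatProvider = {
  id: "echo",
  async *stream(messages) {
    const last = messages[messages.length - 1];
    for (const word of last.content.split(" ")) yield word + " ";
  },
};

// Registry keyed by provider id. Adding a provider = one new module + one entry here.
const providers: Record<string, ChatProvider> = { [echoProvider.id]: echoProvider };
```

Because the chat route only ever talks to `ChatProvider`, vendor-specific details (auth headers, message formats, stream framing) stay contained inside each module.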

Tech Stack

| Layer        | Tech                                |
| ------------ | ----------------------------------- |
| Framework    | Next.js 15 (App Router), TypeScript |
| Styling      | Tailwind CSS v4, Radix UI           |
| Auth         | NextAuth.js v5                      |
| Database     | PostgreSQL, Prisma                  |
| File uploads | UploadThing                         |
| AI SDKs      | OpenAI, Anthropic, Google AI, Groq  |
| Deployment   | Vercel                              |

Key Decisions

Why bring-your-own API keys instead of proxying through a backend key? A shared backend key means absorbing costs for every user — unsustainable without billing infrastructure. BYOK keeps costs user-side and removes the need for a payment system. Groq’s free tier covers users who don’t want to set up keys at all.

Why encrypt keys in the database rather than storing them only in the client? Storing keys only in the browser (localStorage or cookies) means they’re lost when the user clears storage or switches devices. Server-side encryption lets keys persist across sessions while keeping them inaccessible even if the database is leaked without the encryption key.

Why UploadThing instead of direct S3? UploadThing wraps S3 with a Next.js-native SDK that handles presigned URLs, file type validation, and size limits without custom infrastructure. For a project of this scope the abstraction is worth it.

Why Groq for free models? Groq’s inference hardware is fast enough that free-tier responses feel competitive with paid OpenAI responses. Offering it without an API key lowers the barrier to trying the app significantly.

Outcome

Deployed to Vercel. Supports four AI providers, file uploads, and streaming out of the box. The architecture is provider-agnostic — adding a fifth model is a new module and a dropdown entry.


Designed & built by Ashish Kumar

© 2026 — shipped it. don't ask about the commit history.

Built on Astro. Dressed in Tailwind.