Aider is the AI pair programmer that works in your terminal. It understands your entire codebase and makes real changes to your files. But typing detailed prompts slows you down. With WhisperTyping, you speak to Aider naturally - 4x faster than typing. Perfect for vibe coding workflows.
What is Aider?
Aider is an open-source AI coding assistant that runs in your terminal. It can see your entire git repository, understands code context, and makes direct edits to your files. It works with GPT-4, Claude, and other LLMs. Think of it as having an AI pair programmer who can actually write and commit code.
Always Ready When You Are
Here's what makes WhisperTyping essential: your hotkey is always available, no matter what you're doing. Reviewing Aider's changes, testing your app, reading the diff—hit your hotkey and start speaking. Your next prompt is ready before you switch back to the terminal.
See something wrong in the diff? Hit the hotkey: "Actually, use a different approach for the error handling." By the time you're back in Aider, your thought is captured and ready to send.
Double-Tap to Send
The feature developers love most: double-tap your hotkey to automatically press Enter. Dictate your prompt and send it to Aider in one motion - completely hands-free.
Single tap starts recording. Double-tap stops, transcribes, and sends. No reaching for the keyboard. No breaking your flow.
Custom Vocabulary
Speech recognition can struggle with technical terms. WhisperTyping lets you add your stack to its vocabulary:
- Framework names: React, FastAPI, Django
- Functions and classes: UserService, handleAuth, validateInput
- Your project's specific names and conventions
When you say "update the UserService class", it knows exactly what you mean.
Screen-Aware Transcription
WhisperTyping reads your screen using OCR. When you're looking at code or Aider's output, it sees the same function names, error messages, and variables you do - and uses them to transcribe accurately.
Perfect for Architect Mode
Aider's architect mode is ideal for voice input. You describe high-level changes, and Aider plans and implements them:
- "Add authentication to all API endpoints using JWT tokens"
- "Refactor the database layer to use the repository pattern"
- "Create a caching layer for the user queries"
Speaking these detailed architectural instructions is much faster than typing them.
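A session like this might look as follows. This is a sketch, not a full reference: the model names are examples, and the file paths are placeholders for your own project.

```shell
# Launch Aider in architect mode: one model plans the change,
# a second (editor) model applies the edits.
# Model names below are examples - use whatever your setup supports.
aider --architect --model o1-preview --editor-model gpt-4o

# Inside the session, add the files Aider should work on
# (paths are hypothetical):
#   > /add api/routes.py api/auth.py

# Then speak your prompt with WhisperTyping and double-tap to send:
#   > Add authentication to all API endpoints using JWT tokens
```

From there, Aider proposes a plan, makes the edits, and commits them - you review the diff and dictate your next instruction.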
Why Voice for Aider?
Aider excels with detailed, contextual prompts. Voice makes it effortless to provide that context:
- Explain bugs conversationally: "The tests are failing because the mock isn't returning the right format"
- Describe features naturally: "Add a webhook handler that processes Stripe events"
- Give implementation guidance: "Use dependency injection and make sure it's testable"