Aider is the open-source AI pair programmer that works in your terminal. It understands your entire codebase and makes real changes to your files. But typing detailed prompts slows you down. With WhisperTyping, you speak to Aider naturally - 4x faster than typing. Perfect for vibe coding workflows.
What is Aider?
Aider is an open-source AI coding assistant that runs in your terminal. It can see your entire git repository, understands code context, and makes direct edits to your files. It works with GPT-5, Claude, Gemini, and many other models. Think of it as having an AI pair programmer who can actually write and commit code.
Always Ready When You Are
Here's what makes WhisperTyping essential: your hotkey is always available, no matter what you're doing. Whether you're reviewing Aider's changes, testing your app, or reading a diff - hit your hotkey and start speaking. Your next prompt is ready before you switch back to the terminal.
See something wrong in the diff? Hit the hotkey: "Actually, use a different approach for the error handling." By the time you're back in Aider, your thought is captured and ready to send.
Double-Tap to Send
The feature developers love most: double-tap your hotkey to automatically press Enter. Dictate your prompt and send it to Aider in one motion.
Single tap starts recording. Double-tap stops, transcribes, and sends. No reaching for the keyboard. No breaking your flow.
Combine this with mouse activation and you can control everything with one hand. Use your middle mouse button or map a side button on mice like the Logitech MX Master to trigger WhisperTyping. Click to start recording, speak your prompt, double-click to send. Your other hand stays free for coffee.
Blazing Fast Transcription
Users love WhisperTyping for its snappiness. On a decent internet connection, the median transcription time is just 370 milliseconds. You stop speaking and your text appears almost instantly.
That responsiveness matters when you're pair programming with Aider. There's no awkward pause between finishing your thought and seeing it on screen. It feels like the tool is keeping up with you, not the other way around.
Custom Vocabulary
Common frameworks and libraries are recognized out of the box. Add words that are unique to your world:
- Your project name and internal codenames
- Names of colleagues and collaborators
- Company-specific terms, acronyms, and jargon
- Niche libraries or tools that speech recognition might not know
Screen-Aware Transcription
WhisperTyping reads your screen using OCR. When you're looking at code or Aider's output, it sees the same function names, error messages, and variables you do - and uses them to transcribe accurately.
Perfect for Architect Mode
Aider's architect mode is ideal for voice input. An "architect" model plans the solution while a separate "editor" model translates that into file edits. You describe high-level changes, and Aider plans and implements them:
- "Add authentication to all API endpoints using JWT tokens"
- "Refactor the database layer to use the repository pattern"
- "Create a caching layer for the user queries"
Speaking these detailed architectural instructions is much faster than typing them.
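As a concrete sketch, architect mode can be persisted in .aider.conf.yml, whose keys mirror Aider's CLI flags. The model names below are placeholders - substitute whatever models your provider offers, and verify the option names against aider --help:

```shell
# Sketch: persist architect mode in .aider.conf.yml (keys mirror aider's CLI flags;
# model names are placeholders, not recommendations)
cat > .aider.conf.yml <<'EOF'
architect: true            # planner model proposes the solution
model: gpt-4o              # the "architect" that plans
editor-model: gpt-4o-mini  # the "editor" that applies the file edits
EOF
```

For a one-off session, the equivalent command-line form is aider --architect --model ... --editor-model ... without the config file.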
Why Voice for Aider?
Aider excels with detailed, contextual prompts. Voice makes it effortless to provide that context:
- Explain bugs conversationally: "The tests are failing because the mock isn't returning the right format"
- Describe features naturally: "Add a webhook handler that processes Stripe events"
- Give implementation guidance: "Use dependency injection and make sure it's testable"
Tip: Tell Aider You Use Voice
Create a CONVENTIONS.md file in your project and load it with --read CONVENTIONS.md (or add read: CONVENTIONS.md to your .aider.conf.yml). Include a note that your input comes via voice transcription:
"User input comes via voice dictation. Expect possible transcription errors like homophones, missing punctuation, or misheard words. Interpret intent rather than taking input literally."
Once Aider's model knows to expect voice input, you can stop worrying about transcription accuracy. Just speak naturally, be descriptive, and double-tap to send. No need to review your transcription before sending.
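The setup above can be sketched as a shell snippet. The note text is taken from this article; the launch command is shown as a comment so you can run it when you start your session:

```shell
# Write a CONVENTIONS.md telling the model to expect dictated input
cat > CONVENTIONS.md <<'EOF'
User input comes via voice dictation. Expect possible transcription
errors like homophones, missing punctuation, or misheard words.
Interpret intent rather than taking input literally.
EOF

# Then load it read-only in each session (run manually):
#   aider --read CONVENTIONS.md
```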
Frequently Asked Questions
Can I use speech recognition with Aider?
Yes. WhisperTyping adds speech recognition to Aider on Windows. It runs in the background and types your spoken words directly into Aider's terminal prompt. With a median transcription time of 370 milliseconds, it keeps up with your thinking.
How do I dictate to Aider on Windows?
Install WhisperTyping, set a hotkey (or enable mouse activation), and press the hotkey to start dictating. Your speech is transcribed and typed into Aider's input. Double-tap your hotkey to transcribe and press Enter, sending your prompt in one motion.
Does Aider support voice input?
Aider itself does not have built-in voice input. You need a separate dictation tool. WhisperTyping works with Aider because it types text wherever your cursor is, including terminal prompts. It also offers custom vocabulary for your project-specific terms and screen OCR for accurate technical transcription.
What's the best way to set up Aider for voice dictation?
Create a CONVENTIONS.md file with a note that your input comes via voice transcription, and load it with --read CONVENTIONS.md or add it to your .aider.conf.yml. This tells the model to interpret intent rather than stumble over minor transcription errors. Combined with WhisperTyping's custom vocabulary, you can dictate without reviewing your transcription.