Why I Stopped Using My Keyboard for Claude Code
I spent years getting faster at typing. I learned vim motions. I bought a mechanical keyboard. I optimized my terminal shortcuts.
Then I started dictating to Claude Code and realized I’d been solving the wrong problem.
The bottleneck in AI-assisted development isn’t how fast you can type. It’s how fast you can articulate what you want. And speaking is simply faster than typing for most people—especially when explaining complex logic or describing multi-step changes.
The Setup That Changed Everything
On macOS, dictation is built in. You enable it in System Preferences, assign a keyboard shortcut, and start talking. I trigger it with a double tap of the Function key.
Here’s what a typical session looks like:
- Press the dictation shortcut
- Say: “Add a retry mechanism to the API client with exponential backoff starting at 500 milliseconds and maxing out at 30 seconds”
- Release the shortcut
- Claude starts working
No typing. No fighting with syntax. Just describing what I want in plain English.
The Problem With Raw Dictation
Speech-to-text isn’t perfect. It mishears technical terms constantly. “API” becomes “a pie.” “Kubernetes” becomes “Cooper Nettie’s.” “pytest” becomes “pie test.”
I could manually fix these errors, but that defeats the purpose. I want hands-off automation.
So I built a skill that intercepts my dictation and fixes ambiguities before Claude starts executing.
The Ambiguity-Fixing Skill
Create this file at .claude/skills/fix-ambiguity/prompt.md. The exact wording is up to you; here’s a sketch seeded with the corrections mentioned above:

```markdown
You are correcting a dictated (speech-to-text) request before acting on it.

Rules:
1. Fix obviously misheard technical terms using the corrections below.
2. Preserve the intent exactly; never add or drop requirements.
3. Restate the corrected request in one line, then carry it out.

Common corrections:
- "a pie" -> API
- "Cooper Nettie's" -> Kubernetes
- "pie test" -> pytest

Add your own terms as you notice recurring mishearings.
```
Now when I dictate something garbled, Claude fixes it automatically before starting work.
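For example, an invocation with misheard terms from the list above might look like:

```
/fix-ambiguity add exponential backoff to the a pie client and cover it with pie test
```

Claude reads this as a request to update the API client and add pytest coverage before touching any code.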
Set Up Voice Dictation for Claude Code
Configure macOS dictation and the ambiguity-fixing skill for hands-free Claude Code interaction
Enable macOS Dictation
Open System Preferences (or System Settings on newer macOS versions). Go to Keyboard, then Dictation. Turn it on and set your preferred shortcut—I recommend double-pressing the Function key.
Download the enhanced dictation option if you want offline support with better accuracy.
Create the Skills Directory
If you don’t already have a skills directory, create one:
```shell
mkdir -p .claude/skills/fix-ambiguity
```
This can be in your home directory (~/.claude/skills/) for global availability or in a specific project.
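Concretely, the two placements look like this (my-app is a hypothetical project name):

```
~/.claude/skills/fix-ambiguity/prompt.md             # global, available in every project
~/code/my-app/.claude/skills/fix-ambiguity/prompt.md # scoped to my-app only
```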
Add the Ambiguity-Fixing Skill
Create .claude/skills/fix-ambiguity/prompt.md with the prompt template shown above. Customize the common corrections list based on your own technical vocabulary and the terms you use frequently.

Test the Workflow
Dictate a rough command and invoke /fix-ambiguity before your dictated text. Verify that Claude correctly interprets your intent.

Making It Even Faster With Shortcuts
I experimented with Apple Shortcuts to chain the entire flow: dictate, transcribe, send to Claude for correction, then execute. The automation looked something like:
- Shortcut captures voice input via dictation
- Passes transcribed text to a shell script
- Shell script prepends the fix-ambiguity skill invocation
- Result goes directly to Claude Code
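A minimal sketch of step 3’s glue, assuming the Shortcut passes the transcription as the first argument (the actual hand-off to Claude Code is shown as a comment so the sketch stays self-contained):

```shell
#!/bin/sh
# Hypothetical glue for the Shortcuts chain above: take the raw
# transcription and prepend the fix-ambiguity skill invocation.
compose_prompt() {
  printf '/fix-ambiguity %s' "$1"
}

# The Shortcut would then hand the result to Claude Code, e.g.:
#   claude -p "$(compose_prompt "$TRANSCRIPTION")"
compose_prompt "add retries to the a pie client"
```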
It worked, but honestly the simpler approach is better. Just dictate into Claude Code directly and invoke the skill when needed. Over-automation creates its own maintenance burden.
When Voice Input Works Best
Voice dictation shines for:
- High-level task descriptions: “Refactor the authentication module to use JWT instead of session cookies”
- Multi-step instructions: “First update the database schema, then modify the model, then adjust the API endpoints to match”
- Explaining bugs: “The login form submits but the session isn’t being persisted, I think it’s something with the cookie domain”
- Code review requests: “Look at the payment processing function and check if there are any race conditions”
It’s less useful for:
- Dictating exact code syntax (though Claude handles that anyway)
- Complex regex patterns or mathematical formulas
- Anything requiring precise special characters
The Habit Shift
The hardest part wasn’t the technical setup. It was breaking the habit of reaching for the keyboard.
For the first week, I had to consciously stop myself from typing. My fingers would start moving automatically, and I’d catch myself mid-keystroke. But once voice input became muscle memory, going back to typing felt painfully slow.
There’s also a thinking benefit. When you speak your prompts, you naturally structure them more clearly. You can’t type half a thought and then backspace—you have to articulate the whole idea. This leads to better prompts and better Claude output.
FAQ
Does this work with Windows or Linux?
What about code that requires specific syntax?
Describe the behavior and let Claude write the exact syntax. As noted above, dictating precise symbols, regex, or formulas is where voice input falls short.
Is there latency from the ambiguity-fixing step?
What if I'm in an open office and can't speak out loud?
How do I add project-specific technical terms to the correction list?
Edit the prompt.md file and add your custom terms to the corrections section. For project-wide terms, put the skill in the project’s .claude/skills/ directory. For personal terms, use your home directory’s global skills folder.

Conclusion
Key Takeaways
- Voice dictation removes the typing bottleneck from AI-assisted development
- macOS dictation works out of the box with Claude Code via a simple keyboard shortcut
- A custom skill fixes common speech-to-text errors before Claude starts working
- Speaking prompts forces clearer thinking and better-structured requests
- The workflow works for high-level tasks but not for dictating exact syntax
- Breaking the keyboard habit takes about a week of conscious effort
- Combining dictation with the fix-ambiguity skill delivers a substantial productivity boost
The fastest keyboard in the world is still slower than speaking. Once you’ve trained yourself to describe what you want instead of typing it, you won’t go back. The setup takes ten minutes. The productivity gain lasts forever.