VibeWhisper

Use Cases

Voice-to-text for developers

VS Code · iTerm2 · Cursor · Slack · Xcode

Push-to-talk dictation that works in your IDE, terminal, browser, and every other text field on macOS. Built for the way developers work.

Developer workflows

Voice dictation fits naturally into the way developers already work. Here are the most common use cases.

Dictate AI prompts

Describe code changes, explain bugs, and write detailed prompts for Claude, Copilot, or Cursor — all by voice, directly in your IDE.

AI prompts

"Add a caching layer to the user profile endpoint using Redis with a 5 minute TTL and cache invalidation on profile updates"

Write documentation

Draft READMEs, API documentation, and architecture notes by speaking your thoughts. First drafts in a fraction of the typing time.

Commit messages

"Fix race condition in WebSocket reconnect logic by adding mutex lock and exponential backoff with jitter"

Quick messages

Reply to Slack messages, write PR descriptions, and compose emails without leaving your editor. Hold the key, speak, done.

Code comments

"This function validates the incoming webhook payload against the Stripe signature and rejects requests that don't match"

Capture notes

Jot down meeting notes, design decisions, and ideas by voice into Obsidian, Apple Notes, or any text field.

Slack messages

"Hey team, the deploy is done. I fixed the auth bug and added rate limiting to the login route. Let me know if you see anything odd in staging"

Speak and type — real examples

What it actually sounds like when developers use voice dictation with VibeWhisper.

Feature request prompt

Create a REST endpoint at slash API slash users that supports pagination with cursor-based navigation and returns the total count in the response headers

Bug report for AI assistant

The sidebar component re-renders on every keystroke in the search bar because the parent state is updating. Memoize the sidebar and move the search state into a local hook

Architecture note

We should split the monolith into three services: auth, billing, and notifications. Auth owns the user table, billing talks to Stripe, and notifications handles email and push via a message queue

Pull request description

This PR adds input validation to all public API routes using Zod schemas. It also standardizes error responses to follow the RFC 7807 problem details format

Built on macOS native APIs

Under the hood

1. Hold shortcut: A global keyboard shortcut is registered via a CGEvent tap. When you press and hold the configured key, VibeWhisper activates — no matter which app is focused.
2. Capture audio: AVAudioEngine opens a low-latency microphone stream. Audio is buffered in memory while the key is held. Nothing is written to disk.
3. Transcribe with Whisper: On key release, the audio buffer is sent directly to the OpenAI Whisper API using your own API key. Transcription typically completes in under a second.
4. Inject text: The transcribed text is inserted at the cursor position in the focused text field using the macOS Accessibility API. No clipboard involvement — your clipboard stays untouched.
5. Resume coding: The entire cycle takes about one second. You're back in flow immediately — no context switch, no window management, no app to close.
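The transcription step above can be sketched roughly as follows. The endpoint URL, the `file` and `model` field names, and the `whisper-1` model identifier come from OpenAI's public audio API; the helper names, boundary format, and omitted error handling are illustrative, not VibeWhisper's actual code.

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

/// Builds a multipart/form-data body for the Whisper transcription endpoint.
/// Only the field names ("file", "model") come from OpenAI's API docs;
/// everything else here is a sketch.
func buildMultipartBody(audio: Data, boundary: String) -> Data {
    var body = Data()
    func append(_ s: String) { body.append(s.data(using: .utf8)!) }

    // Audio file part (Whisper accepts wav, mp3, m4a, and similar formats)
    append("--\(boundary)\r\n")
    append("Content-Disposition: form-data; name=\"file\"; filename=\"audio.wav\"\r\n")
    append("Content-Type: audio/wav\r\n\r\n")
    body.append(audio)
    append("\r\n")

    // Model field
    append("--\(boundary)\r\n")
    append("Content-Disposition: form-data; name=\"model\"\r\n\r\n")
    append("whisper-1\r\n")

    append("--\(boundary)--\r\n")
    return body
}

/// Assembles the request; the caller supplies the key loaded from the Keychain.
func makeTranscriptionRequest(audio: Data, apiKey: String) -> URLRequest {
    let boundary = "vibewhisper-\(UUID().uuidString)"
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/audio/transcriptions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
    request.httpBody = buildMultipartBody(audio: audio, boundary: boundary)
    return request
}
```

Sending this request with `URLSession` and parsing the JSON response's `text` field completes the round trip.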
CGEvent tap for global keyboard shortcut registration
AVAudioEngine for low-latency microphone capture
Accessibility API for direct text injection at cursor position
Keychain Services for secure API key storage
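The event-tap registration might look something like this sketch. The CGEvent calls follow Apple's CoreGraphics API, but the `PushToTalk` state machine, the example key code 61 (right Option), and the callback wiring are assumptions for illustration, not VibeWhisper's actual implementation.

```swift
import Foundation
#if canImport(CoreGraphics)
import CoreGraphics
#endif

/// Pure push-to-talk decision logic, kept separate from the event tap so it
/// is easy to test. Key code 61 (right Option) is just an example default.
struct PushToTalk {
    let shortcutKeyCode: Int64
    private(set) var isRecording = false

    init(shortcutKeyCode: Int64) {
        self.shortcutKeyCode = shortcutKeyCode
    }

    /// Returns true when the recording state changed (start or stop).
    mutating func handle(keyCode: Int64, isDown: Bool) -> Bool {
        guard keyCode == shortcutKeyCode, isDown != isRecording else { return false }
        isRecording = isDown
        return true
    }
}

#if canImport(CoreGraphics)
/// Registers a session-wide event tap. Requires the Input Monitoring /
/// Accessibility permission; attaches to the current run loop.
func installTap(_ state: UnsafeMutablePointer<PushToTalk>) {
    let mask = (1 << CGEventType.keyDown.rawValue) | (1 << CGEventType.keyUp.rawValue)
    guard let tap = CGEvent.tapCreate(
        tap: .cgSessionEventTap,
        place: .headInsertEventTap,
        options: .listenOnly,                  // observe only; never swallow keys
        eventsOfInterest: CGEventMask(mask),
        callback: { _, type, event, refcon in
            let state = refcon!.assumingMemoryBound(to: PushToTalk.self)
            let code = event.getIntegerValueField(.keyboardEventKeycode)
            if state.pointee.handle(keyCode: code, isDown: type == .keyDown) {
                // start or stop audio capture here
            }
            return Unmanaged.passUnretained(event)
        },
        userInfo: UnsafeMutableRawPointer(state)
    ) else { return }
    let source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0)
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
    CGEvent.tapEnable(tap: tap, enable: true)
}
#endif
```

Using `.listenOnly` is what keeps the shortcut from interfering with other apps: events are observed, not consumed.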

Works everywhere you type

VibeWhisper uses the macOS Accessibility API for text injection. If an app has a text field, you can speak and type into it.
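Accessibility-based injection can be sketched as below. Writing `kAXSelectedTextAttribute` on the focused element is one common injection technique (an empty selection is the insertion point); the helper names and the transcript-trimming step are assumptions, not VibeWhisper's actual code.

```swift
import Foundation
#if canImport(ApplicationServices)
import ApplicationServices
#endif

/// Whisper transcripts can carry trailing whitespace or a newline; trimming
/// before injection keeps the caret where the user expects. (Assumed
/// behavior, shown for illustration.)
func prepareForInjection(_ transcript: String) -> String {
    transcript.trimmingCharacters(in: .whitespacesAndNewlines)
}

#if canImport(ApplicationServices)
/// Inserts text at the caret of the currently focused UI element.
/// A sketch: real code must handle secure fields and apps with
/// incomplete Accessibility support.
func injectText(_ text: String) -> Bool {
    let systemWide = AXUIElementCreateSystemWide()
    var focused: CFTypeRef?
    guard AXUIElementCopyAttributeValue(systemWide,
              kAXFocusedUIElementAttribute as CFString, &focused) == .success,
          let element = focused else { return false }
    // Replacing the current selection inserts at the caret when nothing
    // is selected — no clipboard involved.
    let status = AXUIElementSetAttributeValue(
        element as! AXUIElement,
        kAXSelectedTextAttribute as CFString,
        prepareForInjection(text) as CFString)
    return status == .success
}
#endif
```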

IDEs & Editors

VS Code · IntelliJ IDEA · Xcode · Neovim · Sublime Text

Terminals

Terminal · iTerm2 · Warp · Alacritty

Communication

Slack · Discord · Microsoft Teams · Telegram

Notes & Docs

Notion · Obsidian · Apple Notes · Confluence

Dev Tools

GitHub · Linear · Jira · Figma

Other

Any app with a text field

Privacy-first architecture

VibeWhisper's servers never see your data. Your voice goes directly from your Mac to OpenAI — nothing in between.

Your API key

Your OpenAI API key is stored in the macOS Keychain — the same secure storage used for passwords and certificates. It never leaves your machine and is never sent to VibeWhisper servers.
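Storing the key as a generic-password Keychain item is the standard approach. The sketch below uses Apple's Security framework; the service and account identifiers are placeholders, not VibeWhisper's real ones.

```swift
import Foundation
#if canImport(Security)
import Security
#endif

/// Placeholder identifiers for illustration; the real app's values may differ.
let keychainService = "com.example.vibewhisper"
let keychainAccount = "openai-api-key"

#if canImport(Security)
/// Stores the key as a generic-password item, replacing any existing one.
func saveAPIKey(_ key: String) -> OSStatus {
    let query: [CFString: Any] = [
        kSecClass: kSecClassGenericPassword,
        kSecAttrService: keychainService,
        kSecAttrAccount: keychainAccount,
    ]
    SecItemDelete(query as CFDictionary)  // ignore "item not found" here
    var attrs = query
    attrs[kSecValueData] = Data(key.utf8)
    return SecItemAdd(attrs as CFDictionary, nil)
}

/// Reads the key back; returns nil if no item is stored.
func loadAPIKey() -> String? {
    let query: [CFString: Any] = [
        kSecClass: kSecClassGenericPassword,
        kSecAttrService: keychainService,
        kSecAttrAccount: keychainAccount,
        kSecReturnData: true,
        kSecMatchLimit: kSecMatchLimitOne,
    ]
    var result: AnyObject?
    guard SecItemCopyMatching(query as CFDictionary, &result) == errSecSuccess,
          let data = result as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}
#endif
```

Keychain items are encrypted at rest and scoped to the app, which is why the key never needs to touch a config file or leave the machine.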

Direct to OpenAI

Audio is sent straight to the OpenAI Whisper API. There is no intermediary server, no proxy, and no data retention on our side. You can verify this in your OpenAI usage dashboard.

Zero telemetry

VibeWhisper collects no analytics on your dictations, no usage telemetry, and no crash reports that include voice data. What you say stays between you and OpenAI.

Developer FAQ

Does VibeWhisper work in terminal apps like iTerm or Warp?

Yes. VibeWhisper injects text via the macOS Accessibility API, which works with terminal emulators including Terminal.app, iTerm2, Warp, Alacritty, and Kitty.

Will it interfere with my IDE keyboard shortcuts?

No. VibeWhisper uses a configurable push-to-talk shortcut. You choose a key or combination that doesn't conflict with your existing bindings. The shortcut only activates while held.

How does it handle code-specific vocabulary?

VibeWhisper uses OpenAI Whisper, which was trained on diverse data including technical content. It handles terms like API, JWT, WebSocket, PostgreSQL, and similar developer vocabulary accurately.

Does it store or log my dictations?

No. Audio is sent directly to OpenAI for transcription and the temporary buffer is discarded immediately. VibeWhisper stores nothing — no audio files, no transcription logs, no telemetry.

What is the latency like?

Typical end-to-end latency is under one second from key release to text appearing. The bottleneck is the OpenAI API round-trip, which depends on your internet connection and audio length.

Voice to text, built for developers

Push-to-talk voice dictation that works in your IDE, terminal, browser, and every text field on macOS. $19 one-time.