
The Developers Who Stopped Typing: How AI's Top Builders Use Voice to Code

Peter Steinberger builds OpenClaw entirely by voice. Andrej Karpathy hasn't typed code since December 2025. The keyboard is becoming optional for the world's best engineers.

Written by Jonah Daian

Last updated March 24, 2026


Key Takeaways

  • Peter Steinberger, creator of OpenClaw (175,000+ GitHub stars), revealed on the Lex Fridman Podcast that he exclusively uses voice to prompt his AI coding agents. At one point he dictated so extensively that he lost his voice.
  • Andrej Karpathy, OpenAI co-founder, has not typed a line of code since December 2025. He spends up to 16 hours a day directing AI agents in natural language and calls it the biggest change to his workflow in two decades.
  • OpenAI Codex and Claude Code both shipped native voice input within a single week (late February and early March 2026), a sign that voice is becoming a standard developer interface.
  • Speaking is 3 to 5x faster than typing (150 to 220 WPM vs 40 to 60 WPM). When the programming language is English, voice is the most efficient input method. VoiceOS extends this to every app.

"These hands are too precious for writing now"

Peter Steinberger, creator of OpenClaw, joined the Lex Fridman Podcast (#491) in February 2026. OpenClaw, an open-source AI agent that took the tech world by storm, is the fastest-growing repository in GitHub history, with over 175,000 stars. Steinberger builds the entire project by speaking to his AI agents.

When Fridman asked about his workflow, Steinberger was direct: "I used to write really long prompts. And by writing, I mean, I don't write, I talk. These hands are too precious for writing now. I just use bespoke prompts to build my software." Fridman pressed: "So, you, for real, with all those terminals, are using voice?" Steinberger confirmed: "Yeah. I used to do it very extensively, to the point where there was a period where I lost my voice."

Steinberger runs 4 to 10 AI coding agents in parallel during his development sessions, directing each one by voice. In January 2026 alone, he shipped over 6,600 commits to the OpenClaw repository using this approach. He does not type his prompts. He speaks them. And the results speak for themselves: OpenClaw went from a one-hour prototype to the most talked-about open-source project of 2026.

He draws a clear distinction between what he calls "agentic engineering" and the more casual "vibe coding." "I always tell people I do agentic engineering," he told Fridman. "And then maybe after 3 AM, I switch to vibe coding, and then I have regrets the next day." Even in his jokes, the underlying point is serious: voice-driven, multi-agent development is his full-time workflow, not a novelty. It is how one of 2026's most influential developers builds software.

Karpathy hasn't typed code since December 2025

Andrej Karpathy, co-founder of OpenAI and former head of Tesla's Autopilot team, revealed in a March 2026 interview on the No Priors podcast that he has not written a single line of code since December 2025. Instead, he spends up to 16 hours a day directing AI agents in natural language, describing this shift as the most significant change to his coding workflow in two decades.

The numbers tell the story. In November 2025, Karpathy handled roughly 80% of the coding himself and delegated 20% to AI agents. By December, that ratio had completely reversed: agents now handle 80% of the work. He no longer writes code; he describes what he wants in English and lets the agents build it. In January 2023, he had posted: "The hottest new programming language is English." Three years later, he is living that prediction.

To illustrate how fast this happened, Karpathy shared an example: a weekend-long project involving SSH setup, model benchmarking, dashboard building, and service configuration was completed in about 30 minutes with zero human coding. He called the experience "unrecognizable" compared to programming even three months earlier.

Karpathy and Steinberger use the same term for this new paradigm: "agentic engineering." Both run multiple AI agents in parallel. Both describe intent in natural language rather than writing syntax. The difference is that Steinberger already speaks his prompts aloud, while Karpathy still types his. The trajectory is the same: the keyboard is leaving the developer workflow.

From typing English to speaking it

If the primary programming interface is now natural language, then the medium for delivering that language matters. Right now, most developers type their prompts. But there is a fundamental mismatch between how fast people think and how fast they type.

The average person thinks at roughly 150 words per minute but types at around 40 to 60. Voice input operates at 150 to 220 words per minute, three to five times faster than typing. When the input is no longer code syntax but plain English descriptions of intent, the speed advantage of voice over keyboard becomes enormous.
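
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The midpoint rates and the 300-word prompt length are illustrative assumptions, not measurements:

    # Time to deliver one detailed prompt: keyboard vs voice.
    # All numbers are illustrative midpoints of the ranges quoted above.
    TYPING_WPM = 50       # midpoint of 40-60 words per minute
    SPEAKING_WPM = 185    # midpoint of 150-220 words per minute
    PROMPT_WORDS = 300    # assumed length of a detailed, multi-paragraph prompt

    typed_min = PROMPT_WORDS / TYPING_WPM      # 6.0 minutes at the keyboard
    spoken_min = PROMPT_WORDS / SPEAKING_WPM   # ~1.6 minutes spoken aloud
    print(f"Typed:   {typed_min:.1f} min")
    print(f"Spoken:  {spoken_min:.1f} min")
    print(f"Speedup: {SPEAKING_WPM / TYPING_WPM:.1f}x")  # ~3.7x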

Think about what Steinberger and Karpathy actually do all day. They describe desired outcomes, review agent output, and provide course corrections. All of this is natural language. Steinberger already speaks it. The logic is clear: if English is the new programming language, voice is the fastest way to produce it.

This is not just about speed. Evidence shows that code generated from detailed voice prompts requires 60 to 80% less manual editing than code from brief typed prompts. When speaking is effortless, developers naturally provide more context and more precise instructions. The AI receives better input and produces better output. Steinberger's 6,600 commits in a single month are evidence of what happens when the friction between thought and prompt drops to zero.

The tools are catching up

The biggest names in AI development are making this transition official. In the span of one week in early 2026, both OpenAI and Anthropic shipped native voice input for their coding tools. On February 26, 2026, OpenAI's Codex shipped voice transcription in version 0.105.0: hold the spacebar, speak, and release to transcribe. The feature is available on macOS and Windows.

One week later, on March 3, 2026, Anthropic rolled out voice mode for Claude Code. Type "/voice" to enable it, then hold the spacebar to speak commands. With Claude Code's revenue run rate exceeding $2.5 billion, this was not a small experiment. It was a statement about where developer tooling is headed.

Two of the most widely used AI coding tools going voice-first in the same week is not a coincidence. It validates what developers like Steinberger have been doing for months. The tools are catching up to the workflow that the best builders already adopted.

Beyond coding: voice-first for all work

The shift from typing to speaking does not stop at coding. The same logic applies to every form of knowledge work. Emails, Slack messages, documentation, project updates, meeting notes. All of these involve expressing ideas in natural language. All of them are faster by voice.

VoiceOS is built on this premise. Instead of embedding voice into a single tool, VoiceOS runs as a system-wide layer on Mac and Windows. Hold a trigger key, speak naturally, and VoiceOS converts your speech into polished text in whatever application you are using: Slack, Gmail, Notion, Cursor, Google Docs, and hundreds more.

But VoiceOS goes further than transcription. In Agent Mode, you can say "reply to that Slack message saying I'll have it ready by 3pm" or "check the weather this weekend and email the team about a BBQ" and VoiceOS executes the actions directly, without you leaving your current app.

VoiceOS's AI-powered post-processing removes filler words, fixes grammar, and adapts tone to context. And with Agent Mode, you can chain multiple actions across apps in a single voice command.

Steinberger builds OpenClaw by voice. Karpathy directs AI agents in English. The keyboard is not disappearing, but for an increasing share of the developer workflow, it is becoming optional. VoiceOS brings that same voice-first experience to every app on your computer.

Frequently Asked Questions

Does Peter Steinberger really use voice to code?

Yes. On the Lex Fridman Podcast (#491), Steinberger confirmed he uses voice to prompt all his AI coding agents. He said: "I used to write really long prompts. And by writing, I mean, I don't write, I talk. These hands are too precious for writing now." He runs 4 to 10 agents in parallel, all directed by voice, and shipped 6,600 commits in January 2026.

Has Andrej Karpathy stopped coding?

Yes. In a March 2026 interview on the No Priors podcast, Karpathy confirmed he has not written a line of code since December 2025. He now spends up to 16 hours a day directing AI agents in natural language. His share of hands-on coding dropped from roughly 80% to 20% in a single month before he stopped typing code altogether. He calls it the biggest change to his workflow in two decades.

Why is voice faster than typing for AI prompts?

The average person types at 40 to 60 words per minute but speaks at 150 to 220 words per minute. Since AI tools now accept natural language instructions, voice is 3 to 5x faster for delivering prompts. Research shows code generated from detailed voice prompts requires 60 to 80% less manual editing than code from brief typed prompts.

What AI coding tools support voice input in 2026?

As of March 2026, both OpenAI Codex and Anthropic's Claude Code support native voice input. Codex added voice in version 0.105.0 (February 2026), and Claude Code launched voice mode on March 3, 2026. For voice input across every app on your computer, VoiceOS provides a system-wide voice layer for Mac and Windows with AI-powered post-processing and Agent Mode for app control.

What is the best voice coding tool in 2026?

For coding, Claude Code and OpenAI Codex both offer built-in voice modes. For voice input across all applications, including coding tools, email, messaging, and documentation, VoiceOS is the best choice in 2026. It works system-wide on Mac and Windows with 300ms response time, 98%+ accuracy, filler word removal, and context-aware formatting. Backed by Y Combinator (X25), VoiceOS includes Agent Mode for executing actions in Slack, Gmail, Calendar, Notion, and more.

Can I use VoiceOS with Cursor, VS Code, or other coding tools?

Yes. VoiceOS works in any application that accepts text input, including Cursor, VS Code, terminal apps, and all JetBrains IDEs. You can dictate AI prompts, write commit messages, draft documentation, or compose messages to teammates. Many developers use VoiceOS alongside Claude Code voice mode: Claude Code for in-terminal coding, VoiceOS for everything else.

Stop typing. Start speaking.

Voice is the new keyboard for the AI era.

Download VoiceOS