My AI Tools Stack in 2026

The tools I actually use every day to build software, publish content, and automate the boring parts. What stays, what got replaced, and what it costs.

Photo by Markus Spiske on Unsplash

I've been refining my AI tools stack since I originally wrote about how I'm using AI. Tools have come and gone. What's left is the set I actually use every day; this post covers what each tool does for me and what it costs.

tl;dr

I spend about $313/month on AI tooling. That covers how I write code, publish content, edit screencasts, deploy apps, and generate images. A handful of free tools fill in the gaps. The biggest line item is Claude, and it earns its keep.

Claude ($200/mo, Max plan)

Claude is the foundation of my stack. I started using Claude Code in early June 2025, and within a few months it replaced most of my other development tools. I pay for one Max plan at $200/month, which includes Claude Code, Claude Desktop, Claude Code on the Web, and the Chrome extension. Here's how I use each one.

Claude Code

This is where most of my development happens. Claude Code is a terminal-based agentic coding tool from Anthropic. While Anthropic offers IDE integrations, I use Claude Code in the terminal. It pulls context from your codebase, executes commands, and integrates with your existing workflow. If you live in the terminal, this feels natural.

I've built a custom workflow on top of it over the past year: 30+ custom slash commands, hooks, and project-specific instructions. For example, I run /xsecurity to scan for vulnerabilities before every commit, /xtdd to kick off a test-driven development cycle, and /xpipeline to validate CI/CD configuration. Hooks automatically analyze every file edit against code quality thresholds (I wrote about that in Claude Code Hooks: Code Quality Guardrails). CLAUDE.md files give Claude project-specific context that travels with the repo.
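To make the hooks piece concrete, here's an illustrative PostToolUse entry for a .claude/settings.json file that runs a quality script after every file edit. The structure follows Anthropic's documented hooks format, but the matcher and script path are hypothetical placeholders, not my actual setup:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./scripts/quality-check.sh" }
        ]
      }
    ]
  }
}
```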

I also use Cowork, which uses the same agentic architecture as Claude Code but is designed for multi-step background tasks. I tend to use it for things like file organization, renaming, and cleanup work that I don't want interrupting my main session. For example, I had Cowork reorganize a project's directory structure while I built a new feature in the main Claude Code session. When I checked back, it had made all the changes and committed them.

For the kind of work I do (full-stack web apps, publishing pipelines, automation scripts), Claude Code handles essentially 100% of my coding now. Six months ago I would have said 80%. The jump came from two things: the models got better at holding context across long sessions, and my custom commands matured to the point where entire workflows run end to end without me writing code by hand. I build, test, deploy, and create publishing pipelines for external platforms through it. That said, it took months of accumulated customization to get here. Out of the box it's useful; with investment, tasks that used to take me hours now take minutes. Month one feels like a better terminal assistant. By month six, it's running your workflow.

That progression from "useful assistant" to "running your workflow" comes from building rules: CLAUDE.md files, commands, hooks, quality gates. I put together a free AI Rules Maturity Kit to help you audit where you are and figure out what to build next.

It's not perfect. Claude still occasionally generates code that looks right but breaks in edge cases, and long sessions can drift off track if you don't steer. I've learned to verify behavior rather than trust it blindly.

I wrote a detailed post about my setup here: Claude Code: Advanced Tips Using Commands, Configuration, and Hooks

Claude Desktop, Web, and Chrome Extension

Beyond the terminal, the Max plan includes three other ways to use Claude. Claude Desktop is a standalone app with three modes (Chat, Cowork, and Code) that I use for brainstorming, drafting outlines, and quick research. Claude Code on the Web (claude.ai/code) runs coding tasks in a sandboxed browser environment, useful when I'm away from my main machine. The Chrome extension puts Claude inline in the browser, where I use it to summarize articles and ask questions about page content.

The hosted versions (Desktop and Web) route git operations through a proxy that restricts pushes to claude/* branches, which means you're locked into a PR workflow. That's a tradeoff I accept for the convenience, though I have opinions about it that I'll save for another post.

I still default to Claude Code in my terminal (iTerm2) for most development.

Other coding tools

ChatGPT ($20/mo, Plus plan)

I keep a ChatGPT subscription for quick, lightweight tasks: looking up API syntax, summarizing articles, format conversions. If it doesn't require codebase context, ChatGPT handles it.

Kiro (used briefly)

I used AWS Kiro for a couple of months alongside Claude Code. It's a spec-driven IDE that generates requirements, design docs, and task lists before writing code. The structured approach is interesting, but I found Claude Code's flexibility and terminal-native workflow fit better with how I actually build. Kiro may appeal more to teams that want guardrails around AI-generated code.

Cursor, Windsurf, GitHub Copilot, Codex (occasional)

I've used all of these for customer work, but they aren't part of my daily workflow. Claude Code replaced them for my own projects. My muscle memory is in the terminal now.

Beads (free)

Beads is a git-native issue tracker that stores issues directly in your repo using Dolt (a version-controlled database). You can find it at github.com/steveyegge/beads. I use it constantly alongside Claude Code for task management. A typical cycle looks like this: I run bd ready to see what needs work, pick a task, run bd update ID --status in_progress, build the feature with Claude Code, then bd close ID --reason "Implemented" and commit with feat(publish): add pagination [bd-a1b2].

Before Beads, I was using the GitHub CLI to create and manage issues. It worked, but every operation required a round trip to GitHub's servers. I know others who use Linear or @claude directly in GitHub issues for similar workflows. Beads was a welcome change for me because everything runs locally. No network calls, no context switching to a browser-based issue tracker. It's fast.

The combination of Beads and Claude Code creates a tight loop: the AI agent knows what tasks exist, can update their status, and references them in commits.

Voice and transcription

Wispr Flow ($12/mo)

Wispr Flow replaced my keyboard for a surprising amount of work. It's a voice-to-text tool for macOS. I talk, it types. It's context-aware: if I'm in a Markdown file, it formats accordingly. If I'm writing a Slack message, it adjusts the tone and structure.

I use it for dictating entire post sections, composing messages, and describing what I want Claude Code to build. For prose and conversational text, the accuracy is strong enough that I rarely correct it. Technical jargon and code-specific terms still need occasional fixes. The output quality is close enough to typed drafts that the editing pass isn't materially different.

OpenAI Whisper API (~$1/mo)

I use the Whisper API for transcribing screencasts. The workflow: record a screencast, use FFmpeg to extract the audio track, send it to Whisper, and get a transcript I can edit into a post or tutorial. It's a lightweight screencast-to-content pipeline without paying for a full video editing suite.
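Here's a minimal Python sketch of that pipeline, assuming FFmpeg is on the PATH and the openai package is configured with an API key. The FFmpeg flags and file names are illustrative choices, not the exact ones from my script:

```python
import subprocess

def ffmpeg_extract_cmd(video_path: str, audio_path: str) -> list[str]:
    # -vn drops the video stream; -q:a 4 picks a reasonable VBR audio quality
    return ["ffmpeg", "-i", video_path, "-vn", "-q:a", "4", audio_path]

def extract_audio(video_path: str, audio_path: str) -> None:
    # Requires FFmpeg on the PATH
    subprocess.run(ffmpeg_extract_cmd(video_path, audio_path), check=True)

def transcribe(audio_path: str) -> str:
    # Requires the openai package and OPENAI_API_KEY in the environment
    from openai import OpenAI
    client = OpenAI()
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text
```

From there the transcript drops straight into a Markdown draft for editing.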

A note on hardware

I use a Blue Snowball USB microphone for recording. It's inexpensive and sounds good enough for screencasts and voice input. One thing I learned the hard way: AirPods don't work well for voice-to-text. The audio quality is fine for calls and recording, but for transcription accuracy you want a real microphone.

Content and publishing

Ghost ($15/mo)

paulmduvall.com runs on Ghost. I write posts in Markdown, run them through a custom Node.js publishing script that generates a Ghost-compatible import file, and upload. The script handles frontmatter, featured images (via Unsplash API), and HTML-to-Lexical conversion.

Before this, I'd manually find an image, convert Markdown to HTML, paste it into Ghost's editor, and set metadata. Now the publishing pipeline lives in my repo. One command: ./publish.sh <slug>. It pulls a featured image from Unsplash, converts the Markdown to Ghost's format, and packages everything into a ZIP for import.
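My actual script is Node.js; as a sketch of the same flow, here's a minimal Python version that splits frontmatter and packages the post for import. The Unsplash fetch and HTML-to-Lexical conversion are stubbed out, and the frontmatter key names are assumptions:

```python
import zipfile
from pathlib import Path

def split_frontmatter(markdown: str) -> tuple[dict, str]:
    # Assumes frontmatter delimited by --- lines with simple key: value pairs
    if not markdown.startswith("---"):
        return {}, markdown
    _, header, body = markdown.split("---", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.lstrip()

def package_for_ghost(post_path: Path, out_zip: Path) -> dict:
    meta, body = split_frontmatter(post_path.read_text())
    # Real pipeline: fetch a featured image from Unsplash and convert
    # the HTML to Ghost's Lexical format (both omitted in this sketch)
    with zipfile.ZipFile(out_zip, "w") as zf:
        zf.writestr("post.md", body)
    return meta
```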

Google One AI Pro with Nano Banana ($19.99/mo)

Google's Nano Banana (through Google AI Studio, included with Google One AI Pro) is my primary image generation tool now. I use it for post graphics, social media visuals, and diagrams. Nano Banana gets usable results on the first or second try, especially for technical illustrations and post headers. I stopped experimenting with alternatives.

NotebookLM (free, watching)

I've experimented with Google's NotebookLM for generating podcast-style audio and video summaries from documents. I've used it to turn post drafts into audio walkthroughs, though I haven't published anything with it yet. It's not part of my daily workflow, but I'm keeping an eye on it.

Deployment and infrastructure

Vercel ($20/mo, Pro plan)

Vercel has become my default deployment platform. I use it for Next.js frontends, serverless API endpoints, and webhook handlers. One of my projects, StudyBuddy (an AI-powered study tool for students), runs entirely on Vercel: inbound email webhooks, serverless functions for processing, and Vercel Blob for file storage.

For my use case (serverless functions, static frontends, webhooks), I traded fine-grained AWS control for faster deploys and zero infrastructure management. Vercel also integrates with database providers like Neon and Supabase through its marketplace, so I could add a managed Postgres database without leaving the Vercel ecosystem.

GitHub Actions (~$20/mo)

GitHub Actions handles my CI/CD pipelines: tests, linting, security scans, and deployments. For some projects, it also runs compute-heavy tasks like AI-powered processing triggered by webhooks. The cost varies by usage but runs up to about $20/month.
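For context, a workflow in this shape might look like the following; the steps are generic placeholders, not my actual pipeline:

```yaml
name: ci
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
      - run: npm run lint
```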

Resend (free tier)

Resend handles transactional email for a couple of projects. For StudyBuddy, the flow is: a student emails in a photo of their study guide; Resend parses the inbound email and sends a webhook to Vercel; the Vercel function kicks off test generation via GitHub Actions; and Resend delivers the finished test back to the student. The free tier covers my current volume easily.

AWS (minimal, ~$5/mo)

I still use AWS for secrets management (SSM Parameter Store, Secrets Manager) and occasional experiments. Most of my workloads have moved to Vercel and GitHub Actions, so the monthly cost is minimal.

Unsplash API (free)

Free tier, 50 requests per hour. I use it programmatically in my publishing scripts to pull featured images based on search queries. It's integrated into the publish.sh workflow so I don't have to manually find images.
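Here's a hedged Python sketch of the kind of call my script makes, based on Unsplash's public search endpoint; the parameter choices are illustrative assumptions:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

UNSPLASH_SEARCH = "https://api.unsplash.com/search/photos"

def build_search_request(query: str, access_key: str) -> Request:
    # Unsplash authenticates API calls with a Client-ID header
    params = urlencode({"query": query, "per_page": 1, "orientation": "landscape"})
    return Request(f"{UNSPLASH_SEARCH}?{params}",
                   headers={"Authorization": f"Client-ID {access_key}"})

def first_image_url(query: str, access_key: str) -> str:
    # Network call: returns the first matching photo's URL
    with urlopen(build_search_request(query, access_key)) as resp:
        data = json.load(resp)
    return data["results"][0]["urls"]["regular"]
```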

The full cost breakdown

| Tool | Monthly Cost | Category |
| --- | --- | --- |
| Claude (Max plan) | $200.00 | Claude Code, Desktop, Web, Extension |
| Google One AI Pro (Nano Banana) | $19.99 | Image generation |
| ChatGPT Plus | $20.00 | AI assistant |
| Vercel Pro | $20.00 | Deployment |
| GitHub Actions | ~$20.00 | CI/CD |
| Wispr Flow | $12.00 | Voice input |
| Ghost | $15.00 | Publishing platform |
| AWS | ~$5.00 | Infrastructure |
| Whisper API | ~$1.00 | Transcription |
| Total | ~$313/mo | |

Plus a handful of free tools that earn their spot: Beads, Resend, Unsplash API, NotebookLM, FFmpeg.

What actually matters

The dollar amount is less interesting than the workflow. Here's what I've learned after a year of iterating on this stack:

Invest heavily in your primary coding tool. Claude Code at $200/month is my biggest expense and my biggest time saver. I used to context-switch between an IDE, a browser for docs, and a terminal for commands. Now I do all of that in one place. If you're going to spend money on one AI tool, make it the one where you spend the most hours.

Want to get more out of AI coding tools like Claude Code?

The difference between using Claude Code out of the box and building a workflow that runs end to end comes down to one thing: your AI development rules. I built a free kit that helps you audit, score, and scale them.

Get the AI Rules Maturity Kit →

Free tools are underrated. Beads and Resend handle issue tracking and email without costing anything. Not everything needs a paid tier.

Voice input changes how you work. Wispr Flow at $12/month sounds minor, but it changed my relationship with writing. I draft faster, I capture ideas when they're fresh, and I avoid the friction of typing everything out. Get a real microphone though.

Fewer tools, used well, beats more tools, used poorly. I've tried dozens of AI tools over the past year. Most of them are gone. Multiple transcription services got replaced by one API call. Alternative coding assistants couldn't match the terminal workflow I'd built. The ones that survived are the ones that fit into a daily workflow without friction. If I have to remember to open a separate app, it won't last.

The stack will keep changing. But the pattern stays the same: pick tools that reduce friction, integrate them into repeatable workflows, and don't pay for things you can get for free.

If I had to pick one tool, it would be Claude Code.