Aider: The Git-Native AI Pair Programmer Guide
Every AI coding tool promises to "understand your codebase." Aider actually proves it by committing the changes to git for you. That's the core pitch: you describe what you want in plain English, Aider edits the files, and every change gets a clean, descriptive git commit -- automatically. If the result is wrong, you git revert and move on. No magic undo buttons, no proprietary history. Just git.
Aider is open-source, runs in your terminal, supports 100+ programming languages, and works with virtually every LLM you can throw at it -- Claude, GPT-4o, DeepSeek, Gemini, or a local model running on Ollama. It doesn't lock you into a subscription or an IDE. You bring your own API keys, pick your model, and start pair programming.
I've been running Aider alongside Claude Code and Copilot for several months now. This guide covers how to set it up, what makes it genuinely different, and when to reach for it over the alternatives.
📋 What You'll Need
- Python 3.9+ -- check with python --version (3.12 recommended for best compatibility)
- Git -- Aider's entire workflow revolves around it
- A terminal -- Bash, Zsh, PowerShell -- Aider is terminal-native
- An API key -- from Anthropic, OpenAI, DeepSeek, Google, or any OpenAI-compatible provider
- A git repository -- Aider works inside existing repos or creates one for you
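A quick sanity check for the first two -- both commands should print a version:
python --version   # 3.9 or newer; 3.12 recommended
git --version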
🚀 Installation and First Run
Aider gives you multiple installation paths depending on your setup. The recommended approach in 2026 uses uv, which is dramatically faster than pip and avoids dependency conflicts.
Recommended: One-Line Install
The fastest way to get started, even if you don't have Python installed yet:
curl -LsSf https://aider.chat/install.sh | sh
This downloads uv, installs Python 3.12 if needed, and sets up Aider in an isolated environment. On macOS, Linux, and WSL, this just works.
Using uv Directly
If you already have uv installed:
uv tool install --python python3.12 aider-chat
Using pip (If You Must)
python -m pip install aider-install
aider-install
This is the older method. It still works, but users report more dependency headaches than with uv. If you run into import errors or version conflicts, switch to the uv method.
Using pipx
pipx install aider-chat
Good for keeping Aider isolated from your system Python, though uv is faster.
Verify the Install
aider --version
Your First Session
Navigate to any git repo and launch Aider with your preferred model:
cd your-project
aider --model sonnet --api-key anthropic=sk-ant-your-key-here
You'll see the aider > prompt. Type a request in plain English:
aider > Add a /health endpoint to app.py that returns {"status": "ok"} with a 200 response
Aider reads the relevant files, proposes changes, edits the code, and commits. Done. You can verify with git log --oneline to see the commit it just made.
Tip: Don't want to pass --api-key every time? Set the environment variable instead: export ANTHROPIC_API_KEY=sk-ant-your-key-here in your shell profile.
💬 Chat Modes: The Four Ways to Talk to Aider
Aider isn't a one-trick tool. It has four distinct chat modes, each designed for a different part of your workflow. Understanding when to use each one is the difference between productive pair programming and fighting the tool.
/code -- The Default
This is where Aider lives most of the time. You describe a change, Aider edits files and commits. Simple.
aider > /code Refactor the database connection to use connection pooling
The model reads the files in the chat, generates edits using a diff format, applies them, and creates a commit. If something breaks, you just git diff HEAD~1 to see what changed and git revert HEAD to undo it.
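In practice the safety net is plain git. A minimal sketch of reviewing and undoing Aider's last commit:
# See exactly what Aider changed
git diff HEAD~1
# Roll it back if it's wrong (or use /undo inside the chat)
git revert HEAD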
/architect -- Think First, Edit Second
This is Aider's power mode. Instead of asking one model to both reason about the problem and produce file edits, architect mode splits the job across two models:
- The architect model analyzes your request and proposes a solution plan
- The editor model translates that plan into precise file edits
aider > /architect Redesign the authentication system to support OAuth2 with Google and GitHub providers
Why does this matter? Some models (like OpenAI's o1 or o3-mini) are exceptional at reasoning but mediocre at producing structured file diffs. Architect mode pairs them with a model that's great at editing, like GPT-4o or Claude Sonnet. You get the best of both worlds.
# Launch directly in architect mode with a specific editor model
aider --architect --model o3-mini --editor-model sonnet
The downside: two LLM calls per request means higher latency and roughly double the token cost. Use architect mode for complex, multi-file changes where getting the plan right matters more than speed.
/ask -- No Edits, Just Answers
Sometimes you need to understand the code before changing it. /ask mode lets you interrogate your codebase without Aider touching any files:
aider > /ask How does the rate limiter middleware work? Walk me through the request flow.
This is perfect for onboarding to an unfamiliar codebase, understanding a dependency before refactoring, or getting a second opinion on your design. No commits, no edits, just conversation.
/help -- Tool Documentation
aider > /help How do I configure a YAML settings file?
Pulls from Aider's own documentation. Useful when you can't remember a flag or want to know about a feature without leaving the terminal.
Tip: Use /code (or /chat-mode code) to set the default mode, then drop into /ask or /architect for individual messages as needed.
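For example, a stretch of exploration followed by a one-off edit (the requests here are placeholders):
aider > /chat-mode ask
aider > How is caching handled across the service layer?
aider > /code Add a TTL parameter to the cache decorator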
⚙️ Essential Commands and Configuration
Aider's in-chat commands handle everything from file management to git operations. Here's the reference you'll actually use daily.
File Management
| Command | What It Does |
|---|---|
| /add file.py | Add a file to the chat (Aider can now read and edit it) |
| /add src/*.py | Add multiple files with glob patterns |
| /drop file.py | Remove a file from the chat |
| /read-only docs/spec.md | Add a file for reference only -- Aider can read but not edit it |
| /ls | List all known files and which are in the chat |
Git Operations
| Command | What It Does |
|---|---|
| /commit | Commit any pending changes (with an AI-generated message) |
| /diff | Show all changes since your last message |
| /undo | Undo the last Aider-generated commit |
Model and Context
| Command | What It Does |
|---|---|
| /model sonnet | Switch models mid-conversation |
| /tokens | Show token usage for the current chat context |
| /settings | Display current configuration |
| /map | Show the repository map Aider has built |
| /map-refresh | Force a refresh of the repo map |
| /reset | Drop all files and clear chat history |
Code Quality
| Command | What It Does |
|---|---|
| /run pytest | Run a shell command, optionally adding its output to the chat |
| /test pytest tests/ | Run tests; if they fail, Aider automatically tries to fix them |
| /lint | Lint files in the chat and auto-fix issues |
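You can also wire these checks into every edit from launch -- a sketch using Aider's test and lint flags (substitute your own commands):
# Re-run the test suite after each change; Aider attempts to fix any failures
aider --test-cmd "pytest -q" --auto-test
# Same idea for linting
aider --lint-cmd "ruff check --fix" --auto-lint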
Configuration File
Instead of passing flags every time, create a .aider.conf.yml file in your home directory or project root:
# ~/.aider.conf.yml
# Default model
model: sonnet
# Editor model for architect mode
editor-model: gpt-4o
# Auto-commit changes (default: true)
auto-commits: true
# Use dark mode for terminal output
dark-mode: true
# Files to always load as read-only context
read:
- CONVENTIONS.md
- docs/architecture.md
# API keys (OpenAI and Anthropic only; use .env for others)
openai-api-key: sk-your-openai-key
anthropic-api-key: sk-ant-your-anthropic-key
Aider checks three locations for this file, in order of priority:
- Current directory -- .aider.conf.yml (project-specific settings)
- Git repo root -- .aider.conf.yml (shared team settings)
- Home directory -- ~/.aider.conf.yml (global defaults)
Settings from higher-priority locations override lower ones, so your project config trumps your global defaults.
Tip: For providers without a dedicated YAML key, put your keys in a .env file in your project root. Aider reads it automatically. Format: DEEPSEEK_API_KEY=your-key-here.
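A minimal .env sketch (the keys are placeholders; keep this file out of version control):
# .env in the project root
DEEPSEEK_API_KEY=your-deepseek-key
GEMINI_API_KEY=your-gemini-key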
🤖 Choosing the Right Model
Aider works with virtually every LLM available, but model choice dramatically affects quality, speed, and cost. Here's what actually works well in 2026.
Model Recommendations
| Model | Best For | Architect? | Editor? | Relative Cost |
|---|---|---|---|---|
| Claude Sonnet 4.6 | Best all-rounder for code editing | ✅ | ✅ | $$ |
| Claude Opus 4.6 | Complex reasoning, large refactors | ✅ | ✅ | $$$$ |
| GPT-4o | Fast, capable, good as editor | ✅ | ✅ | $$ |
| o3-mini | Strong reasoning, weak at diffs | ✅ | ❌ | $$ |
| DeepSeek V3 | Budget-friendly, surprisingly good | ✅ | ✅ | $ |
| DeepSeek R1 | Reasoning on a budget | ✅ | ❌ | $ |
| Gemini 2.5 Pro | Large context, good for big repos | ✅ | ✅ | $$ |
Recommended Combinations
Daily driver (best quality):
aider --model sonnet
Claude Sonnet consistently tops Aider's own code editing leaderboard. It's fast, accurate at producing diffs, and reasonably priced.
Architect mode (complex tasks):
aider --architect --model o3-mini --editor-model sonnet
Let o3-mini do the thinking, Claude Sonnet do the editing. This combo handles multi-file refactors and architectural changes well.
Budget mode (personal projects):
aider --model deepseek
DeepSeek V3 is remarkably capable for its price. For personal projects where you're not on a deadline, it's hard to beat the cost-effectiveness.
Local / private mode (no data leaves your machine):
aider --model ollama_chat/deepseek-coder-v2
Run a local model via Ollama when you can't send code to external APIs. Quality drops compared to cloud models, but your code stays on your hardware.
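A minimal sketch of that setup, assuming Ollama is installed and serving on its default port:
# Download the model once, then point Aider at the local server
ollama pull deepseek-coder-v2
export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama_chat/deepseek-coder-v2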
Cost Reality Check
Aider itself is free. You pay only for the LLM API calls. Here's what typical development sessions cost:
| Model | Typical Feature (1K-5K tokens) | Heavy Session (50K tokens) |
|---|---|---|
| Claude Sonnet 4.6 | $0.02 - $0.10 | ~$0.50 |
| GPT-4o | $0.01 - $0.08 | ~$0.40 |
| DeepSeek V3 | $0.001 - $0.005 | ~$0.03 |
| o3-mini + Sonnet (architect) | $0.04 - $0.20 | ~$1.00 |
For most developers, Aider costs $5-30/month in API fees. That's significantly cheaper than tool-specific subscriptions, and you only pay for what you use.
🏆 Aider vs the Competition
You're probably already using one or more AI coding tools. Here's where Aider fits and when you should reach for it over the alternatives.
Feature Comparison
| Feature | Aider | Claude Code | GitHub Copilot | Cursor |
|---|---|---|---|---|
| Open source | ✅ | ❌ | ❌ | ❌ |
| Terminal-native | ✅ | ✅ | ⚠️ CLI available | ❌ |
| Auto git commits | ✅ | ⚠️ On request | ❌ | ❌ |
| Bring your own model | ✅ Any LLM | ❌ Claude only | ⚠️ Limited | ✅ Multiple |
| Architect/editor split | ✅ | ❌ | ❌ | ❌ |
| Voice input | ✅ | ❌ | ❌ | ❌ |
| Image/screenshot input | ✅ | ✅ | ✅ | ✅ |
| IDE integration | ⚠️ Via plugins | ✅ | ✅ Native | ✅ Native |
| Auto lint and test | ✅ | ⚠️ Manual | ❌ | ❌ |
| Local model support | ✅ Ollama | ❌ | ❌ | ⚠️ Limited |
| Monthly cost | $0 + API fees | $20-200/mo | $0-39/mo | $0-40/mo |
When to Use Each
Choose Aider when:
- You want full control over which models you use (and switch freely between them)
- Git-native workflow matters -- you want every AI change tracked as a commit
- You're cost-conscious and prefer pay-per-use over subscriptions
- You work across multiple languages and frameworks in the terminal
- You want to use local models for private codebases
Choose Claude Code when:
- You need deep autonomous reasoning -- letting the agent explore, plan, and implement complex features end-to-end
- MCP integrations matter (connecting to GitHub, databases, Sentry, etc.)
- You're already on a Claude subscription and want a unified experience
- You need hooks, CLAUDE.md project context, and the Plan/Auto-Accept mode system
Choose GitHub Copilot when:
- Inline autocomplete speed is your priority
- Your team already pays for GitHub Enterprise
- You want agent mode that turns GitHub Issues into PRs
- Zero configuration matters more than maximum capability
Choose Cursor when:
- You want an AI-native IDE with visual diff previews
- Multi-model support in a GUI matters
- Your workflow is heavily IDE-centric, not terminal-centric
The winning strategy for most developers in 2026: use Aider or Claude Code for complex multi-file tasks in the terminal, and Copilot for everyday autocomplete in your editor. They complement each other rather than compete.
🔧 Troubleshooting
"Model not found" or API key errors
Problem: Aider says it can't find your model or your API key is invalid.
Fix: Double-check the key format. Aider expects provider-specific environment variables:
# Correct
export ANTHROPIC_API_KEY=sk-ant-xxxx
export OPENAI_API_KEY=sk-xxxx
export DEEPSEEK_API_KEY=sk-xxxx
# Or pass inline
aider --model sonnet --api-key anthropic=sk-ant-xxxx
If you're using a custom or self-hosted model, you may need --openai-api-base to point at your API endpoint.
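A sketch of that setup -- the endpoint URL and model name below are placeholders:
# Any OpenAI-compatible API works with the openai/ model prefix
export OPENAI_API_BASE=https://your-endpoint.example.com/v1
export OPENAI_API_KEY=your-key-here
aider --model openai/your-model-name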
Token limit / context window overflow
Problem: Aider errors with "context window exceeded" or produces garbled output.
Fix: You're sending too much code to the model. Solutions:
- Drop files you're not actively editing: /drop largefile.py
- Break changes into smaller requests
- Switch to a model with a larger context window (Gemini 2.5 Pro offers 1M tokens)
- Use /tokens to check your current usage
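The repository map also counts against the context window. If it's eating your budget, you can cap it -- a sketch (1024 is an arbitrary value, not a recommendation):
# Limit the repo map's token budget
aider --map-tokens 1024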
Aider keeps editing the wrong files
Problem: You asked for a change in api.py but Aider modified utils.py too.
Fix: Be explicit about which files to touch. Use /add to include only the relevant files, and /drop everything else. Aider will only edit files that are in the chat.
Git commits are too granular or too broad
Problem: Every tiny change gets its own commit, or large changes are lumped together.
Fix: Control auto-commit behavior:
# Disable auto-commits (you commit manually)
aider --no-auto-commits
# Or in .aider.conf.yml
auto-commits: false
Then use /commit when you're ready to bundle changes into logical commits.
Slow performance or high latency
Problem: Aider takes 30+ seconds per response.
Fix: This is almost always the LLM provider, not Aider. Try:
- Switch to a faster model (GPT-4o is typically faster than Claude Opus)
- Use architect mode with a fast editor model
- Check your internet connection and API provider status page
🗺️ What's Next
Once you're productive with the basics, here's where to go:
- Set up .aider.conf.yml per project with your team's preferred models, read-only convention files, and linting rules
- Experiment with architect mode combinations -- pair reasoning models (o3-mini, DeepSeek R1) with fast editors (Sonnet, GPT-4o) to find your sweet spot
- Integrate Aider into CI/CD using its scripting interface (aider --message "..." --yes) for automated code maintenance tasks -- see the sketch after this list
- Try Aider's voice mode (/voice) for hands-free coding sessions -- surprisingly useful for prototyping
- Connect local models via Ollama for private repos where code can't leave your network
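Here's the CI sketch referenced above -- a non-interactive run suitable for a scheduled job (the message and file list are placeholders):
# Apply a scripted change with no prompts, then exit
aider --message "Bump the copyright year in every source file header" --yes src/*.py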
Aider occupies a unique spot in the 2026 AI coding landscape: it's the tool that trusts git more than any proprietary undo system, lets you bring any model you want, and costs nothing beyond your API fees. For developers who think in commits and live in the terminal, it's worth adding to your toolkit.
Want to see how other AI coding tools compare? Read our AI Coding Agents Compared for a full breakdown of Cursor, Copilot, Claude Code, and Windsurf. Already using Claude Code? Check out our Claude Code Workflow Guide for advanced tips on CLAUDE.md, hooks, and MCP integrations. For a deep dive into GitHub Copilot's latest agent features, see the GitHub Copilot Agent Mode Guide. And if you want to run models entirely offline, our Local LLM + Ollama RAG Guide covers the full setup.