GitHub Agent HQ: Run Claude, Codex, and Copilot Together
You've been picking sides. Claude for reasoning. Copilot for autocomplete. Codex for autonomous tasks. But what if you didn't have to choose?
GitHub Agent HQ is a unified command center that lets you run multiple AI coding agents (Claude, Codex, and Copilot) directly inside GitHub, VS Code, and even your phone. Assign an issue to all three agents simultaneously, compare their pull requests side by side, and merge the best solution. No tab-switching. No copy-pasting between tools. Just one platform orchestrating your entire AI fleet.
This isn't theoretical. As of February 2026, Agent HQ is live in public preview for paid Copilot subscribers, and it's already changing how teams think about AI-assisted development.
What You'll Need
Before you start orchestrating multiple agents, make sure you have:
- GitHub account with an active Copilot subscription (Pro, Pro+, Business, or Enterprise)
- VS Code v1.109+ (for local and cloud agent sessions)
- At least one repository to connect agents to
- Claude and Codex enabled in your GitHub account settings (they're off by default)
- An AGENTS.md file (optional but highly recommended; we'll cover this below)
What Is GitHub Agent HQ?
Agent HQ is GitHub's answer to the multi-agent chaos problem. Instead of running Claude Code in one terminal, Copilot in your editor, and Codex in another browser tab, Agent HQ gives you a single interface, called Mission Control, to assign, monitor, and steer every agent from anywhere.
Think of it as air traffic control for AI agents. Each agent is a plane. Mission Control prevents collisions, manages takeoff order, and makes sure everyone lands safely.
How it works under the hood
```
+------------------------------------------------------+
|                   Mission Control                    |
|                                                      |
|  +---------+    +---------+    +---------+           |
|  | Copilot |    | Claude  |    |  Codex  |  + more   |
|  +----+----+    +----+----+    +----+----+           |
|       |              |              |                |
|       v              v              v                |
|  +------------------------------------------------+  |
|  |                Your Repository                 |  |
|  |  Issues -> Agent Work -> Draft PRs -> Review   |  |
|  +------------------------------------------------+  |
+------------------------------------------------------+
```
Agents operate inside your existing Git workflow. They read issues, write code in branches, submit draft pull requests, and respond to @mentions in PR comments. Their output goes through the same review process as human contributions; no special tooling required.
What each agent brings to the table
| Agent | Provider | Strengths | Best For |
|---|---|---|---|
| Copilot | GitHub/OpenAI | Deep GitHub integration, code completions | Inline suggestions, quick edits, CI/CD workflows |
| Claude | Anthropic | Strong reasoning, long context | Architecture decisions, complex refactors, code review |
| Codex | OpenAI | Autonomous execution, sandboxed environment | End-to-end task completion, test generation |
The real power isn't in any single agent. It's in comparing how they approach the same problem and picking the best solution.
GitHub Agent HQ Setup: Getting Started
Setting up Agent HQ takes about five minutes. Here's the step-by-step.
Step 1: Check your subscription
Agent HQ requires a paid Copilot plan. Here's what's available:
| Plan | Monthly Cost | Premium Requests | Agent Access |
|---|---|---|---|
| Free | $0 | 50/month | Copilot only |
| Pro | $10/month | 300/month | Copilot + Claude + Codex |
| Pro+ | $21/month | 1,500/month | All agents + GitHub Spark |
| Business | $19/user/month | 300/user/month | All agents + admin controls |
| Enterprise | $39/user/month | 1,000/user/month | All agents + all models + governance |
Each agent session consumes one premium request from your monthly quota. A "session" is one task assignment, whether that's fixing a bug, writing tests, or reviewing a PR.
Step 2: Enable Claude and Codex
Claude and Codex are not enabled by default. You need to opt in:
- Go to github.com → Settings → Copilot
- Find the Agent access section
- Toggle on Claude by Anthropic and OpenAI Codex
- Accept the terms for each provider
For Business/Enterprise admins: Use the Control Plane to set organization-wide policies for which agents are permitted.
Step 3: Open Mission Control
You can access Mission Control from three places:
- GitHub.com: Navigate to any repository → click the Agents tab
- VS Code: Open the Command Palette (`Ctrl+Shift+P`) → search "Copilot Agent"
- GitHub Mobile: Tap the Copilot icon → select an agent
Step 4: Assign your first task
The simplest way to start:
- Open an issue in your repository
- In the Assignees dropdown, select `@Copilot`, `@Claude`, or `@Codex`
- The assigned agent begins working and submits a draft PR when done
- Review the PR like you would any human contribution
You can assign multiple agents to the same issue to compare their approaches.
AGENTS.md: Configuring Your AI Team
If you've used a CLAUDE.md project instructions file, AGENTS.md will feel familiar. It's a markdown file in your repository root that tells agents how to behave: your project's rules, tech stack, build commands, and boundaries.
GitHub analyzed over 2,500 public AGENTS.md files and found that the most effective ones share a consistent structure.
Basic AGENTS.md template
```markdown
# AGENTS.md

## Project Overview
E-commerce API built with Node.js 20, TypeScript, PostgreSQL, and Redis.
Monorepo managed with Turborepo.

## Build & Test Commands
- Install: `pnpm install`
- Build: `pnpm turbo run build`
- Test: `pnpm turbo run test`
- Lint: `pnpm turbo run lint`

## Code Style
- Use TypeScript strict mode
- Prefer named exports over default exports
- Use Zod for runtime validation
- Error handling: throw typed errors, never return null for failures

## Architecture
- `/packages/api`: Express REST API
- `/packages/db`: Prisma schema and migrations
- `/packages/shared`: Shared types and utilities
- `/packages/web`: Next.js frontend

## Testing Standards
- Unit tests with Vitest
- Integration tests use test database (see `.env.test`)
- Minimum 80% coverage for new code
- Always test error paths, not just happy paths

## Boundaries
- ✅ Always: Run tests before submitting PRs
- ✅ Always: Follow existing naming conventions
- ⚠️ Ask first: Database schema changes
- ⚠️ Ask first: Adding new dependencies
- 🚫 Never: Modify `.env` files or secrets
- 🚫 Never: Force push or delete branches
- 🚫 Never: Auto-merge without human review
```
Why AGENTS.md matters
Without it, every agent session starts from scratch: the agent has to figure out your build system, test runner, and coding conventions by reading your entire codebase. With a good AGENTS.md, agents hit the ground running.
You can also place directory-level AGENTS.md files in subpackages. Agents automatically read the nearest one in the directory tree, so your frontend package can have different rules than your API.
Creating custom agent personas
Beyond the root AGENTS.md, you can define specialized agents in `.github/agents/`:
```markdown
---
name: test-agent
description: Writes unit and integration tests for new code
---

# Test Agent

You are a testing specialist. Your job is to write comprehensive
tests for code changes.

## Rules
- Use Vitest for unit tests, Playwright for E2E
- Test error paths and edge cases, not just happy paths
- Never modify source code; only add test files
- Run the full test suite before submitting
```
This creates a @test-agent you can mention in issues and PRs. GitHub's research identified six common agent archetypes that work well:
| Agent Type | Role | Scope |
|---|---|---|
| `@test-agent` | Writes tests | Test files only |
| `@docs-agent` | Generates documentation | `docs/` directory only |
| `@lint-agent` | Fixes code style | Style changes, no logic |
| `@api-agent` | Creates API endpoints | Schema changes need approval |
| `@deploy-agent` | Handles deployments | Dev/staging only |
| `@review-agent` | Reviews PRs | Read-only analysis |
Multi-Agent Workflows That Actually Work
Running multiple agents sounds great in theory. In practice, you need strategy. Here's what works, and what causes chaos.
Pattern 1: Parallel independent tasks
The easiest win. Assign different agents to tasks that don't touch the same files:
```
Issue #42: "Add rate limiting to /api/users"   → assign @Codex
Issue #43: "Write docs for auth endpoints"     → assign @Claude
Issue #44: "Fix flaky test in checkout flow"   → assign @Copilot
```
Each agent works in its own branch. No merge conflicts. No coordination needed.
Pattern 2: Competitive comparison
Assign the same issue to multiple agents and compare results:
```
Issue #45: "Refactor the payment service to use the strategy pattern"
→ assign @Claude, @Codex, @Copilot
```
You'll get three draft PRs with three different approaches. Review each one, pick the best, and close the rest. This is especially valuable for architectural decisions where you want to see multiple perspectives.
Pattern 3: Pipeline (sequential handoff)
Use agents in sequence, where each one builds on the previous:
1. @Claude → Analyze the issue and create an implementation plan (PR comment)
2. @Codex → Implement the plan (draft PR with code)
3. @test-agent → Write tests for the implementation (follow-up PR)
You steer between steps by commenting on the PR: `@Codex implement the approach Claude outlined above.`
What to run in parallel vs. sequentially
| Run in Parallel | Keep Sequential |
|---|---|
| Research and analysis tasks | Tasks with dependencies |
| Documentation generation | Exploring unfamiliar codebases |
| Security reviews | Complex problems needing assumption validation |
| Different modules or components | Files with shared state |
VS Code Agent Sessions: Local, Cloud, and Background
VS Code v1.109 introduced three types of agent sessions, each suited for different workflows.
Local sessions
Run the agent interactively inside your editor. Fast feedback loop, direct access to your local environment.
Best for: Quick edits, inline suggestions, debugging with local context.
`Ctrl+Shift+P` → "Copilot: Start Agent Session" → Local
Cloud sessions
Delegate the task to GitHub's infrastructure. The agent runs autonomously on GitHub's servers, accesses your repo, and submits results as a draft PR.
Best for: Autonomous tasks you don't need to babysit. Assign before lunch, review after.
Background sessions
Asynchronous local work. The agent runs in the background while you continue coding, with no context switching.
Best for: Long-running tasks like test generation or documentation updates.
Monitoring agent sessions
Mission Control shows real-time logs of what each agent is doing and why. Watch for these red flags:
- Failing tests repeatedly: the agent might be stuck in a loop
- Unexpected files in the diff: scope creep beyond the assigned task
- Circular behavior: the same action attempted multiple times
When you spot issues, intervene with a specific comment: "Don't modify database.js โ that file is shared across services. Add the config in api/config/db-pool.js instead."
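The "circular behavior" red flag can even be approximated mechanically from a session log. A toy Python heuristic (the action strings and window size here are invented for illustration; this is not how Mission Control detects loops internally):

```python
def looks_stuck(actions: list[str], window: int = 3) -> bool:
    """Flag a session whose last `window` log entries are the same action.

    Heuristic only: repeated identical steps usually mean the agent is
    retrying a failing approach instead of changing strategy.
    """
    if len(actions) < window:
        return False
    tail = actions[-window:]
    return len(set(tail)) == 1  # all entries in the window are identical
```

For example, a log ending in three consecutive "run tests" entries would trip the flag; a log that alternates between editing and testing would not.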
Why Multi-Agent Workflows Fail (And How to Fix It)
GitHub published a detailed analysis of multi-agent failures. The core insight: treat agents like distributed systems, not chat flows.
Failure #1: Messy data exchange
Agents pass information to each other through PR comments and code. Without structure, things break fast: one agent formats a response as JSON, another expects plain text.
Fix: Use typed schemas. Define exactly what each agent should output. If you're building custom agents, use Model Context Protocol (MCP) to enforce input/output contracts.
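As a concrete example of a typed contract, here's a stdlib-only Python validator for a hypothetical agent handoff message; the field names and status values are invented for illustration, not part of Agent HQ:

```python
import json
from dataclasses import dataclass, fields

VALID_STATUSES = {"done", "blocked", "needs_review"}

@dataclass(frozen=True)
class AgentHandoff:
    """What one agent passes to the next via a PR comment (hypothetical schema)."""
    task_id: int
    status: str
    files_changed: list
    summary: str

def parse_handoff(raw: str) -> AgentHandoff:
    """Parse a JSON handoff and fail loudly on a malformed one."""
    data = json.loads(raw)
    expected = {f.name for f in fields(AgentHandoff)}
    if set(data) != expected:
        raise ValueError(f"keys {sorted(data)} != expected {sorted(expected)}")
    handoff = AgentHandoff(**data)
    if handoff.status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {handoff.status!r}")
    return handoff
```

The point is the failure mode: a missing field or surprise status raises immediately, instead of silently propagating garbage to the next agent in the pipeline.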
Failure #2: Conflicting actions
One agent closes an issue that another agent just opened. Or two agents edit the same file and create an unresolvable merge conflict.
Fix: Partition work clearly. Assign each agent to a specific module, directory, or file set. Use your AGENTS.md boundaries to prevent overlap.
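One cheap guard is to compare the agents' draft PRs against each other before review. A Python sketch that flags any file touched by more than one agent (agent names and paths are examples):

```python
def conflicting_files(prs: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each file touched by more than one agent to the set of agents touching it.

    `prs` maps an agent name to the set of file paths changed in its draft PR.
    """
    owners: dict[str, set[str]] = {}
    for agent, files in prs.items():
        for path in files:
            owners.setdefault(path, set()).add(agent)
    # Keep only the files with multiple owners: these are your merge-conflict risks.
    return {path: agents for path, agents in owners.items() if len(agents) > 1}
```

An empty result means the partitioning held; anything else tells you exactly which files to re-scope before the agents' branches diverge further.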
Failure #3: Scope creep
You asked the agent to fix a bug. It "helpfully" refactored the entire module, broke three tests, and added a dependency.
Fix: Set explicit boundaries in AGENTS.md using the three-tier system:
- ✅ Always do: Run tests, follow conventions
- ⚠️ Ask first: Schema changes, new dependencies
- 🚫 Never do: Force push, modify secrets, auto-merge
Failure #4: Blind trust
Agent output looks clean, passes CI, and the PR description is well-written. So you merge without reading the code. Two weeks later, you discover a subtle bug.
Fix: Review agent PRs with the same rigor you'd apply to a junior developer's code. Agents can make mistakes, and their mistakes are often confident-looking.
GitHub Agent HQ Pricing Breakdown
Understanding the cost model is critical, especially if you're running multiple agents on every issue.
| Plan | Monthly Cost | Premium Requests | Cost Per Agent Session |
|---|---|---|---|
| Free | $0 | 50 | ~$0 (limited) |
| Pro | $10/month | 300 | ~$0.03 |
| Pro+ | $21/month | 1,500 | ~$0.014 |
| Business | $19/user/month | 300/user | ~$0.063 |
| Enterprise | $39/user/month | 1,000/user | ~$0.039 |
The real cost math
If you run three agents on every issue (the competitive comparison pattern), you're burning 3 premium requests per issue. On the Pro plan with 300 requests, that's only 100 issues per month, which might be tight for an active project.
Pro+ at 1,500 requests gives you 500 three-agent comparisons per month, which is plenty for most individual developers.
For teams: Business plan at 300/user means a team of 5 gets 1,500 total requests. But remember, agent sessions also consume GitHub Actions minutes, so factor that into your budget.
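If you want to sanity-check the quota arithmetic for your own setup, it's a few lines of Python (plan figures taken from the tables above; the helper function itself is just illustrative):

```python
def issues_per_month(premium_requests: int, agents_per_issue: int, seats: int = 1) -> int:
    """How many issues a monthly pool of premium requests covers,
    given how many agents you assign to each issue."""
    return (premium_requests * seats) // agents_per_issue

print(issues_per_month(300, 3))           # Pro: 100 three-agent comparisons
print(issues_per_month(1500, 3))          # Pro+: 500
print(issues_per_month(300, 3, seats=5))  # Business, 5-person team: 500
```

Remember this counts premium requests only; GitHub Actions minutes consumed by agent sessions are billed on top.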
Troubleshooting Common Agent HQ Issues
Agent doesn't start working on an assigned issue
- Verify Claude/Codex are enabled in Settings → Copilot → Agent access
- Check that your subscription tier supports the agent you're trying to use
- Ensure you haven't exhausted your monthly premium requests
- Try unassigning and reassigning the agent
Agent produces low-quality code
- Add or improve your AGENTS.md with specific code style examples
- Include build and test commands so the agent can validate its own work
- Provide more context in the issue description: agents work better with detailed specs
Merge conflicts between agent PRs
- Partition work by module before assigning agents
- Avoid assigning multiple agents to issues that touch the same files
- Use the sequential pipeline pattern instead of parallel assignment
Agent gets stuck in a loop
- Check session logs in Mission Control for circular behavior
- Intervene with a specific comment steering the agent away from the failing approach
- Cancel the session and restart with a more constrained prompt
VS Code agent session won't connect
- Update VS Code to v1.109 or later
- Check that the GitHub Copilot extension is up to date
- Verify your authentication: sign out and sign back into GitHub
What's Next
GitHub Agent HQ is just the beginning of multi-agent development. Here's where things are heading:
- More agents incoming: Google (Jules), Cognition (Devin), and xAI are building integrations for Agent HQ. By mid-2026, you'll have 6+ agents to choose from.
- Copilot Metrics Dashboard is in public preview: track which agents deliver the best results across your organization.
- Enterprise governance is maturing fast, with audit logging, model access controls, and agent-level permissions for compliance-heavy teams.
- MCP Registry in VS Code connects agents to external tools like Stripe, Figma, and Sentry, making custom agents with specialized capabilities easier to build.
- Gartner reports 1,445% growth in multi-agent system inquiries; this is becoming the default way teams work with AI, not the exception.
Already using GitHub Copilot? Read our GitHub Copilot Agent Mode Guide for single-agent workflows, or see how all the top AI coding tools compare in our AI Coding Agents Comparison.