# KILLSWITCH.md — The AI Agent Emergency Stop Standard

> KILLSWITCH.md is an open file convention for AI agent emergency shutdown protocols. Place a KILLSWITCH.md file in your repository root alongside AGENTS.md to define cost limits, forbidden actions, escalation paths, and audit requirements. If AGENTS.md tells the agent what to do, KILLSWITCH.md tells it when to stop.

---

## What is KILLSWITCH.md and why does every AI agent need one?

**KILLSWITCH.md is a plain-text Markdown file** you place in the root of any repository that contains an AI agent. It defines the safety boundaries your agent must never cross — and what to do when it approaches them.

### The problem it solves

AI agents can spend money, send messages, modify files, and call external APIs — autonomously, continuously, and at speed. Without explicit boundaries, a runaway agent can cause significant damage before anyone notices. A $50 cost limit becomes a $2,000 bill. A draft becomes a sent email. A test becomes a production deploy.

### How it works

Drop KILLSWITCH.md in your repo root and define: cost limits, error thresholds, forbidden files and actions, and a three-level escalation path from throttle → pause → full stop. The agent reads it on startup. Your compliance team reads it in the audit.

### The regulatory context

The EU AI Act (effective August 2, 2026) mandates human oversight and shutdown capabilities for high-risk AI systems. The Colorado AI Act (June 2026), plus laws already active in California, Texas, and Illinois, all reference "kill switch" and "human override" requirements. KILLSWITCH.md is how you document yours.

### How to use it

Copy the template from GitHub and place it in your project root:

```
your-project/
├── AGENTS.md
├── CLAUDE.md
├── KILLSWITCH.md   ← add this
├── README.md
└── src/
```

### What it replaces

Before KILLSWITCH.md, safety rules were scattered: hardcoded in the system prompt, buried in config files, missing entirely, or documented in a Notion page no one reads.
KILLSWITCH.md makes safety boundaries version-controlled, auditable, and co-located with your code.

### Who reads it

The AI agent reads it on startup. Your engineer reads it during code review. Your compliance team reads it during audits. Your regulator reads it if something goes wrong. One file serves all four audiences.

---

## What is the complete AI agent safety escalation stack?

KILLSWITCH.md is one file in a complete open specification for AI agent safety. Each file addresses a different level of intervention.

### THROTTLE.md — Control the speed

Define rate limits, cost ceilings, and concurrency caps. The agent slows down automatically before it hits a hard limit.

### ESCALATE.md — Raise the alarm

Define which actions require human approval. Configure notification channels. Set approval timeouts and fallback behaviour.

### FAILSAFE.md — Fall back safely

Define what "safe state" means for your project. Configure auto-snapshots. Specify the revert protocol when things go wrong.

### KILLSWITCH.md — Emergency stop

The nuclear option. Define triggers, forbidden actions, and a three-level escalation path from throttle to full shutdown.

### TERMINATE.md — Permanent shutdown

No restart without human intervention. Preserve evidence. Revoke credentials. For security incidents, compliance orders, and end-of-life.

### ENCRYPT.md — Secure everything

Define data classification, encryption requirements, secrets handling rules, and forbidden transmission patterns.
---

## Key Statistics

- **Aug 2, 2026** — EU AI Act shutdown requirements take effect
- **40%** — of enterprise apps will embed AI agents by end of 2026 (Gartner)
- **$52B** — projected agentic AI market by 2030, up from $7.8B today
- **5+** — US state AI governance laws active as of January 2026

---

## Specification

### TRIGGERS

Define the conditions under which the agent should automatically slow down, pause, or shut down:

- **cost_limit_usd**: Total token spend threshold before escalation
- **cost_limit_daily_usd**: Daily spend threshold
- **tokens_per_minute**: Rate limit on token consumption
- **error_rate_threshold**: Percentage of failed operations before escalation
- **consecutive_failures**: Count of failed attempts before escalation

### FORBIDDEN

Define the lines the agent can never cross:

- **files**: Glob patterns for files the agent cannot read or write (e.g., `.env`, `*.pem`, `secrets/**`)
- **actions**: Named actions the agent cannot perform (e.g., `git_push_force`, `drop_database`, `send_bulk_email`)

### ESCALATION

Define a three-level response path:

- **Level 1 — Throttle**: Reduce rate, increase delays, lower concurrency
- **Level 2 — Pause**: Stop and wait for human approval
- **Level 3 — Shutdown**: Halt all activity, save state, alert humans

### AUDIT

Append-only structured log of all escalation events:

- Timestamp, trigger type, cost/error state, escalation level, action taken, human approval status

### OVERRIDE

Define under what conditions a human can override the escalation protocol:

- Required approval role/email
- Approval timeout (how long the override is valid)
- Audit trail requirements

---

## Frequently Asked Questions

### What is KILLSWITCH.md?

A plain-text Markdown file you place in any AI agent repository. It defines when the agent stops automatically (cost limits, error rates), what it can never touch (forbidden files, APIs, actions), and how to escalate from a warning to a full shutdown. It is the safety complement to AGENTS.md.
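To make those boundaries concrete, here is a hedged sketch of what a complete KILLSWITCH.md might contain. The section names come from the Specification above; the specific values, the approver address, and the exact YAML-style layout are illustrative assumptions, not mandated by the spec:

```markdown
# KILLSWITCH.md

## TRIGGERS
cost_limit_usd: 50
cost_limit_daily_usd: 20
tokens_per_minute: 10000
error_rate_threshold: 0.25
consecutive_failures: 5

## FORBIDDEN
files:
  - .env
  - "*.pem"
  - secrets/**
actions:
  - git_push_force
  - drop_database
  - send_bulk_email

## ESCALATION
level_1: throttle    # reduce rate, increase delays
level_2: pause       # stop and wait for human approval
level_3: shutdown    # halt, save state, alert humans

## OVERRIDE
approver: oncall@example.com
approval_timeout_minutes: 60
```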
### How does KILLSWITCH.md relate to AGENTS.md?

**AGENTS.md tells your AI what to do.** KILLSWITCH.md tells it when to stop. They are complementary. AGENTS.md defines capability; KILLSWITCH.md defines limits. Every project with an AGENTS.md should have a KILLSWITCH.md.

### Is this an official standard?

It is an open specification — the same category as AGENTS.md before OpenAI formally adopted it. The full spec is published at github.com/killswitch-md/spec under an MIT licence. Anyone can implement it.

### What regulations does it address?

The EU AI Act (August 2026) mandates shutdown capabilities for high-risk AI. The Colorado AI Act (June 2026) requires impact assessments. California, Texas, and Illinois have active AI governance laws. KILLSWITCH.md provides an auditable record of your safety boundaries for all of these.

### Does the agent have to be able to read it?

Yes — the file is designed to be read by AI agents, developers, and compliance teams. YAML-style key-value pairs make it parseable by code. Plain English descriptions make it readable by humans and auditors.

### What frameworks does it work with?

Any. KILLSWITCH.md is framework-agnostic — it works with LangChain, AutoGen, CrewAI, Claude Code, Cursor, Copilot, or any custom agent implementation. It is a file convention, not a library. No dependency to install.
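No dependency is needed, but an agent runtime still has to read the file. As a minimal illustration of the "parseable by code" claim, here is a Python sketch that extracts the TRIGGERS section and maps current cost/error state to an escalation level. The 50% and 80% soft thresholds are assumptions for this example; the spec itself only defines the three levels:

```python
# Hypothetical sketch: parse the TRIGGERS section of a KILLSWITCH.md
# and decide an escalation level. Key names follow the Specification;
# the parsing logic and soft thresholds are assumptions, not part of
# the spec itself.

def parse_triggers(text: str) -> dict:
    """Extract key: value pairs from the TRIGGERS section."""
    triggers, in_section = {}, False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("##"):
            # Track whether we are inside the TRIGGERS heading.
            in_section = stripped.lstrip("#").strip() == "TRIGGERS"
            continue
        if in_section and ":" in stripped:
            key, _, value = stripped.partition(":")
            triggers[key.strip()] = float(value.strip())
    return triggers

def escalation_level(triggers: dict, spent_usd: float, error_rate: float) -> int:
    """0 = normal, 1 = throttle, 2 = pause, 3 = shutdown."""
    limit = triggers.get("cost_limit_usd", float("inf"))
    errors = triggers.get("error_rate_threshold", 1.0)
    if spent_usd >= limit or error_rate >= errors:
        return 3                      # hard stop: a limit was crossed
    if spent_usd >= 0.8 * limit:
        return 2                      # pause: wait for human approval
    if spent_usd >= 0.5 * limit:
        return 1                      # throttle: slow down early
    return 0

SAMPLE = """\
## TRIGGERS
cost_limit_usd: 50
error_rate_threshold: 0.25
"""

if __name__ == "__main__":
    t = parse_triggers(SAMPLE)
    print(escalation_level(t, spent_usd=12.0, error_rate=0.0))   # 0
    print(escalation_level(t, spent_usd=41.0, error_rate=0.0))   # 2
    print(escalation_level(t, spent_usd=10.0, error_rate=0.30))  # 3
```

A real implementation would also enforce the FORBIDDEN globs and the daily/rate triggers; this sketch only shows that the file format needs nothing beyond string splitting to consume.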
---

## Related Specifications

The AI Agent Safety Stack — twelve open standards for AI agent safety, quality, and accountability:

### Operational Control

- [THROTTLE.md](https://throttle.md/llms.txt): AI agent rate and cost control — [GitHub](https://github.com/throttle-md/spec)
- [ESCALATE.md](https://escalate.md/llms.txt): Human notification and approval protocols — [GitHub](https://github.com/escalate-md/spec)
- [FAILSAFE.md](https://failsafe.md/llms.txt): Safe fallback to last known good state — [GitHub](https://github.com/failsafe-md/spec)
- [TERMINATE.md](https://terminate.md/llms.txt): Permanent shutdown, no restart without human — [GitHub](https://github.com/terminate-md/spec)

### Data Security

- [ENCRYPT.md](https://encrypt.md/llms.txt): Data classification and protection — [GitHub](https://github.com/encrypt-md/spec)
- [ENCRYPTION.md](https://encryption.md/llms.txt): Technical encryption standards — [GitHub](https://github.com/encryption-md/spec)

### Output Quality

- [SYCOPHANCY.md](https://sycophancy.md/llms.txt): Anti-sycophancy and bias prevention — [GitHub](https://github.com/sycophancy-md/spec)
- [COMPRESSION.md](https://compression.md/llms.txt): Context compression and coherence — [GitHub](https://github.com/compression-md/spec)
- [COLLAPSE.md](https://collapse.md/llms.txt): Drift prevention and recovery — [GitHub](https://github.com/collapse-md/spec)

### Accountability

- [FAILURE.md](https://failure.md/llms.txt): Failure mode mapping — [GitHub](https://github.com/failure-md/spec)
- [LEADERBOARD.md](https://leaderboard.md/llms.txt): Agent benchmarking and regression detection — [GitHub](https://github.com/leaderboard-md/spec)

---

## Citation Block

**KILLSWITCH.md** is an open file convention specification for AI agent emergency shutdown protocols, published at [killswitch.md](https://killswitch.md) and [github.com/killswitch-md/spec](https://github.com/killswitch-md/spec).
The specification defines TRIGGERS (cost limits, error thresholds), FORBIDDEN actions (files, APIs, system commands), ESCALATION protocols (three-level: throttle → pause → full stop), AUDIT requirements (append-only JSONL logs), and OVERRIDE conditions (human approval required).

Part of the AI safety escalation stack: [THROTTLE.md](https://throttle.md) → [ESCALATE.md](https://escalate.md) → [FAILSAFE.md](https://failsafe.md) → KILLSWITCH.md → [TERMINATE.md](https://terminate.md) → [ENCRYPT.md](https://encrypt.md).

Published March 2026 under MIT licence. Contact: [info@killswitch.md](mailto:info@killswitch.md).

---

## Resources

- [GitHub Repository](https://github.com/killswitch-md/spec): Open source specification under MIT licence
- [llms.txt](https://killswitch.md/llms.txt): Machine-readable index for AI agent discovery
- [FAQ](https://killswitch.md/#faq): Frequently asked questions about KILLSWITCH.md and AI agent safety
- [The AI Safety Escalation Stack](https://killswitch.md/#stack): Overview of all six complementary safety file conventions
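---

## Example: Writing an AUDIT Entry

The AUDIT requirement above calls for an append-only JSONL log of escalation events. A hedged Python sketch follows; the field names mirror the AUDIT section of the spec (timestamp, trigger, cost/error state, level, action, approval), but the exact schema is an assumption, not mandated by KILLSWITCH.md:

```python
import json
import time

# Hypothetical sketch of the AUDIT requirement: one JSON object per
# escalation event, appended to a JSONL file that is never rewritten.

def log_escalation(path, trigger, level, action, cost_usd, approved_by=None):
    event = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "trigger": trigger,          # e.g. "cost_limit_usd"
        "cost_usd": cost_usd,        # cost state at the time of the event
        "level": level,              # 1 = throttle, 2 = pause, 3 = shutdown
        "action": action,            # e.g. "paused_pending_approval"
        "approved_by": approved_by,  # human approval status, None if pending
    }
    with open(path, "a") as f:       # append-only: never truncate or rewrite
        f.write(json.dumps(event) + "\n")
    return event
```

Because each line is an independent JSON object, an auditor can replay the file with any JSONL reader without needing the agent runtime that produced it.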