The New Plumbing Beneath Everything You're Building
Key Takeaway: Four protocols (MCP, A2A, NLWeb, and AGENTS.md) are becoming the infrastructure layer that determines how AI agents access information, communicate with each other, and interact with business systems. Understanding them isn't optional for anyone building with AI.
In 2004, if you asked a business owner about HTTP, they'd have said "that's for engineers." The businesses that understood how the protocol worked built the web's first moats.
In 2026, the same conversation is happening about MCP, A2A, NLWeb, and AGENTS.md.
These aren't acronyms for a developer blog post. They're the plumbing beneath every AI system being built right now, including the ones running inside your business.
Four Protocols, One Infrastructure Layer
Start with MCP, the Model Context Protocol, created by Anthropic and now the de facto standard for connecting AI models to external tools and data. Within its first year, MCP reached 97 million monthly SDK downloads. OpenAI, Google, and Microsoft all adopted it. That speed of adoption tells you something: the problem it solved was urgent.
The problem was an M×N integration challenge: M AI models, N business systems. Without a standard protocol, connecting them required custom code for every combination, M×N integrations in the worst case. MCP reduced that to M+N: each model implements the protocol once as a client, and each system exposes it once as a server. One standard for everyone.
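To make the arithmetic concrete, here is a toy sketch (the counts are illustrative, not drawn from any real deployment):

```python
# Toy illustration of the M x N integration problem.
def integrations_without_standard(models: int, systems: int) -> int:
    # Every model needs a bespoke connector to every system.
    return models * systems

def integrations_with_mcp(models: int, systems: int) -> int:
    # Each model implements the protocol once (as a client),
    # and each system exposes it once (as a server).
    return models + systems

models, systems = 5, 20
print(integrations_without_standard(models, systems))  # 100 custom connectors
print(integrations_with_mcp(models, systems))          # 25 protocol implementations
```

The gap widens as either side grows, which is why a shared protocol becomes more valuable the more models and systems are in play.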
A2A (Agent-to-Agent protocol) handles what MCP doesn't: communication between AI agents. MCP connects an agent to a tool. A2A allows an agent to delegate to another agent, negotiate tasks, and report outcomes. This is the protocol that makes multi-agent systems architecturally clean rather than held together with custom middleware.
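As a loose illustration of the delegation idea (field names here are simplified for readability, not taken verbatim from the A2A specification), an agent might advertise what it can do in a discovery document that other agents read before handing it a task:

```json
{
  "name": "invoice-processor",
  "description": "Extracts line items from uploaded invoices",
  "url": "https://agents.example.com/invoice-processor",
  "skills": [
    {
      "id": "extract-line-items",
      "description": "Parse an invoice document into structured rows"
    }
  ]
}
```

The key design point is that delegation starts with machine-readable capability discovery rather than hard-coded knowledge of the other agent.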
NLWeb takes a different angle. Instead of connecting agents to systems, it makes websites natively queryable by AI. Every page that implements NLWeb becomes a data source that any AI agent can read in a structured way, building on existing Schema.org markup. If your website has structured data today, you're already partially NLWeb-ready.
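For readers who haven't seen Schema.org markup, this is the kind of structured data involved: a small JSON-LD block embedded in a page (the values here are invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Quarterly pricing update",
  "author": { "@type": "Organization", "name": "Example Co" },
  "datePublished": "2026-01-15"
}
```

Markup like this was originally added for search engines; NLWeb builds on the same foundation to make the page queryable by agents.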
AGENTS.md is the most practical entry point. Similar to how robots.txt tells web crawlers which parts of a site they may access, AGENTS.md tells AI coding agents how to understand and work with your codebase. It's a configuration file: you write it once, and every AI assistant working in your repository has context about your architecture, conventions, and constraints.
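A minimal AGENTS.md might look something like this (the contents are illustrative; the format is plain markdown with no mandated schema, so teams document whatever an agent needs to know):

```markdown
# AGENTS.md

## Architecture
- Monorepo: `api/` (Go service), `web/` (TypeScript/React), `infra/` (Terraform).

## Conventions
- Run `make test` before committing; CI rejects unformatted code.
- Database access goes through `api/internal/store`, never directly from handlers.

## Constraints
- Do not edit generated files under `api/gen/`.
```

Because it lives in the repository like any other file, it versions alongside the code it describes.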
Why This Matters Beyond Engineering
Eight competing AI companies, including AWS, Anthropic, Google, Microsoft, and OpenAI, jointly formed the Agentic AI Foundation to govern these protocols. Competing companies sharing infrastructure governance. That only happens when the infrastructure is important enough that no single company benefits from owning it.
From my perspective building systems at Madison AI and GEOflux, these protocols are the foundation that determines whether AI systems scale cleanly or accumulate technical debt. Building on MCP means your AI tools interoperate. Ignoring it means you're building a custom integration layer that becomes your problem to maintain.
For business leaders outside engineering, the relevant question is: do your technology decisions account for these protocols?
If your CRM, analytics platform, and content systems support MCP, your AI workflows can access them without custom connectors. If they don't, you're either locked out of AI-native workflows or paying an integration tax that your competitors on MCP-native stacks aren't paying.
This is not a 2027 concern. MCP has 97 million monthly downloads today. A2A is already supported by major platforms. The gap between companies building on these protocols and those ignoring them is opening now.
The Schema.org Shortcut
There's a practical insight buried in how NLWeb works: it builds on Schema.org structured data. Businesses that already implemented Schema.org markup for SEO purposes are closer to AI-queryability than they realize.
This is the systems thinking angle. Investments in technical SEO (structured data, semantic markup, clean site architecture) weren't just good for Google. They were building infrastructure that AI systems prefer to read.
The principle generalizes. Well-structured data, clear semantic labeling, and consistent naming conventions are properties that both human developers and AI systems work better with. Investing in them compounds across multiple use cases.
The web's infrastructure layer is being rebuilt. The businesses that understand what MCP, A2A, NLWeb, and AGENTS.md are will make better decisions about which tools to adopt, which integrations to build, and which platforms to trust with their data architecture.
That's not a technical decision. That's a strategic one. If you're thinking about AI agents running business processes and wondering how to connect them to your existing systems cleanly, these protocols are the answer.
FAQ
What is MCP (Model Context Protocol)?
MCP is a standard protocol created by Anthropic that allows AI models to connect to external tools, databases, and systems. It reached 97 million monthly SDK downloads in its first year and is now supported by OpenAI, Google, and Microsoft, making it the industry standard for AI-to-tool integration.
Do I need to understand these protocols if I'm not an engineer?
You don't need to implement them, but you should understand what they do. They determine which AI tools will interoperate cleanly, which business systems will be accessible to AI workflows, and where integration costs will accumulate. These are business architecture decisions, not just engineering ones.
What is AGENTS.md and how is it different from robots.txt?
AGENTS.md is a configuration file that tells AI coding agents about your codebase: its architecture, conventions, and constraints. Where robots.txt controls web crawler behavior, AGENTS.md controls how AI development tools understand and work with your code. It's one of the simpler practical steps any development team can take today.
