Agent Hub: Architecture Guardrails for Autonomous AI Agents - Archyl Blog

AI coding agents write code fast. Without guardrails, they also drift fast. Today we're launching Agent Hub — a new section in Archyl that lets you define conformance rules, browse a catalog of 96 pre-built guardrails, and give AI agents the architectural context they need before they write a single line of code.

Agent Hub: Architecture Guardrails for Autonomous AI Agents

Something fundamental has shifted in how software gets written.

Six months ago, an engineer would spend two days building a new service. They'd check the ADRs, ask a colleague which database to use, follow the naming conventions they'd internalized over months, and push code that fit the architecture because they understood the architecture.

Today, an AI agent does it in twenty minutes. The code compiles. The tests pass. But the agent used MongoDB when the team standardized on PostgreSQL. It scattered fmt.Println calls instead of using structured logging. It put a database query directly in an HTTP handler, bypassing the service layer that took three sprints to establish.

The agent is faster. But it doesn't know what it doesn't know.

The Real Cost of Speed Without Governance

We're entering an era where code velocity is essentially unlimited. Claude Code, Cursor, Copilot Workspace, Devin — these tools can produce entire services in minutes. Teams that were shipping weekly are now shipping daily. The bottleneck has moved from "how fast can we write code" to "how fast can architecture entropy destroy our system."

Consider what happens when five AI agents work on the same codebase simultaneously:

  • Agent A adds a new REST endpoint using Express patterns. Agent B adds another using Fiber patterns. Now you have two API frameworks in the same service.
  • Agent C creates a payment module that directly queries the database. Agent D creates an order module that goes through the repository layer. Now you have two data access patterns.
  • Agent E picks up a library that your tech radar deprecated six months ago. It works perfectly. It's also a ticking time bomb.

None of these are bugs. They all pass CI. They all work in isolation. But together, they're silently turning your codebase into a patchwork of conflicting patterns that will cost you months to untangle.

This is the problem we built Agent Hub to solve.

Architecture Guardrails: The Linter for Your Architecture

Conformance rules are deterministic checks — no AI involved — that validate code changes against your architectural decisions. Think of them as ESLint for architecture: they don't care about syntax, they care about structure.

Seven rule types cover the governance needs we've seen across hundreds of teams:

Required Pattern — The simplest and most powerful. Define patterns that must exist or must not exist in your code. Forbid fmt.Println in Go, console.log in TypeScript, eval() anywhere. Require set -euo pipefail in every shell script. Ban SELECT * from SQL queries.
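At its core, a required-pattern check is just a deterministic scan. Here is a minimal sketch of the idea in Python — the rule shape (`forbid`/`require` fields, rule names) is illustrative, not Archyl's actual rule format:

```python
import re

# Hypothetical rule definitions, modeled on the examples above.
# These field names are illustrative, not Archyl's real schema.
RULES = [
    {"name": "no-fmt-println", "files": r"\.go$",
     "forbid": re.compile(r"\bfmt\.Println\(")},
    {"name": "require-strict-mode", "files": r"\.sh$",
     "require": re.compile(r"set -euo pipefail")},
]

def check_required_patterns(path, source):
    """Return the names of rules violated by `source` at `path`.

    Deterministic: a forbidden pattern must be absent,
    a required pattern must be present. No AI involved.
    """
    violations = []
    for rule in RULES:
        if not re.search(rule["files"], path):
            continue  # rule doesn't apply to this file type
        if "forbid" in rule and rule["forbid"].search(source):
            violations.append(rule["name"])
        if "require" in rule and not rule["require"].search(source):
            violations.append(rule["name"])
    return violations
```

Because the check is pure pattern matching, it runs in milliseconds on every change and gives the same answer every time — which is exactly what you want when the author of the change is an agent.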

Naming Convention — Enforce naming rules across files, types, and functions. Go files must be snake_case. React components must be PascalCase. Python modules must follow PEP 8. When an agent generates code, it follows your team's conventions, not its own defaults.
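Naming checks reduce to validating identifiers against a convention. A minimal sketch for two of the conventions above (the exact regexes a real catalog uses may be stricter):

```python
import re

# Illustrative convention checks; real rules may allow more edge cases
# (e.g. _test.go suffixes or numbered component names).
SNAKE_CASE_GO_FILE = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)*\.go$")
PASCAL_CASE = re.compile(r"^[A-Z][A-Za-z0-9]*$")

def is_valid_go_filename(name):
    """Go files must be snake_case."""
    return bool(SNAKE_CASE_GO_FILE.match(name))

def is_valid_component_name(name):
    """React components must be PascalCase."""
    return bool(PASCAL_CASE.match(name))
```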

Technology Constraint — Lock down the technology stack per container. Backend must be Go only. Frontend must be TypeScript, not JavaScript. No lodash (use native JS). No moment.js (use date-fns). The agent can't accidentally introduce a dependency your team has already decided against.

Layer Boundary — This is where it gets interesting. Define your architecture layers and which layers can import from which. Domain cannot import from adapter. Service imports only from domain. Handlers must go through services, never access repositories directly. Clean Architecture, Hexagonal Architecture, DDD — enforced automatically, on every change, regardless of who (or what) wrote the code.
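A layer-boundary check is a rule over the import graph: every observed import edge must be in the allowed set. A minimal sketch of the handler → service → domain rule described above (layer names and the allowed-imports table are illustrative):

```python
# Which layers each layer may import from, per the rules described above.
# Names and edges are illustrative, not a fixed Archyl schema.
ALLOWED = {
    "handler": {"service"},
    "service": {"domain"},
    "adapter": {"domain"},  # adapters may depend on domain, never the reverse
    "domain": set(),        # domain imports nothing
}

def layer_violations(import_edges):
    """import_edges: iterable of (from_layer, to_layer) pairs found in code.

    Returns every edge that crosses a boundary the architecture forbids.
    """
    return [(src, dst) for src, dst in import_edges
            if src != dst and dst not in ALLOWED.get(src, set())]
```

A handler that reaches straight into a repository, or a domain package that imports an adapter, shows up as a violation on the very commit that introduces it.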

Contract Compliance — Validate that code endpoints match your API contracts. If you have an OpenAPI spec in Archyl, the conformance engine checks that your handlers actually implement it. No phantom endpoints, no missing routes.
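The core of a contract check is a two-way set difference between the routes in the spec and the routes implemented in code. A sketch, assuming routes have been extracted as (method, path) pairs:

```python
def contract_diff(spec_routes, code_routes):
    """Compare (method, path) pairs from an API spec against handlers in code.

    'phantom' routes exist in code but not in the contract;
    'missing' routes are in the contract but unimplemented.
    """
    spec, code = set(spec_routes), set(code_routes)
    return {
        "phantom": sorted(code - spec),
        "missing": sorted(spec - code),
    }
```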

Dependency Rule — Every import in code must have a corresponding relationship in the C4 model. If Service A suddenly starts calling Service B, but there's no "uses" relationship in the architecture, the rule catches it. Architecture drift becomes visible immediately.
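In essence this is another set comparison: the dependencies observed in code must be a subset of the relationships declared in the model. A sketch using the Service A / Service B scenario above (service names and the declared set are illustrative):

```python
# "uses" relationships declared in the C4 model (illustrative example).
DECLARED_USES = {("service-a", "postgres"), ("service-a", "service-c")}

def undeclared_dependencies(observed):
    """observed: (caller, callee) pairs discovered from imports and
    client constructions in code. Anything not declared in the model
    is architecture drift."""
    return sorted(set(observed) - DECLARED_USES)
```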

Event Channel Compliance — If your system uses Kafka, NATS, or any message broker, this rule validates that producers and consumers in code match the event channels declared in your architecture. No rogue topics, no undeclared consumers.
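The same idea applies to messaging: every producer and consumer found in code must match a declared channel. A sketch, with a hypothetical channel declaration (topic and service names are made up for illustration):

```python
# Event channels declared in the architecture (illustrative).
DECLARED_CHANNELS = {
    "orders.created": {
        "producers": {"order-service"},
        "consumers": {"billing-service"},
    },
}

def channel_violations(observed):
    """observed: (service, topic, role) triples found in code,
    where role is 'producer' or 'consumer'."""
    issues = []
    for service, topic, role in observed:
        decl = DECLARED_CHANNELS.get(topic)
        if decl is None:
            issues.append((service, topic, "rogue topic"))
        elif service not in decl[role + "s"]:
            issues.append((service, topic, f"undeclared {role}"))
    return issues
```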

96 Rules, Zero Configuration

Writing regex patterns is tedious. So we built a catalog of 96 pre-built rules covering 21 technologies, ready to use with one click.

The catalog is organized by category:

Security (11 rules) — No hardcoded passwords, API keys, or secrets. No eval(). No SQL string concatenation. No disabled TLS verification. No CORS wildcard origins. No MD5 or SHA1 for hashing. These aren't suggestions — in an agentic world, they're non-negotiable. An AI agent will happily hardcode a database password in a config file if nobody tells it not to.
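Secret detection is a good example of how simple these checks can be while still catching the most common agent mistake. A deliberately minimal sketch — real catalogs use far more patterns and entropy heuristics than the two shown here:

```python
import re

# Two illustrative secret patterns; production rule sets are much larger.
SECRET_PATTERNS = [
    # hardcoded password / api key / secret assigned a string literal
    re.compile(r"(?i)(password|api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]+['\"]"),
    # weak hashing
    re.compile(r"(?i)\bmd5\s*\("),
]

def scan_for_secrets(source):
    """Return the patterns that matched; an empty list means clean."""
    return [p.pattern for p in SECRET_PATTERNS if p.search(source)]
```

Note that reading the password from the environment passes, while the hardcoded literal fails — which is precisely the behavior you want to force on an agent.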

Infrastructure (22 rules) — No :latest tag in Dockerfiles. Require resource limits in Kubernetes manifests. No privileged containers. No hostNetwork. Require health checks. Pin GitHub Actions versions to commit SHAs. No hardcoded credentials in Terraform. No wildcard IAM policies. When agents generate infrastructure-as-code, these rules ensure the output is production-ready, not demo-ready.

Code Quality (18 rules across 10 languages) — Go: no panic(), no init(), no global mutable state. TypeScript: no any type, no var declarations. Python: no bare except:, no mutable default arguments, no import *. Java: no System.out.println, no empty catch blocks. Rust: no unwrap(), no unsafe. Each rule exists because AI agents consistently make these mistakes when they lack context.

Architecture (5 rules) — Clean Architecture, Hexagonal Architecture, DDD layered, MVC separation, handler-service-repository enforcement. These are the structural guardrails that prevent the most expensive kind of drift — the kind where your architecture slowly mutates into something nobody designed.

Testing (3 rules) — No skipped tests committed. No .only() left in test suites. No TODO/FIXME in production code. Small rules that prevent the kind of sloppy commits that AI agents generate when they're optimizing for "it works" rather than "it's ready."

Agent Context: One Call, Full Knowledge

Rules tell agents what they can't do. Context tells them what they should do.

The get_agent_context MCP tool gives any connected agent a complete architectural briefing in a single call:

  • C4 Model — Every system, container, component, and relationship in the project
  • Architecture Decision Records — Active ADRs with their rationale and decisions
  • Technology Stack — What technologies are in use across the organization
  • Active Guardrails — Every conformance rule, so the agent knows the boundaries before writing code
  • API Contracts — OpenAPI, gRPC, GraphQL specs that define the API surface
  • Event Channels — Kafka topics, NATS subjects, message schemas

The tool also generates a markdown version — a CLAUDE.md file that you can commit to your repository. Any agent that reads it starts with perfect architectural knowledge, without needing to connect to Archyl's MCP server.
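To make the idea concrete, here is a sketch of how a context payload might be rendered into such a markdown file. The field names (`project`, `stack`, `rules`) are hypothetical, not Archyl's actual schema:

```python
def render_claude_md(ctx):
    """Render a (hypothetical) agent-context payload as CLAUDE.md markdown.

    `ctx` field names are illustrative, not Archyl's real response shape.
    """
    lines = [f"# Architecture Context: {ctx['project']}", ""]
    lines.append("## Technology Stack")
    lines += [f"- {tech}" for tech in ctx["stack"]]
    lines.append("")
    lines.append("## Active Guardrails")
    lines += [f"- {rule}" for rule in ctx["rules"]]
    return "\n".join(lines)
```

Committed to the repository, a file like this travels with the code, so even an agent with no MCP connection starts from the same briefing.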

This is the difference between an agent that guesses and an agent that knows. Between code that works and code that belongs.

Why This Matters Now

The AI coding landscape is moving fast. In six months, most professional development will involve some form of AI agent. In a year, multi-agent workflows will be common — different agents working on different parts of the same system simultaneously.

Without governance, this leads to chaos. Not the dramatic kind — the slow, insidious kind where every commit is individually reasonable but the aggregate effect is architectural decay. The kind where you look at your codebase six months later and can't explain why there are three different logging libraries, two ORM patterns, and a service that somehow depends on everything.

Architecture guardrails change this dynamic fundamentally:

For teams adopting AI agents: Your architectural decisions are no longer tribal knowledge that gets lost when an agent writes code. They're encoded as rules that are enforced automatically. The agent gets the same governance that a senior engineer would provide in a code review — but instantly, on every change, without review fatigue.

For platform teams: You can standardize patterns across dozens of services and hundreds of AI-generated changes without manually reviewing each one. Define the rules once, apply everywhere. When a team spins up a new service with an AI agent, it automatically follows your platform's conventions.

For regulated industries: Compliance requirements can be encoded as conformance rules. "All services must have health checks." "No PII in logs." "Encryption at rest required." These become verifiable, not just documented. Audit trails show that every AI-generated change was validated against the rules before it was merged.

For open source maintainers: Contributors (human or AI) who submit PRs get instant feedback on architectural conformance. No more reviewing PRs that violate conventions the contributor didn't know about. The rules document your architecture's expectations as executable constraints.

The teams that will thrive in an agentic world aren't the ones with the best AI agents. They're the ones with the clearest architectural boundaries. The agents are interchangeable. The architecture is not.

Get Started

Agent Hub is available now for all Archyl users. Click the Agent icon in the sidebar. Browse the catalog, add some guardrails, and let your AI agents work within the boundaries you've defined.

Your architecture decisions shouldn't be optional for AI. Now they're not.