I've been reading Ham on Rye again. Bukowski's semi-autobiographical novel about growing up poor in Los Angeles during the Depression, about a kid who finds himself at odds with every system designed to help him — school, family, social services. The book isn't about triumph. It's about survival through accumulated small dignities. Making something work that was never designed to work for you.
I kept thinking about this while staring at the platform's documentation problem — and while building the MCP ecosystem that would eventually solve it.
The honest shape of the problem
Here's what nobody talks about at conferences: most enterprise codebases are archaeological sites. Layers of decisions made by people who aren't here anymore, documented by systems that no longer exist, maintained by developers who learned the codebase through oral tradition and strategic grep commands.
This platform had the usual symptoms. Documentation lived in Confluence, in README files, in Swagger specs — some generated, some handwritten, none current — and in the heads of a few senior engineers who'd been there since the beginning.
When we started integrating LLM agents into our workflows, the promise was autonomy. Point an agent at a task, let it figure out what tools exist, let it call them. It didn't work that way. Every conversation started with paragraphs of pasted context. "Here's the function signature. Here's what this parameter actually means — ignore the docs, they're wrong. Here's the service that owns this data."
The agents worked. But they worked the way I drive when my GPS is broken: constantly stopping to ask for directions, or guessing my way onto roads that lead nowhere safe.
What Model Context Protocol actually solves
MCP is Anthropic's answer to a question that sounds simple: how do you give an LLM structured access to external capabilities?
The naive answer is "put everything in the system prompt". This works until it doesn't. Context windows have limits. Prompts become unwieldy. Worse, you're making the LLM do the parsing work every time — figuring out which of the fifty functions you've described is actually relevant to this specific query.
MCP inverts this. Instead of describing everything upfront, you expose a discovery layer. Tools, resources, prompts — all described in a schema the LLM can query dynamically. The agent asks "what can I do?" and gets back a structured list. It asks "what's this tool's interface?" and gets a JSON schema. Then it calls what it needs.
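To make that concrete, here is roughly what the discovery exchange looks like on the wire. The method names and result shapes follow the MCP specification; the order-status tool is an invented example, written as TypeScript object literals for consistency with the rest of the post.

```typescript
// What the agent sends and receives during discovery (JSON-RPC over the MCP transport).
// The "get_order_status" tool is a made-up example.

// Agent: "what can I do?"
const listRequest = { jsonrpc: '2.0', id: 1, method: 'tools/list' }

// Server: a structured list of tools, each with a JSON Schema interface
const listResponse = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    tools: [
      {
        name: 'get_order_status',
        description: 'Look up the current status of an order by its ID',
        inputSchema: {
          type: 'object',
          properties: { orderId: { type: 'string' } },
          required: ['orderId'],
        },
      },
    ],
  },
}

// Agent: call what it needs
const callRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: { name: 'get_order_status', arguments: { orderId: 'ORD-42' } },
}
```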
The protocol is elegant. What isn't elegant is getting your actual codebase into that shape.
The generator: teaching a parser to read your mind
The first package I built was @mxconsulting/mcp-gen, a manifest generator that reads source code and extracts tool definitions from JSDoc annotations.
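For orientation, the input the generator consumes looks something like this. The @mcp tag is the hook it scans for; the function, its types, and the exact annotation grammar here are illustrative, not lifted from the real codebase.

```typescript
interface OrderStatus { state: string; updatedAt: string }

/**
 * Look up the current status of an order.
 * @mcp tool
 * @param {string} orderId - The order's unique identifier
 * @param {boolean} includeHistory - Whether to include past state transitions
 * @returns {Promise<OrderStatus>} The order's current status
 */
export async function getOrderStatus(
  orderId: string,
  includeHistory: boolean
): Promise<OrderStatus> {
  // Real implementation elided; the generator only needs the comment and signature.
  return { state: 'shipped', updatedAt: new Date().toISOString() }
}
```

From the comment and the signature, the generator emits roughly one manifest entry per tool: a name, a description, and a parameter schema derived from the @param lines.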
The simple version would just look for @mcp tags and scrape whatever's nearby. But the actual problem is harder. JSDoc doesn't have a standard for MCP tools. Developers write comments inconsistently. Sometimes the function name is on the next line. Sometimes it's an arrow function assigned to a const. Sometimes it's exported directly, sometimes through a re-export.
So the parser needed to be forensic:
```typescript
private extractFunctionName(
  content: string,
  commentEndLine: number
): { lineNumber: number; functionName: string } | null {
  const lines = content.split('\n');

  for (let i = commentEndLine; i < Math.min(commentEndLine + 5, lines.length); i++) {
    const line = lines[i].trim();

    // Pattern 1: function functionName(
    const functionDeclaration = line.match(
      /^\s*(?:export\s+)?function\s+([a-zA-Z_$][a-zA-Z0-9_$]*)\s*\(/
    );
    if (functionDeclaration) {
      return { lineNumber: i + 1, functionName: functionDeclaration[1] };
    }

    // Pattern 2: const functionName = function(
    // Pattern 3: const functionName = (
    // Pattern 4: const functionName = async (
    // Pattern 5: exports.functionName =
    // Pattern 6: functionName: function(
    // ... and more
  }

  return null;
}
```
This isn't glamorous code. It's code that handles reality — the reality that your codebase has six different ways of defining functions, and you need to find them all.
The generator also validates what it finds. JSDoc parameters with type names in the wrong position, PHP-style variable prefixes copied by accident, descriptions that contain syntax rather than explanations — all caught before they become malformed tool definitions that confuse your agent at runtime.
```typescript
private validateJSDocParameter(param: Spec, index: number): {
  isValid: boolean;
  errors: string[];
} {
  const errors: string[] = [];

  // Check for type names being used as parameter names
  const jsDocTypes = ['string', 'number', 'boolean', 'array', 'object'];
  if (param.name && jsDocTypes.includes(param.name.toLowerCase())) {
    errors.push(
      `Parameter ${index + 1}: '${param.name}' appears to be a type name ` +
      `rather than a parameter name. Malformed JSDoc syntax.`
    );
  }

  // ... more validation

  return { isValid: errors.length === 0, errors };
}
```
When validation fails, you get line numbers. You get the original function name. You get a specific error that tells you what to fix. Because debugging "tool call failed" at 2 AM is not how anyone should spend their time.
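As a concrete example of what gets caught, here is the same hypothetical function from earlier, written badly. Both @param lines carry mistakes named above, and the generator would flag each one with the file and line it came from.

```typescript
// Both @param lines below are malformed in ways the validator flags:
// 'string' is a type name sitting where the parameter name should be, and
// '$includeHistory' carries a PHP-style prefix copied over by accident.
/**
 * Look up the current status of an order.
 * @mcp tool
 * @param {string} string - The order's unique identifier
 * @param {boolean} $includeHistory - Whether to include past state transitions
 */
export async function getOrderStatus(orderId: string, includeHistory: boolean) {
  // ...
}
```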
The Kubernetes problem within the problem
Generating manifests is one thing. Getting them to the right place, keeping them current, and doing this across a cluster — that's the infrastructure layer.
Our platform runs on Kubernetes. The manifests needed to be ConfigMaps. The MCP server needed to mount those ConfigMaps. And when documentation changed (because it does change, if you've built a system where changing it is easy), the server needed to pick up the new version without manual intervention.
The traditional approach is to annotate your Deployment with ConfigMap checksums, then use something like Stakater Reloader to watch for changes and trigger rolling updates. This works for static configurations. It's insufficient when your ConfigMaps are generated, numerous, and can change independently of your code deployments.
So I built two components: a mutating admission webhook and a reloader controller.
The webhook: surgery at admission time
A mutating admission webhook intercepts pod creation and modifies the spec before Kubernetes acts on it. Mine looks for pods with specific annotations, finds matching ConfigMaps by label selector, and injects them as volumes.
```typescript
const mutateHandler: express.RequestHandler = async (req, res) => {
  // The AdmissionReview payload arrives as the request body
  const admissionReview = req.body
  const pod = admissionReview.request?.object
  const annotations = pod.metadata?.annotations || {}

  // Check for opt-in annotation
  const mountConfigMapsByLabel =
    annotations['mcp.mxconsulting.net/mount-configmaps-by-label'] === 'true'

  if (!mountConfigMapsByLabel) {
    // Not our concern, allow without modification
    return res.json({ response: { allowed: true } })
  }

  // Process indexed label selectors
  let index = 0
  while (true) {
    const selector = annotations[`mcp.mxconsulting.net/configmap-label-selector.${index}`]
    const mountDir = annotations[`mcp.mxconsulting.net/configmap-base-mount-directory.${index}`]

    if (!selector) break

    const configMaps = await k8sApi.listConfigMapForAllNamespaces({
      labelSelector: selector,
    })

    for (const cm of configMaps.items) {
      // Calculate checksum for change detection
      const hash = crypto.createHash('sha256').update(JSON.stringify(cm.data)).digest('hex')

      // Inject volume and mount via JSON Patch
      patch.push({
        op: 'add',
        path: `/metadata/annotations/checksum~1configmap-${cm.metadata.name}`,
        value: hash,
      })
      // ... volume injection
    }

    index++
  }
}
```
The checksum calculation is the key detail. By storing a hash of each ConfigMap's contents as a pod annotation, we create a declarative trigger for the reloader. When the content changes, the checksum changes. When the checksum changes, the reloader notices.
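From the consumer side, opting in is just metadata on the pod template. The annotation keys below match the webhook code; the selector and mount directory values are invented for illustration, shown as a TypeScript object for consistency with the rest of the post.

```typescript
// The opt-in metadata an MCP server Deployment's pod template carries. The annotation
// keys match the webhook above; the selector and mount directory values are invented.
const podTemplateMetadata = {
  annotations: {
    'mcp.mxconsulting.net/mount-configmaps-by-label': 'true',
    // Indexed pairs: ConfigMaps matching selector .0 mount under base directory .0
    'mcp.mxconsulting.net/configmap-label-selector.0': 'mcp.mxconsulting.net/manifest=true',
    'mcp.mxconsulting.net/configmap-base-mount-directory.0': '/etc/mcp/manifests',
  },
}
```

Every ConfigMap matching that selector ends up mounted under the base directory, and each one leaves behind a checksum annotation the reloader can watch.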
The reloader: smart reconciliation
The reloader controller watches Deployments and ConfigMaps. When a relevant ConfigMap changes, it needs to update the corresponding Deployment's annotations so that Kubernetes sees a spec change and triggers a rollout.
The naive approach is brute force: on any ConfigMap change, check every Deployment, recalculate all checksums. This doesn't scale. With hundreds of ConfigMaps and dozens of Deployments, you spend all your time reconciling.
So the reloader implements smart reconciliation:
```typescript
export async function main(): Promise<void> {
  const kubernetesController = new KubernetesController()
  const deploymentReconciler = new DeploymentReconciler(
    kubernetesController.getAppsApi(),
    kubernetesController.getCoreApi()
  )

  const smartReconciler = new SmartReconciler(
    kubernetesController.getAppsApi(),
    deploymentReconciler,
    memoryManager,
    metricsManager
  )

  const bruteForceReconciler = new BruteForceReconciler(/* ... */)

  // Smart reconciliation handles targeted updates
  // Brute force runs periodically as a safety net
  setInterval(async () => {
    if (ENABLE_SMART_RECONCILIATION) {
      await bruteForceReconciler.reconcileAllRelevantDeployments()
    }
  }, FULL_RECONCILIATION_INTERVAL_MS)
}
```
The smart reconciler tracks which Deployments care about which ConfigMaps. When ConfigMap X changes, it updates only Deployments that mount X. The brute force reconciler runs every few minutes as a safety net — catching any drift that smart reconciliation missed.
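The heart of that targeting is an index from ConfigMap to the Deployments that mount it. The sketch below is illustrative rather than the actual SmartReconciler internals, which also handle the concerns described next.

```typescript
// Minimal sketch of targeted reconciliation: an index from ConfigMap to the
// Deployments that mount it. Names and shapes here are illustrative.
type Key = string // "namespace/name"

class ConfigMapIndex {
  private mounts = new Map<Key, Set<Key>>()

  // Called while observing Deployments: record which ConfigMaps each one mounts
  track(deployment: Key, configMaps: Key[]): void {
    for (const cm of configMaps) {
      if (!this.mounts.has(cm)) this.mounts.set(cm, new Set())
      this.mounts.get(cm)!.add(deployment)
    }
  }

  // Called when a ConfigMap changes: only these Deployments need new checksums
  affectedBy(configMap: Key): Key[] {
    return [...(this.mounts.get(configMap) ?? [])]
  }
}
```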
Both reconcilers share infrastructure: rate limiting (so we don't hammer the API server), caching (so we don't refetch unchanged data), and memory management (so long-running controllers don't leak).
The server: where it all comes together
The MCP server itself — @mxconsulting/mcp — loads manifest files from its mounted ConfigMap volumes, parses them, and exposes everything over HTTP (or stdio for local development).
```typescript
export function setupServerWithManifests(server: McpServer) {
  const manifests = loadManifests()

  for (const manifest of manifests) {
    // Register manifest-level resource (summary of what's in this manifest)
    registerManifestResource(server, manifest)

    // Register individual tool resources (JSON descriptions)
    registerToolResources(server, manifest)

    // Register actual callable tools
    registerMcpTools(server, manifest)

    // Register static resources and templates
    registerManifestResources(server, manifest)
    registerManifestResourceTemplates(server, manifest)
  }

  // Category-based organization
  registerCategoryResources(server)
  registerOverviewResources(server)

  // Prompts and completion handlers
  registerPrompts(server)
  registerCompletionHandler(server)
}
```
The server exposes not just tools, but resources about tools. An LLM can fetch a category overview to understand what's available before diving into specifics. It can read a manifest summary to understand the shape of a subsystem. The protocol supports this naturally; the server makes it useful.
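To show the shape of one of those register helpers, here is a sketch of turning a manifest entry into both a callable tool and a readable resource about that tool. It assumes the standard MCP TypeScript SDK; the manifest-entry type and helper name are invented, not the actual @mxconsulting/mcp code.

```typescript
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'
import { z } from 'zod'

// Hypothetical manifest-entry shape; the real manifest format is richer.
interface ManifestTool {
  name: string
  description: string
  parameters: { name: string; type: 'string' | 'number' | 'boolean'; description: string }[]
}

// Turn one manifest entry into a callable tool plus a resource describing it.
export function registerManifestTool(server: McpServer, tool: ManifestTool) {
  // Build a Zod shape from the generated parameter list
  const shape: Record<string, z.ZodTypeAny> = {}
  for (const p of tool.parameters) {
    const base =
      p.type === 'number' ? z.number() : p.type === 'boolean' ? z.boolean() : z.string()
    shape[p.name] = base.describe(p.description)
  }

  // Callable tool; the placeholder handler just echoes its arguments back
  server.tool(tool.name, tool.description, shape, async (args) => ({
    content: [{ type: 'text', text: JSON.stringify(args) }],
  }))

  // Resource about the tool, so an agent can read the definition before calling it
  server.resource(`${tool.name}-definition`, `tool://${tool.name}`, async (uri) => ({
    contents: [{ uri: uri.href, text: JSON.stringify(tool, null, 2) }],
  }))
}
```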
What Bukowski understood about systems
In Ham on Rye, young Henry Chinaski is assigned to write about President Hoover's visit to Los Angeles. He can't attend — Saturdays are consumed by his father's lawn ritual, where missing a single blade of grass means a beating. So he invents the whole thing: the motorcade, the secret service agents, the sun breaking through clouds as the President speaks. His teacher reads the essay to the class, calls it remarkable. Later she asks if he was actually there. He admits he wasn't. "That makes it all the more remarkable," she says. Walking home, Henry realizes: "So, that's what they wanted: lies. Beautiful lies. That's what they needed."
Enterprise documentation is full of beautiful lies. Swagger files that look complete but haven't been updated in eighteen months. README files that describe an architecture from three versions ago. Confluence pages that exist to prove documentation exists. Everyone pretends the system works because admitting otherwise means admitting nobody has time to fix it.
The system I built doesn't solve these problems through policy or discipline. It solves them through automation. Write a JSDoc comment above your function — the manifest generator finds it. Deploy a new ConfigMap — the webhook mounts it and the reloader propagates changes. Point an LLM at the server — it discovers everything automatically.
There's no dignity in a broken documentation system. There's also no dignity in pretending you can fix it through willpower. The dignity is in building something that survives contact with reality.
The result
Our LLM agents now discover available tools at runtime. They understand parameter schemas because the schemas are generated from actual type annotations. When we add a new capability, we add a JSDoc comment and redeploy — the agents pick it up automatically.
Was it worth four weeks of TypeScript compilation errors, webhook debugging, and Kubernetes API mocking? The honest answer is: ask me in a year, when I've forgotten the pain and only remember the outcomes.
But right now, at 3 AM, watching an agent autonomously discover and use tools written years ago without any extra prompting... Yeah, I think it was worth it.
Some systems are designed to be elegant. Some are designed to survive. The best ones manage both. I don't know which category this falls into yet.
But it works. And that's something.