I had the privilege of spending a day at the Code & Cloud Conference hosted by RVA Tech, listening to teams describe the velocity with which they build, ship, and automate everything while somehow keeping testing and bug-catching apace. A common theme throughout the day was the mistake of equating velocity with efficiency. We know engineers: give them a metric to hit, and they will find all sorts of ways to game the system to hit those markers. When speed becomes the only metric, load-bearing tech debt accumulates, quality and testing fall by the wayside, and the process itself becomes burdened by the very practices meant to promote velocity.
With the advent of AI, this imbalance gets amplified. We now have the ability to generate code, integrate tools, and connect systems faster than ever before. As one presenter joked, “We can screw up faster than ever before!” AI magnifies every facet of an environment, the good and the bad alike. Healthy systems become more powerful and efficient, while chaotic ones become unmanageable at that increased speed.
Anthropic’s recent blog post, "Code execution with MCP: Building more efficient agents," should be read not just as a technical update but as a strategic one for most MCP users. MCP, the Model Context Protocol, gives AI agents access to external tools. An agent can now reach hundreds or even thousands of services, coordinating workflows that used to require complex glue code. But all that access has a cost: every tool definition competes for space in a limited context window. As agents connected to more tools through MCP, they became slower, more expensive, and ultimately less capable at standard tasks. The AI agent ran into the same velocity trap our human engineers face.
Anthropic’s solution was to move tool interaction into a code execution environment, where the agent loads each tool definition only when it needs it. Instead of pushing thousands of tool schemas into the model’s context, we can give the agent a filesystem-like structure where it discovers what is available, loads only what it needs, and executes targeted logic. In Anthropic’s own example, this shift reduced the context requirement from 150,000 tokens to roughly 2,000, a 98.7% reduction. The agent no longer needs to understand everything upfront. It can navigate the environment deliberately and pull in only the relevant information for each step.
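To make the pattern concrete, here is a minimal Python sketch of the idea: tool definitions live on disk in a filesystem-like layout, the agent first sees only a cheap directory listing, and a full schema is read into context only on demand. All names here (the `servers/` directory, the JSON files, the helper functions) are illustrative assumptions for this post, not Anthropic's actual API.

```python
# Sketch of on-demand tool loading, the opposite of pushing every
# schema into the model's context upfront. Paths and names are
# hypothetical examples, not a real MCP implementation.
import json
from pathlib import Path

TOOLS_DIR = Path("servers")  # e.g. servers/salesforce/update_record.json

def discover_tools() -> list[str]:
    """Cheap step: return tool names only, a few tokens each."""
    return sorted(p.stem for p in TOOLS_DIR.glob("**/*.json"))

def load_tool(name: str) -> dict:
    """Deferred, expensive step: read one full schema on demand."""
    matches = list(TOOLS_DIR.glob(f"**/{name}.json"))
    if not matches:
        raise FileNotFoundError(f"No tool definition named {name!r}")
    return json.loads(matches[0].read_text())

# The agent scans the small listing first, then pulls in only the
# one or two definitions relevant to the current step, instead of
# holding thousands of schemas in context at once.
```

The design choice being modeled is simple: discovery is decoupled from loading, so context cost scales with the tools a task actually uses rather than with everything the agent could theoretically reach.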
Human engineers operate in the same way. They don’t read the whole codebase to fix a bug; they go to the relevant files. Engineers don’t process the entire database; they query it to retrieve the specific information they are looking for. And Anthropic’s update essentially teaches agents to work the way humans do at their best.
Tools are meant to enhance human beings, not mimic their worst habits. MCP without structure or intention produced agents that were more expensive and less capable. Allowed to connect to everything at once, they became scattered and overloaded, mirroring exactly what happens when humans take on too many priorities or carry too much tech debt. The problem wasn’t a lack of power; it was an excess of unstructured access. And that’s what AI tends to do: it doesn’t magically correct our bad habits; it amplifies them.
The conference conversations kept circling back to the same idea: people must be at the center. If we want AI to elevate human capability, we need to design it to enhance the way we already work at our best, because speed alone doesn’t create progress. Without guardrails, speed just magnifies the mess. The real promise of AI is not acceleration—it’s alignment.
AI should help us move with intention, not haste. It should help us build better habits, not race deeper into our old ones. Tools exist to support us, not to accelerate the dysfunction we already struggle with. Anthropic’s MCP update is a practical example of AI design that acknowledges this: it reins in overload rather than celebrating it. It forces the agent to work intentionally, retrieve only the context it needs, and operate in a way that preserves clarity rather than destroying it. In a sense, Anthropic created a healthier work environment, and the result is a healthier system.
If we want AI to truly elevate human capability, we must build it with the same discipline, clarity, and intentionality we expect from healthy engineering teams. Speed alone doesn’t create progress; alignment does. AI should help us move with focus, not frenzy—amplifying our strengths rather than accelerating our dysfunctions.
That’s the kind of transformation we help achieve at Ippon Technologies: creating architectures, workflows, and AI strategies that keep people at the center and keep systems healthy as they scale. If you're exploring how to bring AI into your engineering culture in a way that strengthens it rather than strains it, let's talk. We’d love to help you build something durable, human-centered, and future-proof.
Want to learn more? Check out our latest eBook and AI Journey in Banking Infographic.