MCP Hit 97 Million Installs. Your Interview Questions Are Now Obsolete.

Model Context Protocol is now foundational infrastructure. Here's what that means for how you assess AI skills in candidates.

By AISA Team · 6 min read
industry · models · assessment · mcp · hiring · workflow-and-application · developer-tools · ai-infrastructure

Ninety-seven million installs. That's where Model Context Protocol (MCP) stands as of this week — supported by every major AI provider, integrated into Claude Code, Codex, and virtually every serious agentic framework. MCP isn't a hot new tool anymore. It's plumbing. And if your hiring process still treats "AI skills" as knowing how to write a good ChatGPT prompt, you're screening for the wrong thing entirely.

What MCP Actually Changes

For anyone who hasn't been tracking this closely: MCP is a standardized protocol that lets AI models connect to external tools, data sources, and services. Think of it as USB-C for AI — a universal interface that means models can read your Figma files, query your databases, trigger your CI/CD pipelines, and orchestrate across services without custom glue code for each integration.
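For the concretely minded: MCP messages are JSON-RPC 2.0, and a client discovers a server's tools via `tools/list` and invokes one via `tools/call`. The sketch below is illustrative only, a stand-in handler rather than any real SDK, and the tool name is hypothetical:

```python
import json

# Illustrative sketch of an MCP-style tool call. MCP frames messages as
# JSON-RPC 2.0; "tools/call" names the tool and passes its arguments.
# The handler below is a stand-in, not actual SDK or server code.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",  # hypothetical tool name
        "arguments": {"sql": "SELECT count(*) FROM users"},
    },
}

def handle_tools_call(req: dict) -> dict:
    """Dispatch a tools/call request to a registered tool function."""
    tools = {
        # In a real server, each entry is a function the model may invoke.
        "query_database": lambda args: {"rows": 42},  # stubbed result
    }
    name = req["params"]["name"]
    result = tools[name](req["params"]["arguments"])
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

response = handle_tools_call(request)
print(json.dumps(response))
```

The point of the universal shape is that the model-side client never needs to know which service sits behind the tool; it only speaks the protocol.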

The Codex update this week made this concrete. Over a million weekly users now have first-class plugin support with MCP baked in, including direct Figma integration. CrewAI is running 12 million daily agent executions with native MCP support. This isn't experimental. This is production infrastructure that your candidates will need to work with on day one.

Here's the implication that matters for hiring: the skill that separates effective AI practitioners from everyone else is shifting from prompt crafting to system composition. Knowing how to talk to a model is table stakes. Knowing how to wire a model into a workflow — choosing which tools to expose via MCP, defining appropriate scopes and permissions, understanding what context a model actually needs versus what will drown it — that's the skill that's becoming critical.

The AISA Dimension This Hits Hardest

Of the five dimensions in the AISA rubric, this shift lands squarely on Workflow & Application (25% of the overall score). This dimension measures whether a candidate can move beyond isolated prompts and think about AI as part of a larger system — selecting the right tools, sequencing tasks, and integrating AI into real processes.

In our first 175 assessments, the pattern we see is telling: candidates who score well on Prompting & Communication often plateau hard on Workflow & Application. They can construct a beautiful prompt in isolation. Ask them how they'd connect that capability to an existing codebase, a design tool, or a data pipeline, and the conversation gets thin fast.

MCP makes this gap wider, not narrower. When the integration layer was bespoke, custom-built for every tool, you could forgive candidates for not knowing the specifics. Now that there's a universal standard with nearly 100 million installs, "I haven't worked with tool integrations" is like a developer saying they haven't used an API. It signals a fundamental gap in how someone thinks about AI in practice.

What Hiring Managers Should Change Right Now

Stop asking about models. Start asking about architectures.

"Which AI model do you prefer?" is a bad interview question in 2026. Models are increasingly interchangeable — Vercel's AI SDK v6 lets you swap providers with minimal friction. The better question: "Walk me through how you'd design an AI-assisted workflow for [specific task in your domain]. What tools does the model need access to? What permissions would you scope? What happens when the model gets it wrong?"

This is exactly the kind of question that surfaces in an AISA assessment. The conversational format means candidates can't just name-drop MCP — they have to reason through how they'd actually use it. Our anti-gaming detection catches candidates who paste in memorized architecture diagrams versus those who can think through tradeoffs in real time.

Assess for composition, not just generation.

The median enterprise now uses over 2,000 cloud apps, many with embedded AI. Your new hire won't be building AI from scratch. They'll be composing AI capabilities across existing systems. That requires a different mental model than "write a prompt, get an output."

In AISA terms, this is the difference between a Tactician (someone who uses AI strategically for specific tasks) and a Conductor or Architect (someone who orchestrates AI across complex, multi-step workflows). With MCP as standard infrastructure, the Conductor-level skill set — understanding how to coordinate multiple AI-powered tools, manage context flow between them, and handle failure modes — moves from "nice to have" to "baseline expectation" for senior roles.

Don't ignore the safety dimension.

MCP makes models more powerful by giving them access to more tools and data. That's exactly why Safety & Responsibility (10% of the AISA score) matters more than its weight might suggest. A candidate who enthusiastically connects a model to production databases via MCP without thinking about data exposure, permission scoping, or audit trails is a liability. The White House National AI Policy Framework released last week reinforces this — sector-specific oversight means your team needs people who think about these boundaries instinctively, not as an afterthought.
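What that scoping discipline looks like in practice can be sketched as a permission gateway sitting between the model and its tools: deny by default, allow only an explicit list, and record every attempt in an audit trail. The names below are hypothetical, not part of MCP itself; this is the reasoning a strong candidate should surface, not an implementation of any real system:

```python
import datetime

# Hypothetical permission gateway in front of a model's tool set.
# Nothing here is MCP SDK code; it illustrates two habits candidates
# should show: scope the model to a minimum allow-list, and log calls.

ALLOWED_TOOLS = {"read_figma_file", "query_analytics_db"}  # read-only scope
audit_log: list[dict] = []

def call_tool(name: str, arguments: dict) -> dict:
    """Check the allow-list and audit the call before dispatching."""
    entry = {
        "tool": name,
        "arguments": arguments,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    if name not in ALLOWED_TOOLS:
        entry["allowed"] = False
        audit_log.append(entry)
        # Deny by default: the model never reaches unscoped tools.
        raise PermissionError(f"tool {name!r} is outside the model's scope")
    entry["allowed"] = True
    audit_log.append(entry)
    return {"status": "dispatched"}  # real dispatch would happen here

call_tool("query_analytics_db", {"metric": "dau"})
try:
    call_tool("drop_production_table", {})  # a write the model must never make
except PermissionError as err:
    print(err)
```

A candidate who can reason through where this gateway sits, what belongs on the allow-list, and who reviews the audit log is demonstrating exactly the instinct the Safety & Responsibility dimension measures.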

In our assessments over the past 30 days — 94 completed — we consistently see that safety reasoning is the dimension where even otherwise strong candidates stumble. They can build the workflow. They don't always think about what could go wrong when the workflow has access to everything.

The Concrete Shift

MCP at 97 million installs means tool integration is no longer a specialized skill. It's literacy. The product managers you hire need to understand what MCP enables so they can spec realistic AI features. The designers need to understand that AI can now pull live data from their design tools. The developers need to think in terms of tool graphs, not just prompt chains.

If you're building an AI skills assessment into your hiring process, make sure it tests for this. A multiple-choice quiz about MCP terminology won't tell you anything useful — that's why conversational assessment matters. You need to hear a candidate reason through a real integration problem, make tradeoffs, and acknowledge what they don't know.

Take a free AI skills assessment yourself. Pay attention to how the Workflow & Application questions feel. If they seem hard, that's the point — they're testing the skill that just became non-negotiable.

Learn more about how AISA assesses developers.