AI’s Next Big Leap Isn’t an LLM. It’s MCP - Backed by Google, Anthropic & OpenAI
In This Week in Products, we explore why Google, OpenAI, and Anthropic are quietly betting on the Model Context Protocol (MCP) and what it means for product leaders and founders.
Just weeks after OpenAI announced it would adopt Anthropic’s Model Context Protocol (MCP), Google has followed suit.
In a post on X, DeepMind CEO Demis Hassabis confirmed that Google’s Gemini models will soon support MCP (though he didn’t share a timeline).
With this move, three of the biggest players in AI - OpenAI, Google, and Anthropic - are aligning around a common protocol. And yet, this shift is flying under the radar.
In this edition, we explore why MCP is a big deal for anyone building the future of software and products.
🚨 The Problem Today (Life Before MCP)
Right now, if you want an AI model (like ChatGPT or Claude) to do something useful inside your company's tools, say…
Read your Google Calendar,
Pull data from Salesforce,
Trigger a task in Notion,
Or query an internal database...
You either have to:
Manually feed the data into the chat (copy-paste),
Write custom code or use a plugin,
Or do prompt-based hacks that are brittle and don't scale.
All of these are messy, slow, or insecure.
💡 What is MCP and Why Should You Care?
Think of MCP as the USB of AI agents - a plug-and-play protocol that lets your AI seamlessly access, understand, and act on live business context.
At its core, MCP (Model Context Protocol) is an open standard that enables two-way connections between AI agents and real-world business systems: CRMs, calendars, internal tools, knowledge bases, and more.
With MCP, AI models can:
Fetch real-time data from software tools, databases, and APIs (think tools like Salesforce, Notion, Jira, or internal business databases).
Send instructions back to those systems (like updating a CRM, triggering a workflow, etc).
And, interact with software like a mini agent.
→ Check out 10 wild examples of MCP in action
Most importantly, MCP allows AI models to connect to external systems and tools in a secure, structured, and model-agnostic way.
🛠️ How MCP Works (in plain terms)
Here’s a simplified breakdown:
MCP Server → A piece of software that securely exposes your system’s data and functions (e.g., Salesforce, Jira, Notion). This is the data source side.
MCP Client → The AI application or agent that talks to the MCP server (requests data or sends instructions). This is the user-facing AI agent or interface.
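Under the hood, MCP messages are framed as JSON-RPC 2.0: the client discovers what a server offers via a `tools/list` request, then invokes a capability via `tools/call`. Here is a minimal sketch of those message shapes; the method names follow the MCP spec, but the tool name and arguments below are hypothetical examples, not part of any real server.

```python
import json

# An MCP client first asks a server what it can do ("tools/list"),
# then invokes a specific capability ("tools/call").
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_customer",  # hypothetical tool a CRM server might expose
        "arguments": {"email": "jane@example.com"},
    },
}

# Requests travel to the server as serialized JSON.
print(json.dumps(call_request, indent=2))
```

The key point for product thinking: the protocol layer is uniform. Whether the server wraps Salesforce or an internal database, the client speaks the same two methods.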
Real-World Example:
Imagine your support manager wants an AI assistant to:
Check customer complaints from Zendesk,
Look up those customers in Salesforce,
Draft personalized apology emails, and
Schedule a follow-up task in Asana.
Here’s how MCP helps:
Zendesk and Salesforce run as MCP servers (they expose data).
Your AI assistant (MCP client) connects to both.
It pulls context (tickets, account info),
It generates actions (emails, tasks), and
It sends those actions back to the right tools, automatically.
No copy-pasting. No fragile plugins. Just clean, protocol-based automation.
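The support workflow above can be sketched in code. Everything here is illustrative: `MCPClient`, the server names, and the tool names are hypothetical stand-ins for whatever MCP servers your actual tools expose, and real SDK APIs will differ.

```python
class MCPClient:
    """Minimal stand-in for an MCP client session bound to one server."""

    def __init__(self, server_name):
        self.server_name = server_name

    def call_tool(self, name, arguments):
        # A real client would send a JSON-RPC "tools/call" request here
        # and return the server's result; we just echo the call.
        return {"server": self.server_name, "tool": name, "arguments": arguments}


zendesk = MCPClient("zendesk")
salesforce = MCPClient("salesforce")
asana = MCPClient("asana")

# 1. Pull context: open complaints and the matching CRM record.
tickets = zendesk.call_tool("list_tickets", {"status": "open", "tag": "complaint"})
account = salesforce.call_tool("get_account", {"email": "jane@example.com"})

# 2. Generate an action (an LLM would draft the email body from the context).
draft = {"to": "jane@example.com", "subject": "We're sorry", "body": "..."}

# 3. Push actions back into the right tools.
followup = asana.call_tool("create_task", {"title": "Follow up with Jane"})
print(followup["server"], followup["tool"])
```

Notice the agent never touches a Zendesk- or Asana-specific API shape; every system is reached through the same `call_tool` surface, which is exactly the leverage MCP promises.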
🚀 Who's Already Adopting MCP?
Since Anthropic open-sourced MCP, OpenAI has been building support, Google Gemini is joining in, and companies like Block, Apollo, Replit, Codeium, and Sourcegraph are already integrating it.
What does this mean?
It’s like when browsers standardized around HTML and JavaScript. Suddenly the web exploded with innovation.
MCP could do the same for AI apps; it’s on track to become the de facto operating standard for AI-powered workflows.
Developers only have to build integrations once; there’s no need for custom API plumbing for every model.
It lays the foundation for true AI agents that don’t just chat, but do work.
More companies will feel safe and capable of integrating AI agents.
🧭 What This Means for Product Founders & Leaders
Let’s switch gears now… from the what to the so what.
If you’re a product leader, here’s why MCP deserves a permanent place on your roadmap and in your strategy decks:
🧠 Shift from AI-Enhanced to AI-Native
MCP isn’t about slapping ChatGPT into a chatbot box. It’s about building products that know what your users want, know where the relevant data lives, and know how to act autonomously.
It enables a new kind of product thinking: from UI-driven flows to goal-oriented agents.
🔌 Model-Agnostic = Future-Proof
By integrating MCP, your AI experiences work across OpenAI, Claude, Gemini, and whoever comes next.
You avoid vendor lock-in and gain flexibility to test, optimize, and switch. This is how you build AI infrastructure that scales — not just features that expire.
🧬 Design for Orchestration, Not Just Interfaces
Your product doesn’t need to own the entire interface to deliver value. Instead, you can:
Expose your system as an MCP server,
Let agents and third-party tools plug in and create value,
Join the AI ecosystem as a node, not a silo.
It’s a strategic shift — from being a closed app to a composable platform.
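Concretely, "becoming a node" means your server answers an agent's `tools/list` request with a machine-readable menu of what it may do. The sketch below shows that response shape; the `search_orders` tool and its schema are hypothetical examples of what a product might choose to expose.

```python
import json

def handle_tools_list(request_id):
    """Answer a JSON-RPC 'tools/list' request by advertising our capabilities.

    The single tool below is a hypothetical example; a real product would
    list whichever read/write operations it is willing to let agents use.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "result": {
            "tools": [
                {
                    "name": "search_orders",
                    "description": "Search orders by customer email and status.",
                    "inputSchema": {
                        "type": "object",
                        "properties": {
                            "email": {"type": "string"},
                            "status": {"type": "string"},
                        },
                        "required": ["email"],
                    },
                }
            ]
        },
    }


response = handle_tools_list(request_id=7)
print(json.dumps(response["result"], indent=2))
```

The `inputSchema` is plain JSON Schema, so any MCP-speaking agent, whichever model powers it, can discover and validate calls into your product without bespoke integration work.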
📊 Compete on Context, Not Just Features
In the agent era, context is currency. Whoever gives AI the most complete, real-time understanding of the user’s world wins. Your edge may no longer be in having the most beautiful UI or the fastest API. It’ll be in how rich your context is, and how easily AI can act on it.
📌 What You Should Be Doing Right Now
Here’s a quick action list if you’re a product leader or founder:
✅ To-Do
Identify key systems your users rely on and start mapping what AI needs to see and act on.
Expose your product via MCP Server and become a node in the AI ecosystem.
Build agents as MCP clients. They’ll work across Claude, ChatGPT, and Gemini.
Treat workflows as products. Automations are the new UX!
Start competing on dynamic context (Rich context = sticky user experiences).
💬 Just a quick personal note…
MCP may sound like another developer protocol (and to some extent, maybe it is).
But at a strategic level, it’s so much more.
It’s a bet on the future where AI agents can access, reason about, and act on business environments the way humans do. It’s the quiet infrastructure shift that will define how AI becomes useful and trusted inside products.
As product builders, the opportunity isn’t just to use MCP. It’s to reimagine what your product can be when AI has eyes, ears, and hands across your user’s digital universe.
A video I found intriguing
Microsoft Copilot interviews Bill Gates, Steve Ballmer, and Satya Nadella!
A few weeks ago, I wrote about how AI is reshaping journalism and how we at ProdWrks think about product storytelling and the role of AI.
As Microsoft celebrated its 50th anniversary, a Copilot-moderated conversation between Gates, Ballmer, and Nadella offered a peek into the future of what AI could do. The real star wasn’t the tech honchos in the conversation; it was Copilot, not just moderating, but shaping the conversation.
This wasn’t just a tech flex. It was a signal that AI won’t just assist… it may author, interview, and narrate the next chapter of storytelling, business, and work itself.
Watch the video here, and revisit our earlier piece on AI in journalism here.
And don’t forget to share your thoughts.
📬 I hope you enjoyed this week's curated stories and resources. Check your inbox again next week, or read previous editions of this newsletter for more insights. To get instant updates, connect with me on LinkedIn.
Cheers!
Khuze Siam
Founder: Siam Computing & ProdWrks



