
Using the Model Context Protocol in Azure and beyond

Thursday, May 1, 2025, 11:00 AM, from InfoWorld
One of the biggest issues with large language models (LLMs) is working with your own data. They may have been trained on terabytes of text from across the internet, but that only gives them a way to predict and generate text in response to your prompts. If we’re to trust their output, we need to use them as intended: as a natural language interface to our applications and data.

There are many different ways to link LLMs to data, from creating your own embeddings for retrieval-augmented generation (or graph RAG, which uses graph databases to expose deeper relationships in your data) to using LLMs to summarize and narrate data returned by OpenAPI calls constructed in response to user prompts. Other approaches include the plug-in model used by OpenAI’s ChatGPT. And then there is the Model Context Protocol, or MCP.

The Model Context Protocol

At the end of 2024, Anthropic released the specification of the Model Context Protocol, intended to standardize connections between LLMs and your own applications and data. The name is important: It emphasizes that AI applications require context if they’re to deliver coherent outputs. Simply expecting a chat prompt to deliver sensible outputs is at best optimistic, and at worst dangerous. If we’re going to build semi-autonomous agents that build their own workflows around our data, we need a reliable way to deliver that data to them.

MCP is an open source project with SDK implementations for most common languages and a GitHub repository that includes documentation for anyone wanting to implement or use an MCP server. The development team describes it as a “USB-C port for AI applications” because it standardizes connections to many different data sources. Because these connectors build on a common standard, there are already implementations for existing projects, working with many different LLMs and inference providers.

The architecture used by MCP is familiar, taking its cues from classic client/server implementations, where a broker translates MCP requests into local or remote requests. You can think of MCP as playing the same role as an interface definition language such as CORBA’s IDL: an interoperability layer that lets you quickly swap information sources or LLM applications as needed. It uses JSON-RPC connections, so access can be controlled at an individual user level using tools such as Azure API Management.
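
Under the hood, an MCP client discovers a server’s capabilities with a tools/list request and invokes them with tools/call, all carried as JSON-RPC 2.0 messages. As a sketch, a call to a hypothetical Azure tool might look like this on the wire (the tool name and arguments here are illustrative, not taken from any particular server):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_resource_groups",
    "arguments": { "subscription": "<your-subscription-id>" }
  }
}

The server replies with a JSON-RPC result containing the tool’s output, which the client hands back to the LLM as context for its next completion.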

With MCP, it’s possible to think about building generic AI interfaces to your code, so it’s not surprising to see multiple MCP implementations appearing across Microsoft’s various AI development platforms, from inside the Semantic Kernel model orchestration tool to MCP servers that work with both Azure OpenAI and Azure AI Foundry. Microsoft is also adding tools to Azure API Management to control access to data using user credentials.

The Azure MCP Server

One early implementation of MCP on Microsoft’s platform is the open source Azure MCP Server, which recently entered public preview. It provides a common broker for AI access to key Azure services. Like many recent Azure projects, it’s open source, with code available on GitHub. It offers access to much of the Azure platform, including databases, storage, and tools such as the Azure CLI.

Support for the Azure CLI (and the Developer CLI) is interesting, as it allows MCP-powered agents to drive Azure directly, acting as an operator. This lets you build agents that provide a natural language self-service interface to Azure, for example, taking a description of an infrastructure and then building the ARM templates needed to deploy it. You can even imagine a multimodal agent that analyzes a picture of a whiteboard sketch, creates a description of the resources needed to implement it, and then deploys them so you’re ready to build code. Other system administration features available through the Azure MCP Server include listing current resource groups and using KQL to query Azure Monitor logs.

Using the Azure MCP Server with GitHub Copilot Chat

As it’s based on MCP, this new server works with any AI tool that supports the protocol, for example, GitHub Copilot’s agent mode. Simply add the server to your tenant and start asking questions via Copilot, either directly or through the Visual Studio Code integration. In practice, this last option is an effective way to learn how to use MCP and to develop the prompts for your own MCP-based AI applications.

Microsoft has yet to release its own MCP tooling for its languages, so you’ll need the official SDKs to write your own code. With TypeScript, C#, and Python all supported, you should have most of the tools you need to start writing your own Azure MCP agents. You can experiment from inside Visual Studio Code, using your existing Azure credentials.

The server runs on your own development PC and needs Node.js. You install it from the project’s GitHub repository directly into VS Code. Once it’s installed, make sure you have both the GitHub Copilot and GitHub Copilot Chat extensions configured to use the experimental agent mode (do this from the VS Code settings tool). Next, open the GitHub Copilot chat pane and switch to agent mode. From the tools drop-down, check that the Azure MCP Server is installed. You can now ask it questions, for example, “List my Azure subscriptions.”
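
For reference, a minimal VS Code configuration looks something like the following. This assumes the npm package name and launch command documented in the project’s README at the time of writing, so check the repository for the current values:

{
  "servers": {
    "Azure MCP Server": {
      "command": "npx",
      "args": ["-y", "@azure/mcp@latest", "server", "start"]
    }
  }
}

Save this as .vscode/mcp.json in your workspace and VS Code will offer to start the server the next time you use agent mode.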

The result is a useful tool for anyone working with Azure, but it’s not only for Copilot. The Azure MCP Server will install anywhere Node.js runs, ready for you to build it into your own agents.

MCP in Azure AI Foundry

Microsoft is quickly building out a fleet of other MCP tools, exposing existing functionality through MCP or making it consumable in your own agentic applications. It’s a rapid rollout, too; tools for Copilot Studio’s no-code agent development were announced while I was working on this article.

Azure AI Foundry is Microsoft’s main development platform for at-scale AI application development, so it’s not surprising that it, too, is getting an MCP server, one designed to work alongside Azure’s AI Agent Service. It’s intended to link agents running in Azure AI Foundry with clients running as part of other AI applications.

It’s an interesting service that lets you quickly repurpose existing AI code and services and link them to new applications. With services like Fabric exposing their own agent features as AI Agent Service endpoints, it quickly connects AI applications to core line-of-business data, providing the grounding needed to reduce the risks of hallucinations and errors.

Once installed, the server provides a set of MCP actions that connect to agents and send them queries; it can also list available agents or use a default agent for a specific task. There’s support for conversation threads, providing a basic semantic memory so agents can hold contextual conversations. You will need the Azure AI Agent Service agent IDs to call agents over MCP.
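
If you’re writing your own client, the official MCP Python SDK handles the protocol details. The sketch below launches the server over stdio, lists its actions, and queries an agent; the launch command and the tool name are assumptions for illustration, so check the project’s documentation for the real values:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command for the AI Foundry MCP server; see its README
server = StdioServerParameters(command="python", args=["-m", "azure_agent_mcp_server"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover the server's MCP actions
            print([tool.name for tool in tools.tools])
            # Send a query to an Azure AI Agent Service agent by ID
            # ("query_agent" is an illustrative tool name)
            result = await session.call_tool(
                "query_agent",
                {"agent_id": "<your-agent-id>", "query": "Summarize open support tickets."},
            )
            print(result.content)

asyncio.run(main())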

The server is written in Python and installs via pip, alongside the Azure CLI; there’s also a TypeScript version if you prefer. Like the Azure MCP Server, it runs outside of the AI Foundry environment, so it can be installed on a development PC or as part of a cloud-hosted application in its own container or VM, with support for Windows, macOS, and Linux.

Using MCP servers from Semantic Kernel AI applications

Because MCP is an open standard, the server should work with any client. The GitHub repository includes instructions on how to add a connection using Anthropic’s Claude Desktop, but the real value comes when building your own agent workflows in Semantic Kernel.

Microsoft has delivered sample code that shows how to build MCP support into a Semantic Kernel orchestration, treating an MCP server as a kernel plug-in that works with familiar function calls. The same integrations can be wrapped as agents and accessed as needed. Using MCP from Semantic Kernel is still a work in progress, but it already fits nicely alongside the existing feature set, requiring very little additional code to expose a server’s MCP tools to your AI applications.
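
In the Python samples, for example, an MCP server is wrapped as a plug-in and registered with the kernel, after which its tools can be called like any other kernel function. Here’s a minimal sketch based on the preview MCPStdioPlugin connector; class and parameter names may change while this integration is a work in progress:

import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.mcp import MCPStdioPlugin

async def main() -> None:
    kernel = Kernel()
    # Launch the Azure MCP Server over stdio and expose its tools as kernel functions
    async with MCPStdioPlugin(
        name="azure",
        description="Azure MCP Server",
        command="npx",
        args=["-y", "@azure/mcp@latest", "server", "start"],
    ) as azure_plugin:
        kernel.add_plugin(azure_plugin)
        # The model can now invoke any Azure MCP tool via normal function calling
        ...

asyncio.run(main())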

Tools like MCP are a crucial element of a modern AI stack, providing a common way to build discoverable interfaces to both local and remote applications. Once defined, MCP tools are easy to call: A server provides a list of its available tools, and MCP gives LLMs a standard way to invoke those tools and use their outputs. The technique goes a long way toward providing a universal grounding mechanism for AI applications, working with standard APIs, database queries, and AI agents alike.
https://www.infoworld.com/article/3975058/using-the-model-context-protocol-in-azure-and-beyond.html
