
Unpacking the Microsoft Agent Framework

Thursday, October 9, 2025, 11:00 AM, from InfoWorld
Earlier this year, Microsoft said it would bring its two different agent development platforms together, merging Semantic Kernel and AutoGen. It has now launched that merged platform as the Microsoft Agent Framework. At the time of the announcement, I noted that it made sense; Semantic Kernel was an in-production AI workflow engine that had evolved to support new agentic models, while AutoGen came from a research background and offered new ways to build multi-agent applications without writing code.

Both tools were open source, so it’s not surprising that Microsoft Agent Framework is too, developed in the open on GitHub, with sample code ready for experiments in your own systems or in a ready-to-try Codespace virtual development environment. This approach has allowed it to quickly adopt new agent development practices as they’ve become popular, adding support for Model Context Protocol (MCP), Agent2Agent, and more, as well as allowing you to choose your own AI models and providers.

The agent of workflow

Workflow is still at the heart of the new framework. Building on the strengths of the Semantic Kernel and AutoGen agent implementations, the new framework offers support for workflow orchestration and agent orchestration. Workflow orchestration builds on Semantic Kernel and implements existing business processes and logic, calling a chain of agents as needed, constructing prompts using predefined formats, and populating values using results from earlier calls. Meanwhile, agent orchestration uses AutoGen’s LLM-driven approach to dynamically create chains of agents based on open-ended prompts. Both approaches have their role to play (and each can be embedded in the other).

Agent orchestration is probably the most interesting part of this first release, as it supports several orchestration models suited to different types of workflow. The simplest option is sequential orchestration: agents are called one at a time, with each agent’s response used to build the prompt for the next. More complex scenarios can use concurrent orchestration, where the initial query is sent to several agents at once; they work in parallel, and the workflow moves on to its next phase once all the agents have responded. Many of these orchestration models are drawn directly from traditional workflow processes, much like those used by tools such as BizTalk. The remaining orchestration models are new and depend on the behavior of LLM-based agents.
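The two basic models can be sketched in a few lines. This is a conceptual illustration using plain Python stand-ins for agents, not the Agent Framework’s actual API; each “agent” here is just a function from a prompt string to a reply string.

```python
# Conceptual sketch of sequential vs. concurrent orchestration with
# stub agents. A real agent would call an LLM; these just tag the prompt.
from concurrent.futures import ThreadPoolExecutor

def make_agent(name):
    """A stub agent that labels its input; stands in for an LLM call."""
    def agent(prompt: str) -> str:
        return f"{name}({prompt})"
    return agent

def run_sequential(agents, query: str) -> str:
    # Each agent's response becomes the input for the next agent's prompt.
    result = query
    for agent in agents:
        result = agent(result)
    return result

def run_concurrent(agents, query: str) -> list[str]:
    # The same query fans out to all agents at once; the workflow only
    # moves on after every agent has responded.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda a: a(query), agents))

agents = [make_agent("summarize"), make_agent("review")]
print(run_sequential(agents, "q"))  # review(summarize(q))
print(run_concurrent(agents, "q"))  # ['summarize(q)', 'review(q)']
```

The difference is visible in the outputs: the sequential run nests one agent’s result inside the next, while the concurrent run returns one independent response per agent.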

Orchestration in a world of language models

The first new model is group chat orchestration: the agents in a process communicate with each other, sharing results and updating based on that data until they converge on a single response. The second, hand-off orchestration, is a more evolved version of sequential orchestration in which not only the data passed between agents but also the prompts are updated, responding to changes in the context of the workflow. Finally, there’s support for what’s being called “magentic” workflow. This implements a supervisory manager agent that coordinates a subset of agents, orchestrating them as needed and bringing in humans where necessary. This last option is intended for complex problems that may not have been candidates for process automation using existing non-AI techniques.
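The group chat pattern — share answers each round, update, and stop on agreement — can be reduced to a toy loop. This is a numeric stand-in to show the control flow, not how LLM agents actually exchange messages; the update rule is invented for illustration.

```python
# Toy group chat orchestration: agents exchange their current answers each
# round and update until they all agree (or the round budget runs out).
def group_chat(agents_state, update, max_rounds=20):
    """Run rounds of sharing until all agents converge on one answer."""
    for _ in range(max_rounds):
        if len(set(agents_state)) == 1:
            return agents_state[0]                  # consensus reached
        shared = agents_state[:]                    # everyone sees all answers
        agents_state = [update(a, shared) for a in agents_state]
    return None                                     # no consensus

# Illustrative update rule: each "agent" moves toward the group average,
# a crude proxy for revising an answer after seeing the others' results.
def toward_mean(answer, shared):
    return round((answer + sum(shared) / len(shared)) / 2)

print(group_chat([0, 4, 8], toward_mean))  # 4
```

Note the `None` branch: a real group chat also needs a bounded round count so divergent agents don’t loop forever.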

These approaches are quite different from how we’ve built workflows in the past, and it’s important to experiment before deploying them. You’ll need to be careful with base prompts, ensuring that operations terminate and that if an answer or a consensus can’t be found, your agents will generate a suitable error message rather than simply generating a plausible output.
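The termination concern above amounts to a guardrail pattern: bound the attempts and fail with a structured error rather than accepting an unvalidated answer. A minimal sketch, with a hypothetical validator and stub agent:

```python
# Sketch of the safeguard described above: retry a bounded number of
# times, validate each answer, and surface an explicit error instead of
# letting the agent return a merely plausible output.
def run_with_guardrails(agent, prompt, is_acceptable, max_attempts=3):
    """Call the agent up to max_attempts times, then fail loudly."""
    for attempt in range(1, max_attempts + 1):
        answer = agent(prompt)
        if is_acceptable(answer):
            return {"status": "ok", "answer": answer, "attempts": attempt}
    # A structured error message, not an unvalidated guess.
    return {"status": "error",
            "message": f"no acceptable answer after {max_attempts} attempts"}

answers = iter(["not sure", "42"])
result = run_with_guardrails(lambda prompt: next(answers), "q", str.isdigit)
print(result)  # {'status': 'ok', 'answer': '42', 'attempts': 2}
```

The same shape applies at the orchestration level: cap rounds of agent-to-agent conversation and treat a missed consensus as an error state, not a result.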

Microsoft intends its Agent Framework to act as a bridge between its various agent products, from the low-code Copilot Studio to the high-end Azure AI Foundry (which provides a host for Fabric’s data agents), as well as to agents built using Microsoft 365’s tooling.

Agent-powered business processes, anywhere

You’re not limited to Azure; agents are able to run anywhere, from on-premises to any public cloud, with support for container-based portability. The same goes for connections to services and data. One key element of Microsoft’s approach to agent development is its support for OpenAPI definitions. If a service provides an API description using this standard (or its predecessor, Swagger), then the framework uses it to call the API as part of an agent.
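The OpenAPI idea — derive a callable tool per described operation — is easy to see in miniature. The sketch below uses a hand-rolled spec fragment and a fake HTTP client; it is not the framework’s actual OpenAPI support, just the shape of the technique.

```python
# Illustrative OpenAPI-to-tool mapping: each GET operation in a (minimal)
# spec becomes a named, callable tool an agent could invoke.
def tools_from_openapi(spec, http_get):
    """Build a dict of operationId -> callable from a minimal OpenAPI spec."""
    tools = {}
    base = spec["servers"][0]["url"]
    for path, ops in spec["paths"].items():
        op = ops.get("get")
        if not op:
            continue
        def tool(_path=path):            # default arg binds the path per tool
            return http_get(base + _path)
        tools[op["operationId"]] = tool
    return tools

# A tiny spec fragment and a stubbed HTTP call for demonstration.
spec = {
    "servers": [{"url": "https://api.example.com"}],
    "paths": {"/status": {"get": {"operationId": "getStatus"}}},
}
fake_http = lambda url: f"GET {url} -> 200"
tools = tools_from_openapi(spec, fake_http)
print(tools["getStatus"]())  # GET https://api.example.com/status -> 200
```

A real implementation would also read parameter schemas and other HTTP methods from the spec, but the principle is the same: the API description is enough to generate the agent-facing tool.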

Along with the Agent Framework comes an updated version of Visual Studio Code’s AI Toolkit. This is where things get very interesting. If you’ve got a PC with an NPU (for example, one of the Arm-powered Copilot+ PCs, like recent Surfaces), you can write agents that use local NPU-ready small language models (SLMs) rather than cloud-hosted LLMs.

For now, there aren’t many agent-ready SLMs available with support for tool integration, but the fact that Microsoft is using this approach with its Mu SLM to provide a Settings agent in Windows should encourage the release of NPU-optimized models with support for common runtimes. Hopefully, the public availability of tools like this will add pressure on vendors and encourage Microsoft to give Mu a public release.

Bringing old code forward

Even as Agent Framework brings new capabilities to Microsoft’s AI orchestration platforms, it still needs to support migrating existing code from both Semantic Kernel and AutoGen. For C# code running on Semantic Kernel, you need to move to new .NET namespaces, including the Microsoft.Extensions.AI core building blocks, and make some changes to how the code works with LLMs and with plug-ins. It’s important to remember that you’re now orchestrating agents and working with external tools via APIs and protocols like MCP. These changes mean that plug-ins are replaced by tools (or exposed as MCP servers), and the core Kernel function is now a set of agents.
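The plug-in-to-tool shift is essentially a packaging change: a plain function plus a machine-readable description the agent can act on. A rough, language-neutral sketch of that idea (the schema shape here is illustrative, not the framework’s actual tool contract):

```python
# Sketch of describing an ordinary function as an agent-callable tool:
# name, description, and parameter list derived from the function itself.
import inspect

def as_tool(fn):
    """Wrap a function with the metadata an agent needs to call it."""
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(sig.parameters),
        "invoke": fn,
    }

def get_weather(city: str) -> str:
    """Look up the weather for a city (stubbed)."""
    return f"weather in {city}: sunny"

tool = as_tool(get_weather)
print(tool["name"], tool["parameters"])  # get_weather ['city']
print(tool["invoke"]("Oslo"))            # weather in Oslo: sunny
```

The same metadata is what an MCP server would advertise for the function, which is why exposing a tool as an MCP server is the other migration path the framework offers.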

It’s not an instant migration for Semantic Kernel or for AutoGen, but it is a credible pathway. It’s also essential. Although the existing platforms will still get support, future development is focusing on the new tools. Bringing two platforms into one is a logical choice; the field continues to move fast, and it’s clear that using AI-powered agents to manage context in long business processes is becoming the primary enterprise use case for both LLMs and SLMs outside of managing natural language interfaces.

Building agents and workflows

So, what is it like to build a new Agent Framework application? Working in .NET, you’ll need .NET 9 or later, as well as access to models, either local or hosted. You can use Azure AI Foundry models or GitHub models to get a quick start, installing the Agent Framework using the .NET CLI. Most of what you need is packaged in Microsoft.Agents.AI, with Microsoft.Extensions.AI managing access to models.

Building a new agent is straightforward. Start with a chat client interface that oversees connections to your chosen model. This can then be called by an agent method, providing a name and a base prompt that serves as the agent instructions. Finally, you can run the agent you’ve created, using an asynchronous call.

The client interface is key to standardizing agent development as it provides the necessary abstractions for working with any model. You can swap out cloud and local models or Azure OpenAI and GitHub Models without having to change your agent code. Once you have an agent interface, you can reuse it with different instructions as part of a workflow.
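The value of the client abstraction is that the agent depends only on an interface, so swapping a cloud model for a local one means swapping the client, not the agent. A minimal sketch of that separation, using invented class names rather than the Agent Framework’s actual types:

```python
# The agent depends only on a chat-client interface; concrete clients
# (cloud, local, different providers) are interchangeable behind it.
from typing import Protocol

class ChatClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class Agent:
    def __init__(self, client: ChatClient, name: str, instructions: str):
        self.client = client
        self.name = name
        self.instructions = instructions    # the agent's base prompt

    def run(self, user_input: str) -> str:
        # Instructions are prepended to every request as the base prompt.
        return self.client.complete(f"{self.instructions}\n{user_input}")

class CloudClient:                          # stand-in for a hosted model
    def complete(self, prompt: str) -> str:
        return f"[cloud] {prompt}"

class LocalClient:                          # stand-in for an on-device SLM
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

# The same agent definition works unchanged against either backend.
agent = Agent(CloudClient(), "helper", "Be concise.")
print(agent.run("hi"))
```

Reusing the agent with `LocalClient()` — or with different instructions for a different workflow role — requires no change to the `Agent` class itself.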

This is the point where the Agent Framework’s orchestration tools come into action. The AgentWorkflowBuilder defines the type of orchestration and the agents involved; in a sequential workflow, for example, the output of one agent feeds into the next. Agents can then be attached to tools, which add specific structure to inputs and outputs, or to external services and sources, using existing MCP servers or other APIs.
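The builder idea can be sketched as a small fluent API: declare the agents, build, then run with each output feeding the next. The class below is invented for illustration — it mirrors the role of AgentWorkflowBuilder described above, not its actual API.

```python
# A toy fluent builder for a sequential workflow: add agents, build a
# runnable pipeline, and each agent's output becomes the next one's input.
class SequentialWorkflowBuilder:
    def __init__(self):
        self._agents = []

    def add_agent(self, agent):
        self._agents.append(agent)
        return self                          # return self for fluent chaining

    def build(self):
        def run(query: str) -> str:
            result = query
            for agent in self._agents:
                result = agent(result)       # output of one feeds the next
            return result
        return run

workflow = (SequentialWorkflowBuilder()
            .add_agent(lambda q: q.upper())  # stand-in "draft" agent
            .add_agent(lambda q: q + "!")    # stand-in "polish" agent
            .build())
print(workflow("hello"))  # HELLO!
```

Other orchestration types would slot in at the `build` step — a concurrent builder would fan the query out instead of chaining, while the agent list stays declared the same way.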

One useful aspect of this approach is that you’re building .NET code, so it hosts, runs, and deploys exactly the same way as your existing code. You can even take advantage of approaches like .NET Aspire to build it into distributed applications that work with Azure and other services.

It’s early days for Microsoft’s Agent Framework, but it does seem that Microsoft has managed to pull off the integration of two very different approaches to agent orchestration. You get the best of both worlds—and a pathway for experimentation that lets you build what’s right for your business. With an evolving ecosystem and developing standards, it’s a pragmatic approach that helps future-proof a growing enterprise AI platform.
https://www.infoworld.com/article/4069808/unpacking-the-microsoft-agent-framework.html

