What’s next for Microsoft’s Semantic Kernel?
Thursday, February 27, 2025, 10:00 AM, from InfoWorld
At the heart of Microsoft’s AI application development strategy is Semantic Kernel, an open source set of tools for managing and orchestrating AI prompts. Since its launch as a way to simplify building retrieval-augmented generation (RAG) applications, it has grown into a framework for building and managing agentic AI.
At Ignite in 2024, Microsoft announced several new features for Semantic Kernel, positioning it as its preferred tool for building large-scale agentic AI applications. That announcement formed the basis of Semantic Kernel’s 2025 road map, with the first elements already being delivered.

Building agentic workflows with Agent Framework

One of the more important new features in Semantic Kernel is the Agent Framework, which will soon move out of preview into general availability. This will ensure a stable, supported set of tools ready to deliver production-grade enterprise AI applications. The Agent Framework will form the basis of Semantic Kernel’s planned integration with Microsoft Research’s AutoGen, along with the release of a common runtime for agents built using both platforms. The Agent Framework is intended to help build applications around agent-like patterns, offering a way to add autonomy to applications and to deliver what Microsoft calls “goal-oriented applications.” This is a good definition of what modern agentic AI should be: a way of using AI tools to construct and manage a workflow based on a user request. It then allows multiple agents to collaborate, sharing data and managing what can be thought of as long transactions that work across many different application APIs and endpoints. Available as an extension to the base Semantic Kernel, the Agent Framework is delivered as a set of .NET libraries that help manage human/agent interactions and provide access to OpenAI’s Assistant API. It’s intended to be controlled via conversation, though it’s easy enough to build and run agents that respond to system events rather than direct human actions (and to add human approval steps as part of a dynamic workflow). This lets you focus on using agents to manage tasks. Semantic Kernel’s agent features are designed to extend the concepts and tools used to build RAG-powered AI workflows.
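As a rough illustration of that collaboration model, the idea of a shared conversation that agents take turns acting on until a goal is met can be sketched in plain Python. All class and function names here are invented for illustration; they are not Semantic Kernel’s SDK.

```python
# Minimal sketch of the multi-agent collaboration pattern: agents take
# turns responding to a shared conversation until a termination
# condition fires. Names are illustrative, not the Semantic Kernel API.

class Agent:
    def __init__(self, name, respond):
        self.name = name
        self.respond = respond  # callable: conversation history -> reply text

class AgentGroupChat:
    def __init__(self, agents, is_complete, max_turns=10):
        self.agents = agents
        self.is_complete = is_complete  # termination check on the history
        self.max_turns = max_turns
        self.history = []

    def run(self, task):
        self.history.append(("user", task))
        for turn in range(self.max_turns):
            agent = self.agents[turn % len(self.agents)]  # round-robin
            reply = agent.respond(self.history)
            self.history.append((agent.name, reply))
            if self.is_complete(self.history):
                break
        return self.history

# A writer drafts and a reviewer approves: two cooperating "agents".
writer = Agent("writer", lambda h: "draft v%d" % len(h))
reviewer = Agent("reviewer",
                 lambda h: "approved" if "v3" in h[-1][1] else "revise")
chat = AgentGroupChat([writer, reviewer],
                      is_complete=lambda h: h[-1][1] == "approved")
result = chat.run("write a summary")
```

In the real framework, the respond callables would be LLM-backed agents managed by Semantic Kernel, and the completion check would be one of the framework’s termination strategies rather than a lambda.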
As always, Semantic Kernel is how both the overall orchestration and the individual agents run, managing context and state as well as handling calls to AI endpoints via Azure AI Foundry and similar services. Building a Semantic Kernel agent starts with an Agent class; you then use an Agent Chat to support interactions between your agent workflow and the AI and API endpoints used to complete the current task. If multiple agents need to be called, you can use an Agent Group Chat to manage these internal prompts, with Semantic Kernel letting the agents interact and pass results between one another. An Agent Group Chat can be dynamic, adding and removing participant agents as needed. You’re able to build on existing Semantic Kernel techniques, too. For example, agents can use existing or new plug-ins as well as call functions. Working with external applications is key to building enterprise agents, as they need to be able to dynamically generate workflows around both humans and software. Having Semantic Kernel manage agents ensures you can manage both instructions and prompts for the large language model (LLM) you’re using, as well as control access to the APIs. Your code can manage authorization as necessary and add plug-in objects. Your plug-ins will manage API calls, with the agent constructing queries by parsing user inputs.

No-code agent development with AutoGen

Semantic Kernel’s integration with AutoGen builds on its Process Framework. This is designed to manage long-running business processes and works with distributed application frameworks such as Dapr and Orleans. Workflows are event-driven, with steps built around Semantic Kernel Functions. A process isn’t an agent, as it’s a defined workflow with no self-orchestration. However, a step can contain an agent if it has well-defined inputs and outputs.
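The event-driven, parallel nature of such process steps can be sketched without the framework using plain asyncio. The step names below are invented for illustration; in the Process Framework itself, steps wrap Semantic Kernel Functions.

```python
import asyncio

# Sketch of a fan-out/fan-in process: one input event triggers several
# steps in parallel, and a final step depends on all of their outputs.
# Step names are invented; real steps would wrap Semantic Kernel Functions.

async def extract_entities(doc):
    await asyncio.sleep(0)  # stand-in for an LLM or API call
    return f"entities({doc})"

async def summarize(doc):
    await asyncio.sleep(0)
    return f"summary({doc})"

async def classify(doc):
    await asyncio.sleep(0)
    return f"label({doc})"

async def run_process(doc):
    # Fan out: the three steps run concurrently, not sequentially.
    entities, summary, label = await asyncio.gather(
        extract_entities(doc), summarize(doc), classify(doc))
    # Fan in: the final step consumes multiple inputs.
    return {"entities": entities, "summary": summary, "label": label}

result = asyncio.run(run_process("report.txt"))
```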
Processes can take advantage of common patterns, and there’s no reason to have functions operate sequentially; they can run asynchronously in parallel, allowing you to have flows that fan out or that depend on multiple inputs. The two platforms converge in their use of Orleans, which ensures they have similar approaches to working in event-driven environments. This is an important foundation: Orleans’ move from being a Microsoft Research project to being the foundational distributed computing architecture for modern .NET has been key to wider uptake. Using AutoGen as part of its agent tooling will help deliver better support for multi-agent operations in Semantic Kernel. As AutoGen has been a research project, there’s still some work necessary to bring the two platforms together, though it supports both .NET and Python, much like Semantic Kernel. Certainly AutoGen simplifies the process of building agents, with a no-code GUI and support for a variety of LLMs, including OpenAI (and Azure OpenAI) models. There’s also support for Ollama, Azure Foundry-hosted models, Gemini, and a Semantic Kernel adapter that lets you use Semantic Kernel’s model clients. Getting started with AutoGen requires the core AutoGen application and a model client. Once installed, you can build a simple agent with a handful of lines of code. Things get interesting when you build a multi-agent application or, as AutoGen calls it, a team. Teams are brought together in a group chat where users give agents tasks. AutoGen comes with prebuilt agents that can be used as building blocks, such as a user proxy, a web surfer, or an assistant. You can quickly add your own extensions to customize actions within AutoGen’s layered framework. This provides specific roles for each element of an agent, starting with the core API, which provides tools for event handling and messaging, giving you an asynchronous hub for agent operations. Above that is the AgentChat API.
This is designed to help you quickly build agents using prebuilt components and your own code, as well as tools for handling instructions and prompts. Finally, the Extensions API is where you can add support for both new LLMs and your own code. Much of the documentation focuses on Python. Although there is a .NET implementation of AutoGen, it’s missing documentation for key features such as AgentChat. Even so, .NET is likely the best tool for building agents that run across distributed systems, using its support for .NET Aspire and, through that, frameworks like Dapr.

Building multi-agent teams in AutoGen Studio

AutoGen Studio is perhaps the most interesting part of the platform and would work well as part of the Semantic Kernel Visual Studio Code extension. It installs as a local web application and provides a place to construct teams of agents and extensions, with the aim of building a multi-agent application without writing any additional code (though you can use it to edit the generated configuration JSON). It builds on top of AutoGen’s AgentChat service. Applications are constructed by dragging components onto the Studio canvas and adding termination conditions. This last option is important: It is how an agent “knows” it has completed a task and needs to deliver results to either a user or a calling function. Agents can be further configured by adding models and extensions, for example, using an extension to deliver a RAG query against enterprise data. Multiple model support helps you choose a suitable AI model for an agent, perhaps one that’s been fine-tuned or that offers multimodal actions so you can work with images and audio as well as text prompts. Nodes in a team can be edited to add parameters where necessary. Under the hood, AutoGen is a declarative agent development environment, with a JSON description of the various elements that go into making an agent.
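As a purely hypothetical sketch (field names invented for illustration, not the actual AutoGen schema), such a declarative description might look something like this:

```json
{
  "team": {
    "type": "group_chat",
    "agents": [
      {"name": "assistant", "model": "gpt-4o", "tools": ["web_search"]},
      {"name": "user_proxy", "mode": "human_input"}
    ],
    "termination": {"any_of": [
      {"text_mention": "TERMINATE"},
      {"max_messages": 20}
    ]}
  }
}
```

The point of the declarative approach is that the team’s shape lives in data like this rather than in orchestration code.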
You can switch to a JSON view to make changes, or even convert AutoGen AgentChat Python code to JSON and edit it in Studio. To simplify building new applications, Studio offers a gallery where agents and other components can be shared with other users. Once you’ve built an agent, you can evaluate it inside Studio’s playground before building it into a larger process. Using declarative programming techniques to build agent teams makes sense; often the knowledge needed to construct elements of a workflow or business process is embedded in the process itself, passed from worker to worker. If we’re to build AI-based agents to automate elements of those processes, who better to design those tasks than the people who know exactly what needs to be done? There’s a lot yet to come for Semantic Kernel in 2025. Now that we’re coming out of the experimental phase of enterprise AI, where we used chatbots to learn how to build effective prompts, it’s time to use those lessons to build workflow tools better suited to the multi-channel, multi-event processes that form the backbone of our businesses. Semantic Kernel is starting to step out into the enterprise IT world. It’ll be interesting to watch how it and AutoGen take advantage of the skills and knowledge that exist across our organizations, beyond IT and development teams.
https://www.infoworld.com/article/3833938/2025-semantic-kernels-big-year.html