Using Valkey on Azure and in .NET Aspire
Thursday, October 16, 2025, 11:00 AM, from InfoWorld
Big changes to the license used by the popular open source key/value store Redis prompted a fork, with the launch of Valkey. In the time since that fork in March 2024, the two projects have diverged. The Valkey project is concentrating on performance improvements, with recent releases redesigning key low-level features to reduce memory usage and increase throughput.
While core features stay similar, there are enough differences between the two stores to make it worth choosing the one best suited to your project. Key/value stores are an important development tool for cloud-native applications, especially when you need to store state so that user sessions are not interrupted when Kubernetes applications scale up or down, or as a cache for commonly accessed data.

Version 9.0 of Valkey is due to launch very soon, but most users are still focused on the current 8.1 release. You can read the road map for Valkey on GitHub, where the Version 9.0 dashboard shows that only a handful of work items remain to be completed before it's ready for general release.

Valkey in Azure Kubernetes Service

Microsoft only provides managed Redis instances as part of its Azure platform, with its own billing structures. If you want to run Valkey, you're going to need to roll your own install, at least for now. However, it's clearly in demand: Microsoft has pages showing how to implement a Valkey cluster for use with Azure Kubernetes Service (AKS), alongside other open source projects like Apache Airflow and Apache Kafka. Microsoft's documentation differs from Valkey's own Kubernetes documentation, as it takes advantage of how AKS works with Azure regions to provide high availability.

There are no Windows versions of Valkey, so you'll need a Linux host for the service, and local development will need a Windows Subsystem for Linux (WSL) installation. That's much less of a problem than it might have been a few years ago: services like AKS are built on top of Linux, so you can either use Azure IaaS virtual machines to host a Valkey install or run a container from Docker Hub or a similar repository inside AKS.

Microsoft's guidelines for implementing a Valkey resource for use with AKS start by deploying an AKS cluster, along with an Azure Key Vault to hold the necessary secrets. Your Valkey cache cluster is implemented as a node pool in the AKS cluster, running on Standard D4 VM hosts.

Configuring Valkey in AKS

Installing Valkey is a matter of first copying a current Valkey image from the Docker Hub repository and adding it to your own Azure Container Registry. This keeps the image private and allows you to prepopulate any necessary settings and secrets before deploying the Valkey container in your live application. You can now set up a Kubernetes namespace for your cluster before creating a configuration file as a resource. Follow this by creating configurations for both main and replica pods to ensure that your cache has a well-defined cluster that can handle failover and scale as necessary. Main and replica pods need different affinities to make sure they are deployed in separate zones.

Initially you should have three main Valkey pods and three replicas, running in separate Azure data centers. You can then set up rules to ensure that only one pod is down at a time during AKS maintenance. The aim is to keep your cache and application running at all times to give users the best experience possible. This is the same role that Azure's managed Redis plays; the difference is that this time you're doing the configuration and cache management. AKS will handle most of the work for you, but you're still responsible for ensuring that your Valkey installation is secure and up to date.

With a cluster up and running in AKS, you can configure it for operation. This requires first assigning hash slots to the main nodes and then linking the replica nodes to them, one per node.
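To illustrate that last step, here's a minimal sketch of the slot assignment and replica linking, written against StackExchange.Redis (which works with Valkey because Valkey speaks RESP). The pod addresses and the even three-way slot split are illustrative assumptions, and the nodes are assumed to have already joined the cluster; in practice you may prefer to run the equivalent valkey-cli commands via kubectl exec.

```csharp
// A minimal sketch, not production code: spread the 16,384 cluster hash slots
// across the three main nodes and make each replica pod follow one of them.
// The host names are hypothetical placeholders for your Valkey pods in AKS.
using StackExchange.Redis;

string[] mains    = { "valkey-main-0:6379", "valkey-main-1:6379", "valkey-main-2:6379" };
string[] replicas = { "valkey-replica-0:6379", "valkey-replica-1:6379", "valkey-replica-2:6379" };

const int totalSlots = 16384;
int perNode = totalSlots / mains.Length;
var mainNodeIds = new string[mains.Length];

for (int i = 0; i < mains.Length; i++)
{
    // allowAdmin permits server-management commands over this connection.
    using var mux = ConnectionMultiplexer.Connect($"{mains[i]},allowAdmin=true");
    var db = mux.GetDatabase();

    int first = i * perNode;
    int last = i == mains.Length - 1 ? totalSlots - 1 : first + perNode - 1;

    // CLUSTER ADDSLOTS expects every slot number to be listed explicitly.
    var args = new List<object> { "ADDSLOTS" };
    for (int slot = first; slot <= last; slot++) args.Add(slot);
    db.Execute("CLUSTER", args);

    // Remember this node's cluster ID so a replica can be pointed at it.
    mainNodeIds[i] = db.Execute("CLUSTER", "MYID").ToString();
}

for (int i = 0; i < replicas.Length; i++)
{
    using var mux = ConnectionMultiplexer.Connect($"{replicas[i]},allowAdmin=true");
    // Make this pod a replica of the matching main node.
    mux.GetDatabase().Execute("CLUSTER", "REPLICATE", mainNodeIds[i]);
}
```

With the slots assigned and each replica following a main node, the cluster can serve traffic and respond to a node failure by promoting the matching replica.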
You can use kubectl via the Azure command line to test that replication has been set up properly and that your cluster will manage failover.

Testing with Locust

As part of its AKS Valkey documentation, Microsoft suggests the Locust load-testing framework to check that your cache runs correctly, can deal with failures, and can handle load. Locust is programmable in Python and can be configured to simulate a client application ramping up requests over time, allowing you to see how your cluster responds to increasing load. You can then use kubectl to remove a pod from operation without spawning a replacement, watching how your Valkey cluster responds as it promotes a replica to act as a main instance.

Using Valkey in an AKS instance works well, and it's a workable alternative to Redis. There are still plenty of circumstances where you might prefer to use managed Redis with Azure-hosted AKS applications, but Valkey could be a good option for AKS applications running on the edge using Azure Local (the new name for Azure Stack HCI), where license management may be more of an issue.

Valkey and .NET Aspire

AKS isn't the only Microsoft project adding support for Valkey. It's also available for .NET Aspire, because Valkey supports the Redis Serialization Protocol (RESP). You can work with existing or new Valkey instances, with .NET pulling in a Valkey Docker container as needed. You will need to install the Aspire.Hosting.Valkey package to set up the necessary Valkey resource. Once this is added to your project, you have access to a Valkey cache.

As Aspire is focused on cloud-native, container-based development, any instance will need storage outside the Valkey container to ensure persistence between sessions. This can be either a data volume or a bind mount to an existing store. A volume is recommended because it's more portable; you may prefer a bind mount for debugging on a local development machine, but be sure to switch to a volume for testing and deployment.

Using Aspire to manage your application's Valkey instance simplifies things considerably: It uses a containerized Valkey and provides the necessary abstractions to work as a cache for your application. As it uses the same protocol as Redis, you can quickly swap one for the other, simply changing the calls that load the database and its container. If you've got a working Redis-based application, it's a good opportunity to try out an alternative.

Using RESP to link Valkey to code

.NET Aspire's existing Redis client will work with Valkey; all you need to do is ensure that you're using the correct connectionName. Microsoft provides Aspire implementation details for three different Valkey scenarios: standard cache, distributed cache, and output cache. The documentation isn't quite complete, as it often refers to Redis rather than Valkey, but Aspire treats the two interchangeably, so it's not too difficult to work out what to do and when.

Another advantage to using Valkey with Aspire: You can take advantage of Aspire's observability tools, health checks, logging, and its built-in developer dashboard to monitor operations, including your cache. Having tools that manage application health is important, especially when building the distributed, cloud-native applications that rely on services like Valkey.
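To make that wiring concrete, here's a minimal sketch of how the pieces might fit together; the project name and the "cache" connection name are placeholders of my choosing.

```csharp
// AppHost project: declare the Valkey resource (Aspire.Hosting.Valkey) with a
// data volume so cached data survives container restarts, then pass a
// reference to the consuming web project. Projects.MyWebApp is a placeholder.
var builder = DistributedApplication.CreateBuilder(args);

var cache = builder.AddValkey("cache")
                   .WithDataVolume();

builder.AddProject<Projects.MyWebApp>("web")
       .WithReference(cache);

builder.Build().Run();
```

On the consuming side, the registration is the usual StackExchange.Redis-based Aspire client integration, keyed by the same connection name; the scenario you want simply changes which extension method you call.

```csharp
// Web project: pick the client that matches your scenario. These calls come
// from the Aspire StackExchange.Redis client integrations and behave the same
// whether the resource behind "cache" is Valkey or Redis.
var builder = WebApplication.CreateBuilder(args);

builder.AddRedisClient(connectionName: "cache");   // plain IConnectionMultiplexer
// builder.AddRedisDistributedCache("cache");      // IDistributedCache
// builder.AddRedisOutputCache("cache");           // ASP.NET Core output caching

var app = builder.Build();
app.Run();
```

Because the client side is identical, swapping Valkey in for an existing Redis resource is largely a matter of changing the AddRedis call to AddValkey in the app host.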
As Valkey continues to diverge from Redis, it's worth keeping an eye on both projects, as each will address different use cases and support different application architectures. For now, however, thanks to RESP, they can be used relatively interchangeably, allowing you to choose one or the other and switch to whichever works best for you and your project. With basic support in both AKS and .NET Aspire, and a major new release of Valkey around the corner, it's a suitable time to give Valkey a try.
https://www.infoworld.com/article/4073230/using-valkey-on-azure-and-in-net-aspire.html