Getting started with Azure Managed Redis

Thursday, December 19, 2024, 10:00 AM, from InfoWorld
Microsoft has made a lot of big bets in its preferred cloud-native infrastructure. You only need to look at .NET Aspire and Radius to see how the company thinks you should be designing and building code: a growing cloud-native stack that builds on Kubernetes and associated tools to assemble distributed applications quickly and at scale.

Azure CTO Mark Russinovich has regularly talked about one overarching goal for Azure: to make everything serverless. It’s why many tools we used to install on dedicated servers are now offered as managed services, with Azure managing the underlying infrastructure. Services like Azure Container Instances allow you to quickly deploy Kubernetes applications, but what of the other services necessary to build at-scale containerized applications?

One key service is an in-memory caching database. Here Azure has relied on Redis, providing support for the Redis Enterprise release as well as offering Azure Cache for Redis, based on the lower-performance Redis community edition.

If you’re building a large application, you need more than a single-threaded database, because a single thread leaves compute capabilities on the table: Azure Cache for Redis is unlikely to use all of the capabilities of the underlying vCPUs. Because it requires two vCPUs for each instance, one for the primary and one for a replica used for backup or failover, Azure Cache for Redis is not economical to use at scale.

Introducing Azure Managed Redis

At Ignite 2024, Microsoft announced the public preview of a managed service based on Redis Enterprise, taking advantage of it to improve performance so you can deliver bigger applications without requiring significant additional infrastructure. With data center power usage increasingly an issue, improving performance without increasing the number of required vCPUs helps Azure use its underlying servers more efficiently.

Shipping in preview as Azure Managed Redis, this new service is a higher-performance alternative to Azure Cache for Redis. It takes advantage of a new architecture that should significantly improve operations and support new deployment options.

Instead of one instance per VM, you’re now able to stack multiple instances behind a Redis proxy. There’s another big change: Although you still use two nodes, both nodes run a mix of primary and replica processes. A primary process uses more resources than a replica, so this approach lets you get the best possible performance out of your VMs. At the same time, this mix of primary and replica processes automatically clusters data to speed up access and enable support for geo-replication across regions.

Azure Managed Redis has two different clustering policies, OSS and Enterprise. The OSS option is the same as used by the community edition, with direct connections to individual shards. This works well, with close-to-linear scaling, but it does require specific support in any client libraries you’re using in your code. The alternative, Enterprise, works through a single proxy node, simplifying connection requirements for clients at the expense of performance.
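The policy you choose shows up in client code. Here’s a minimal sketch using the redis-py library; the endpoint and access key are placeholders, and the TLS port 10000 is an assumption drawn from the Enterprise-based tiers, so check the connection details for your own instance. With the OSS policy you use the cluster-aware client, while the Enterprise policy works with the standard single-endpoint client.

```python
# Sketch, not a definitive setup: picking a redis-py client to match the
# clustering policy. Endpoint, port, and key below are placeholders.
import redis
from redis.cluster import RedisCluster

ENDPOINT = "my-managed-redis.example.redis.azure.net"  # placeholder
ACCESS_KEY = "<access-key>"                            # placeholder

# Enterprise policy: a single proxy endpoint, so the ordinary client works
# even though the data is sharded behind the proxy.
proxy_client = redis.Redis(host=ENDPOINT, port=10000, ssl=True, password=ACCESS_KEY)

# OSS policy: the client must speak the Redis Cluster protocol and talk to
# individual shards directly, which is what the cluster-aware client does.
cluster_client = RedisCluster(host=ENDPOINT, port=10000, ssl=True, password=ACCESS_KEY)

print(proxy_client.ping())
print(cluster_client.ping())
```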

Why would you use Redis in an application? In many cases it’s a tool for keeping regularly accessed data cached in memory, allowing quick read/write access. It’s used any place you need a fast key/value store with support for modern features such as vector indexing. Using Redis as an in-memory vector index helps keep latency to a minimum in AI applications based on retrieval-augmented generation (RAG). Cloud-native applications can use Redis as a session store to manage state across container applications, and AI applications can use it as a cache for recent output, treating it as semantic memory in frameworks like Semantic Kernel.
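As a concrete example of the caching and session-store cases, here’s a minimal cache-aside sketch using the redis-py client. It assumes `r` is an already connected redis-py client (a connection sketch appears later in the article), and `load_profile_from_database` is a hypothetical stand-in for your primary data store.

```python
# Sketch: cache-aside pattern for session or profile data, assuming `r` is a
# connected redis-py client.
import json

SESSION_TTL_SECONDS = 1800  # illustrative expiry; tune for your workload

def load_profile_from_database(user_id: str) -> dict:
    # Hypothetical placeholder for the real, slower data source.
    return {"id": user_id, "name": "example"}

def get_user_profile(r, user_id: str) -> dict:
    """Return a cached profile, falling back to the primary store on a miss."""
    cache_key = f"profile:{user_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    profile = load_profile_from_database(user_id)
    # Write through with an expiry so stale entries age out automatically.
    r.set(cache_key, json.dumps(profile), ex=SESSION_TTL_SECONDS)
    return profile
```

The same idea extends to semantic caching: the key becomes an identifier derived from a prompt, and the value is the model output you want to reuse.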

Setting up Azure Managed Redis

Microsoft is offering four different tiers that support different use cases and balance cost and performance. The lowest-cost option, best suited for development and test, is the memory-optimized tier. Here you have an 8:1 memory to vCPU ratio that works best when you need to store a lot of data for quick access but don’t need to process it significantly. The downside is a lower throughput, something that’s likely to be more of an issue in production.

Most applications will use the balanced option. This has a 4:1 memory to vCPU ratio, which will allow you to work with cached data as well as deliver it to applications. If you need more performance, there’s a compute-optimized tier, with a 2:1 memory to vCPU ratio, making it ideal for high-performance applications that may not need to cache a lot of data.

There’s one final configuration intended for applications that need to cache a lot of data, using NVMe flash to store data that’s not needed as often. These instances automatically tier your data for you, and although there may be a performance hit, it keeps the costs of large instances to a minimum. The architecture of this tier is interesting: It keeps all the keys in RAM and only stores values in flash when they’re found to be “cold.”

This approach is different from the standard data persistence feature, which makes a disk backup of in-memory data so an instance can be recovered if there’s an outage. Persistence writes data to disk and only reads it back when it’s needed for recovery; normal operations continue to be purely in-memory.

Once you have chosen the type of service you’re implementing, you can add it to an Azure resource group using the Azure portal. Start by creating a new resource and choosing the Redis Cache option from Databases. It’s important to note that there’s no explicit Azure Managed Redis control plane; it shares the same basic configuration screen as the existing Azure Cache for Redis. You’re not limited to using the portal, as there are Azure CLI, Bicep, and PowerShell options.

On the New Redis Cache screen, you’ll need to choose a Managed Redis SKU to unlock its configuration tools. This isn’t particularly clear, and you can easily miss this choice and accidentally pick the default Azure Cache for Redis settings. Once you’ve chosen a correct SKU, you can add modules from the Advanced tab, along with configuring the clustering policy and whether you’re using data persistence. You can now click Create and your instance will spin up. This can take some time, but once it’s running, it’s ready for use.
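If you’d rather script the deployment than click through the portal, the same resources can be created from code. The sketch below uses Python’s azure-mgmt-redisenterprise package, on the assumption that Azure Managed Redis is provisioned through the same Redis Enterprise resource provider as the existing Enterprise tiers and that the preview exposes SKU names such as Balanced_B5; verify both against the current documentation before relying on them.

```python
# Sketch only: provisioning via the azure-mgmt-redisenterprise SDK. Resource
# group, cluster name, region, and the "Balanced_B5" SKU are placeholders or
# assumptions; check the preview docs for values your subscription supports.
from azure.identity import DefaultAzureCredential
from azure.mgmt.redisenterprise import RedisEnterpriseManagementClient
from azure.mgmt.redisenterprise.models import Cluster, Database, Module, Sku

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-resource-group"
CLUSTER_NAME = "my-managed-redis"

client = RedisEnterpriseManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create the cluster; this is a long-running operation.
cluster = client.redis_enterprise.begin_create(
    RESOURCE_GROUP,
    CLUSTER_NAME,
    Cluster(location="eastus", sku=Sku(name="Balanced_B5")),  # assumed SKU name
).result()

# Create the database, fixing the clustering policy and modules now, since
# they can't be changed after setup.
database = client.databases.begin_create(
    RESOURCE_GROUP,
    CLUSTER_NAME,
    "default",
    Database(
        clustering_policy="OSSCluster",      # or "EnterpriseCluster"
        modules=[Module(name="RedisJSON")],
    ),
).result()

print(cluster.host_name)
```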

Choosing memory and compute

Within each tier are different memory and vCPU options, allowing you to choose the configuration that’s best for your application. For example, a balanced system starts at 500MB of storage and 2 vCPUs and scales up to 1TB of storage and 256 vCPUs. Each tier has its own list of storage and compute options, allowing you to dial in a specification that fits your workloads.

Available modules include support for search, time-series data, and JSON. It’s important to check that they work with your choice of SKU: If you’re using flash-optimized instances, you can’t use the time-series module or the Bloom probabilistic data structures, which are designed for working with streaming data.
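To give a feel for what module support looks like from the client side, here’s a small sketch of the JSON module through redis-py; it assumes RedisJSON was selected on the Advanced tab when the instance was created and that `r` is a connected redis-py client.

```python
# Sketch: JSON module usage via redis-py, assuming the RedisJSON module was
# enabled at creation time and `r` is a connected client.
r.json().set("device:42", "$", {"name": "sensor-a", "readings": [21.5, 22.0]})

# Fetch a single field back with a JSONPath query.
print(r.json().get("device:42", "$.name"))  # ['sensor-a']
```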

Once an instance is configured, you won’t be able to change many of its policies. Clustering, geo-replication, and modules are all configured during setup, and any changes require completely rebuilding your Redis setup from scratch. That’s not an issue for development and test, but it’s something to consider when running in production.

If you’re already using Azure Cache for Redis in your applications, you won’t need to make many changes to your code. The same Redis clients work for the new service, so all you need to do is configure the appropriate endpoints and use familiar query APIs. Azure recommends the StackExchange.Redis library for .NET developers. Redis keeps a list of its own client libraries for other languages, from Python to Java and more.
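For Python that typically means the redis-py package. A minimal connection sketch, with the hostname and access key as placeholders and the TLS port an assumption (copy the real connection details from the portal), looks like this:

```python
# Sketch: connecting redis-py to an Azure Managed Redis endpoint. Hostname,
# key, and port are placeholders/assumptions; use your instance's values.
import redis

r = redis.Redis(
    host="my-managed-redis.example.redis.azure.net",  # placeholder hostname
    port=10000,                                       # assumed TLS port
    ssl=True,
    password="<access-key>",
    decode_responses=True,  # return str rather than bytes
)

r.set("greeting", "hello from Azure Managed Redis", ex=60)  # 60-second TTL
print(r.get("greeting"))
```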

Azure’s latest Redis offering brings enterprise capabilities beyond its existing Azure Cache for Redis. This should help you build larger, more flexible cloud-native applications, as well as use in-memory data for more than simple key/value caching. As Azure Managed Redis moves towards general availability, hopefully we’ll see Microsoft’s cloud-native development frameworks like Aspire and Radius add support for automatically configuring its new services alongside their existing Redis support.
https://www.infoworld.com/article/3627839/getting-started-with-azure-managed-redis.html
