Google adds tiered storage to NoSQL Bigtable to reduce complexity, costs
Thursday, October 30, 2025, 08:20 AM, from InfoWorld
Google has added a fully managed tiered storage capability inside its NoSQL database service Bigtable to help enterprises reduce the complexity and expenditure of storing data.
The new feature automatically moves less frequently accessed data from high-performance SSDs to infrequent-access storage while keeping that data accessible to applications and queries, thereby lowering costs without sacrificing access, the company explained in a blog post.

Cloud storage tiering is hardly a novel concept for reducing costs, but what analysts expect will appeal to enterprises is storage tiering directly inside a database.

“Enterprises relying heavily on high-speed SSDs often face steep costs, so they move infrequently accessed data to cheaper media. The trade-off has traditionally been complexity and latency, as accessing cold data can require switching systems and waiting through delays. What Google is now doing is eliminating those hurdles by making both hot and cold data accessible through the same database,” said Bradley Shimmin, lead of the data intelligence, analytics, and infrastructure practice at The Futurum Group.

The analyst was referring to storage products offered by nearly all hyperscalers: Google’s Cloud Storage, Amazon’s S3, and Azure’s Blob Storage, all of which provide frequent (hot), infrequent (cold), and archive tiers while integrating with their respective database offerings.

“In these integrations, the database offloads cold data to this external system. Enterprises often have to manage two separate systems, deal with data movement pipelines, and potentially use different query methods for hot vs. cold data,” Shimmin said.

The other challenge of these integrations, according to analysts, is the added cost of retrieving data from cold and archive tiers: Google itself charges $0.02 per GB and $0.05 per GB for retrieving data from cold and archive storage tiers, respectively, over and above operation and network charges. AWS and Azure also levy data retrieval charges, with AWS offering automated tiering without retrieval charges for an additional fee, and Azure offering an archive tier.
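To see how those retrieval charges add up, here is a back-of-envelope sketch using the per-GB rates quoted above. The rates are the ones cited in the article for Google's cold and archive tiers; operation and network charges, which apply on top, are deliberately left out.

```python
# Illustrative retrieval-cost arithmetic for external cold/archive tiers.
# Rates are the per-GB figures quoted in the article; operation and
# network charges are excluded.

RETRIEVAL_RATE_PER_GB = {
    "cold": 0.02,     # USD per GB retrieved from the cold tier
    "archive": 0.05,  # USD per GB retrieved from the archive tier
}

def retrieval_cost(tier: str, gigabytes: float) -> float:
    """Return the retrieval charge in USD for reading `gigabytes` from `tier`."""
    return RETRIEVAL_RATE_PER_GB[tier] * gigabytes

if __name__ == "__main__":
    # Reading back 500 GB of offloaded data:
    for tier in ("cold", "archive"):
        print(f"500 GB from {tier}: ${retrieval_cost(tier, 500):.2f}")
```

With in-database tiering, this per-retrieval line item disappears, which is the cost argument the analysts are making.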
Handy for agentic workloads

Analysts also pointed out that the new capability will help rein in costs for enterprises adopting AI workloads, particularly agentic ones, which are proliferating rapidly.

“The new capability could have significant ramifications for the agentic era of AI, wherein we find ourselves generating tremendous amounts of data, such as vector indexes, which can get out of hand pretty quickly and force companies to prioritize only frequently accessed or updated vector embeddings to reduce costs,” Shimmin said. With the new capability, however, enterprises can explore new representations of data (vector embeddings, context logs, etc.) that drive AI consumption, Shimmin added.

Seconding Shimmin, Stephanie Walter, practice leader of AI Stack at HyperFRAME Research, pointed out that the capability gives enterprises a “pragmatic” option to scale vector-heavy workloads without paying SSD prices for everything, leading to friendlier unit economics.

Automated storage tiering is also available in Google’s distributed database Spanner, where the capability was introduced in March of this year.
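A rough sizing sketch shows why vector indexes "can get out of hand": the vector count and dimensionality below are illustrative assumptions, not Bigtable specifics, and float32 (4 bytes per dimension) is assumed for the embeddings.

```python
# Back-of-envelope storage estimate for raw vector embeddings.
# Assumes float32 values (4 bytes each); index overhead and metadata
# are not counted, so real footprints would be larger.

def embedding_storage_gb(num_vectors: int, dimensions: int,
                         bytes_per_value: int = 4) -> float:
    """Raw storage for the vectors alone, in GB."""
    return num_vectors * dimensions * bytes_per_value / 1e9

if __name__ == "__main__":
    # Hypothetical corpus: 100 million 768-dimensional embeddings.
    print(f"{embedding_storage_gb(100_000_000, 768):.1f} GB")  # ~307.2 GB
```

At SSD prices, hundreds of gigabytes of mostly-idle vectors is exactly the kind of data that tiering is meant to push to cheaper storage.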
https://www.infoworld.com/article/4081636/google-adds-tiered-storage-to-nosql-bigtable-to-reduce-com...