Google Releases VaultGemma, Its First Privacy-Preserving LLM
Tuesday, September 16, 2025, 03:00 PM, from Slashdot

Adding differential privacy to a model comes with drawbacks in accuracy and compute requirements, and until now no one had worked out how much it alters the scaling laws of AI models. The team worked from the assumption that model performance would be primarily affected by the noise-batch ratio, which compares the volume of randomized noise injected during training to the size of the training batches.

By running experiments with varying model sizes and noise-batch ratios, the team established a basic understanding of differential privacy scaling laws, which describe the balance between the compute budget, the privacy budget, and the data budget. In short, more noise leads to lower-quality outputs unless it is offset with a higher compute budget (FLOPs) or data budget (tokens). The paper details the scaling laws for private LLMs, which could help developers find an ideal noise-batch ratio to make a model more private.

This work has led to a new Google model called VaultGemma, the company's first open-weight model trained with differential privacy to minimize memorization risks. It's built on the older Gemma 2 foundation and sized at 1 billion parameters, and the company says it performs comparably to non-private models of similar size. It's available now from Hugging Face and Kaggle.

Read more of this story at Slashdot.
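For readers unfamiliar with where the noise-batch ratio comes from, here is a minimal DP-SGD-style sketch in Python. It is illustrative only, not Google's training code: the function name, parameters, and the exact ratio formula are assumptions made for the example. It shows the two ingredients the article discusses: calibrated Gaussian noise added to clipped gradients, and the batch size that dilutes that noise.

    # Illustrative DP-SGD-style update (assumption: not VaultGemma's actual code).
    import numpy as np

    def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                    noise_multiplier=1.0, lr=0.1):
        """Clip per-example gradients, add Gaussian noise, average, and step."""
        batch_size = per_example_grads.shape[0]

        # 1. Clip each example's gradient to bound its contribution (sensitivity).
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))

        # 2. Sum the clipped gradients and add calibrated Gaussian noise.
        noise_std = noise_multiplier * clip_norm
        noisy_sum = clipped.sum(axis=0) + np.random.normal(
            0.0, noise_std, size=clipped.shape[1])

        # 3. Average over the batch; a larger batch dilutes the injected noise,
        #    which is why the noise-batch ratio governs model quality.
        noisy_grad = noisy_sum / batch_size
        noise_batch_ratio = noise_std / batch_size  # assumed simplified definition

        return params - lr * noisy_grad, noise_batch_ratio

The key intuition the scaling-law study formalizes: for a fixed privacy level (noise_multiplier), increasing the batch size lowers the noise-batch ratio, but doing so costs more compute (FLOPs) and more data (tokens) per update.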
https://yro.slashdot.org/story/25/09/16/000202/google-releases-vaultgemma-its-first-privacy-preservi...