Google releases Gemma 3n models for on-device AI
Thursday, July 10, 2025, 01:22 AM, from InfoWorld
Google has released its Gemma 3n AI model, positioned as an advancement for on-device AI and bringing multimodal capabilities and higher performance to edge devices.
Previewed in May, Gemma 3n is multimodal by design, with native support for image, audio, video, and text inputs and text outputs, Google said. Optimized for edge devices such as phones, tablets, laptops, desktops, or single cloud accelerators, Gemma 3n models are available in two sizes based on "effective" parameters, E2B and E4B. Whereas the raw parameter counts for E2B and E4B are 5B and 8B, respectively, these models run with a memory footprint comparable to traditional 2B and 4B models, requiring as little as 2GB and 3GB of memory, Google said.

Announced as a production release on June 26, Gemma 3n models can be downloaded from Hugging Face and Kaggle. Developers also can try out Gemma 3n in Google AI Studio.

Gemma 3n is built on the same technology as Google's Gemini Nano models, Google said. It offers components such as the MatFormer architecture for compute flexibility, Per-Layer Embeddings (PLE) for memory efficiency, LAuReL and AltUp for architectural efficiency, and audio and vision encoders optimized for on-device use cases. Additionally, 140 languages are supported for text and 35 languages for multimodal understanding. The E4B-sized version achieves an LMArena score of more than 1300, making it the first model under 10 billion parameters to reach that score, Google said.

The first Gemma model family was launched in 2024. The family includes more than a dozen specialized models for tasks ranging from safeguarding to medical applications, as well as community innovations ranging from enterprise computer vision to Japanese Gemma variants, the company said.
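For developers who want to pull the weights from Hugging Face for local experimentation, the sketch below shows what that download step might look like. The repository ID google/gemma-3n-E2B-it and the gated-access/token step are assumptions based on how Gemma models are typically published, not details stated in the article.

```python
# Minimal sketch: download Gemma 3n weights from Hugging Face for local use.
# Assumptions: the repo ID below and the need to authenticate with a token
# after accepting Google's Gemma license follow the usual Gemma release
# pattern; they are not confirmed by the article itself.
import os

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="google/gemma-3n-E2B-it",      # hypothetical E2B instruction-tuned repo
    token=os.environ.get("HF_TOKEN"),      # token for an account that has accepted the license
)
print("Model files downloaded to:", local_dir)
```

Alternatively, per the article, the models can be fetched from Kaggle or tried directly in Google AI Studio without downloading anything locally.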
https://www.infoworld.com/article/4019759/google-releases-gemma-3n-models-for-on-device-ai.html