Google's Gemma 4 model goes fully open-source and unlocks powerful local AI - even on phones

AI
AI Legal Analyst
April 3, 2026, 8:56 AM 6 min read 1 views


## Summary
Now open-source under Apache 2.0, Gemma 4 brings offline, multimodal AI to servers, phones, and Raspberry Pi, giving developers total local control over edge and on-premises deployments. Earlier Gemma generations shipped under a permissive but still controlled license, which let the family be called "open" but not "open source": Google still held the leash. Gemma 4 is instead licensed under Apache 2.0, which means users and developers can use and distribute the model in any way they want without restrictions.

## Article Content
Now open-source under Apache 2.0, Gemma 4 brings offline, multimodal AI to servers, phones, and Raspberry Pi - giving developers total local control over edge and on-premises deployments.
Written by David Gewirtz, Senior Contributing Editor
April 2, 2026 at 9:00 a.m. PT
ZDNET's key takeaways
- Gemma 4 is now fully open-source under Apache 2.0.
- Local AI enables privacy, offline use, and lower costs.
- From servers to smartphones, deployment just got much easier.
Google announced today that its DeepMind AI research division is releasing Gemma 4, its latest generation of open large language models. The models are being released under the Apache 2.0 license, making them truly open source, compared to the permissive but still controlled license of earlier Gemma generations.
What is Gemma?
Gemma is an LLM like Gemini. But here, we're talking about the AI processing engine, not the chatbot interface. Both Gemma and Gemini were developed using the same research and technology. The difference is that Gemini is a subscription-based closed product, whereas Gemma is an open model that can be downloaded and run locally for free.
The ability to run an AI model locally without a fee benefits a variety of applications. There are plenty of folks who want to run AI at home, without relying on the cloud, and for free.
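As a sketch of what "local" means in practice: a model server such as Ollama exposes an HTTP API on localhost, so prompts and responses never leave the machine. The payload below follows Ollama's documented `/api/generate` format, but the `gemma4` model tag is an assumption; substitute whatever tag the release actually ships under.

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "gemma4") -> urllib.request.Request:
    """Build a request to a locally running Ollama server.

    The "gemma4" tag is hypothetical. Note the host is localhost:
    nothing in this exchange leaves the machine.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default local port
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("Summarize this maintenance log in one sentence.")
print(req.full_url)  # → http://localhost:11434/api/generate
```

Sending the request with `urllib.request.urlopen(req)` would return the completion, assuming a server is listening on that port.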
Also: How AI has suddenly become much more useful to open-source developers
The ability to keep everything local is particularly important to enterprises that have data sovereignty or confidentiality requirements. For example, healthcare providers might have regulatory restrictions that prevent them from sharing patient data with a public cloud provider, yet they would still like to benefit from AI. By running the entire system locally, no data is sent to the cloud, but the AI capability is still available.
There are many devices, ranging from smartphones to a whole bunch of IoT and edge devices, that may have only intermittent network connectivity (or none at all). Being able to run AI operations without additional costs and without the need to phone home provides considerable benefits in terms of flexibility, security, and cost control.
Also: I used Gmail's AI tool to do hours of work for me in 10 minutes - with 3 prompts
So, while you might run Gemini in your chat interface, you might install Gemma on a Raspberry Pi to monitor a process in a factory and make decisions in real-time without the latency of a round trip to the cloud and back.
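That factory-floor scenario can be sketched as a small control loop: read a sensor, ask the on-device model for a judgment, and map its reply onto a fixed action, with no network round trip. Everything here is illustrative; the sensor value, the `query_local_model` stub, and the OK/ALERT reply protocol are assumptions for the sketch, not part of any Gemma API.

```python
def parse_verdict(reply: str) -> str:
    """Map a free-text model reply onto a fixed action set.

    Constraining the model to answer OK or ALERT, and defaulting to
    ALERT when the reply is ambiguous, keeps the control loop safe
    even if the model rambles.
    """
    if reply.strip().upper().startswith("OK"):
        return "continue"
    return "raise_alarm"   # fail closed: unknown replies trigger the alarm

def monitor_step(temperature_c: float, query_local_model) -> str:
    """One loop iteration: prompt the on-device model, act on its verdict."""
    prompt = (
        f"Bearing temperature is {temperature_c:.1f} C. "
        "Reply with exactly OK or ALERT."
    )
    return parse_verdict(query_local_model(prompt))

# Usage with a stubbed model; a real deployment would call the local runtime:
action = monitor_step(82.5, lambda prompt: "ALERT")
print(action)  # → raise_alarm
```

The fail-closed default in `parse_verdict` is the design choice that matters: an edge device acting on model output should treat anything it cannot parse as a fault.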
The big licensing news
Earlier versions of Gemma were licensed under a Gemma Terms of Use statement, rather than a formal open-source license structure. Google permitted users to download Gemma, use it locally, and make modifications, but they restricted use to approved categories and limited redistribution.
This approach allowed the model family to be called "open" but not "open source." There were many freedoms associated with using Gemma, but Google still held the leash.
By contrast, the Apache 2.0 license grants nearly total freedom. Users and developers can use the software for any purpose, whether personal, commercial, or enterprise, and without any royalty requirements. If you do distribute the software, you're obligated to include a copy of the Apache 2.0 license and provide required attribution for the software.
Users and developers are free to modify and redistribute the code, with the right to create derivative works and distribute both the original and modified versions.
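In practice, that attribution obligation is light: a redistributed product bundles the license text and carries a notice along these lines (the wording below is an illustration, not Google's actual notice file):

```text
This product includes the Gemma 4 model, developed by Google DeepMind.
Licensed under the Apache License, Version 2.0.
A copy of the license is provided in the accompanying LICENSE file.
```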
Also: Why AI is both a curse and a blessing to open-source software
There are also some interesting patent-related protections and penalties. In terms of protections, Apache 2.0-licensed users are granted a license to any patents covering contributions, so that patent lawsuits can't target users merely for using the software. On the other hand, if you sue someone claiming the software infringes your patent, you automatically lose your license to use the software.
Google is no longer using its own terms of use for Gemma 4. Instead, they're licensing Gemma 4 under the Apache 2.0 license, which means users and developers can use and distribute the model in *any* way they want without restrictions.
The Gemmaverse
Since the release of Gemma two years ago, in February 2024, the open model has experienced considerable adoption.
According to Clement Farabet, VP of research, and Olivier Lacombe, group product manager at Google DeepMind, "Since the launch of our first generation, developers have downloaded Gemma over 400 million times, building a vibrant Gemmaverse of more than 100,000 variants."
Also: 7 AI coding techniques I use to ship real, reliable products - fast
But as ZDNET reported back then, "Google's latest AI offering is an 'open model' but not 'open source.'"

---

## Expert Analysis

### Merits
- Apache 2.0 licensing removes the restrictions that kept earlier Gemma generations from being true open source.
- Local deployment serves enterprises with data sovereignty or confidentiality requirements, such as healthcare providers.
- Not only can we expect to see Gemma 4 adopted in more projects, but it's also now legitimately possible to bundle the AI with products, services, and devices that can benefit from a powerful on-board model.

### Areas for Consideration
- The 31B model is designed to maximize raw power and quality, bringing all its capabilities to any problem it's asked to work on.

### Implications
- Offline operation matters for smartphones, IoT, and edge devices with intermittent (or no) network connectivity, such as a Raspberry Pi making real-time decisions on a factory floor.
- Now, Gemma 4 is being released as pure open-source software, which means we can expect adoption rates to pick up even over what we've seen in the past 26 months.

