Google’s family of “open” AI models, Gemma, is expanding. At Google I/O 2025, the company unveiled Gemma 3n, a model designed to run smoothly on phones, laptops, and tablets. Gemma 3n is available in preview starting Tuesday, and it can handle audio, text, images, and video.

Models that run offline without cloud computing have become increasingly popular in the AI community. They are not only cheaper to use than large cloud-hosted models, but they also help preserve privacy, since data never has to be sent to a remote data center. During the I/O keynote, Gemma Product Manager Gus Martins said Gemma 3n can run on devices with less than 2GB of RAM. He added that Gemma 3n shares an architecture with Gemini Nano and is engineered for strong performance.
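To make the on-device pitch concrete, here is a minimal sketch of running a small Gemma checkpoint locally with the Hugging Face transformers library. The model ID is an assumption for illustration; the article does not name the published checkpoint, so substitute whatever preview ID Google lists.

```python
# Minimal sketch: local text generation with a small Gemma model.
# Assumes the transformers and accelerate packages are installed.
from transformers import pipeline

# Hypothetical model ID -- not confirmed by the article; check
# huggingface.co/google for the actual Gemma 3n preview checkpoint.
MODEL_ID = "google/gemma-3n-e2b-it"

# device_map="auto" places weights on whatever local hardware is
# available (CPU on a laptop, GPU if present); no cloud round-trip.
generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")

output = generator(
    "Explain why on-device AI models help protect user privacy.",
    max_new_tokens=128,
)
print(output[0]["generated_text"])
```

Everything here runs locally once the weights are downloaded, which is the privacy and cost argument for small open models in a nutshell.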

Google is also introducing MedGemma through its Health AI Developer Foundations program. The company bills MedGemma as its most capable open model for analyzing health-related text and images. Martins described MedGemma as a collection of open models that can understand health-related text and images, intended for developers building a range of health applications.
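Since MedGemma is described as handling both text and images, a multimodal query might look something like the sketch below. Both the model ID and the pipeline task are assumptions based on how comparable open multimodal models are typically published on Hugging Face, not details confirmed by the announcement.

```python
# Minimal sketch: asking a MedGemma-style multimodal model about an
# image plus a text prompt, via the transformers chat-message format.
from transformers import pipeline

# Hypothetical model ID -- verify the real checkpoint name before use.
MODEL_ID = "google/medgemma-4b-it"

pipe = pipeline("image-text-to-text", model=MODEL_ID, device_map="auto")

# Placeholder image URL; in practice you would point at a local file
# or an image you are licensed to analyze.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/scan.png"},
            {"type": "text", "text": "Describe any notable findings."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=200)
# The pipeline returns the full chat; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```

As with any health-related model, output like this would be a starting point for a developer’s application, not medical advice.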

Then there’s SignGemma, an open model designed to translate sign language into spoken-language text. Google says SignGemma will allow developers to create new apps and features for deaf and hard-of-hearing users. According to Martins, SignGemma performs best when translating between American Sign Language and English, and the company calls it the most capable sign language understanding model to date.

It’s worth noting that Gemma has drawn criticism for its custom, non-standard licensing terms, which some developers say make using the models commercially a risky proposition. That hasn’t deterred adoption, though: developers have collectively downloaded Gemma models tens of millions of times.