Google has quietly launched the AI Edge Gallery app, which lets users run a variety of open-source AI models locally on their Android devices. This experimental app needs no internet connection once models are downloaded, opening the door to offline AI experimentation and use. The AI Edge Gallery supports models hosted on Hugging Face, including the Gemma family, and offers features such as asking questions about images, prompt experimentation, and AI chat.
The app serves as a practical demonstration of the LLM Inference API, showing how developers can add on-device generative AI capabilities to their own apps. Users can explore and benchmark different LiteRT-optimised models, and even import and test their own custom models. The move signals a push towards bringing AI processing closer to the user, reducing latency and improving privacy. Google's initiative arrives alongside Hugging Face's efforts to optimise AI vision models for mobile devices, which could cut computing costs and promote more sustainable AI.
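The LLM Inference API referred to above ships as part of Google's MediaPipe Tasks for Android. A minimal sketch of how a developer might call it, assuming a LiteRT-compatible model file has already been downloaded to the device (the model path, token limit and prompt below are illustrative, not taken from the app):

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Runs a single prompt against a locally stored model.
// Assumes the model file has already been placed at the given
// path (a hypothetical location, for illustration only).
fun runLocalLlm(context: Context): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma.task") // assumed path
        .setMaxTokens(512)
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    return llm.generateResponse("Explain on-device AI in one sentence.")
}
```

Because inference runs entirely on-device, a call like this keeps working with no network connection once the model file is present, which is the behaviour the Gallery app demonstrates.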
The Google AI Edge Gallery is available on Android, with an iOS version in development. Depending on the model, it supports multimodal inputs such as text, images, video and audio, and offers tools for customisation through fine-tuning and quantisation. The release underscores the growing trend of on-device AI, making advanced models accessible on personal devices.
Related Articles
Google I/O 2025 Highlights
NotebookLM Launches Android App
Google I/O 2025 Anticipation Builds
Google I/O 2025: Watch Live