
17/02/2025
Google continues AI push with new Gemini tools for developers
Google kicked off its annual I/O developer conference with a clear emphasis on AI, unveiling a suite of new Gemini-powered tools for developers.
At the forefront is an expansion of Google’s Gemini language model, including the public preview of 1.5 Flash, a model designed for high-frequency tasks. Developers can join a waitlist to preview a two-million-token context window for 1.5 Pro.
“Streamline workflows and optimise AI-powered applications with 1.5 Flash, our model for high-frequency tasks, accessible through the Gemini API in Google AI Studio,” the company stated. The new models are available in over 200 countries and territories.
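For context on what "accessible through the Gemini API" looks like in practice, here is a minimal Python sketch of sending a task to 1.5 Flash. Only the model name comes from the announcement; the `google-generativeai` package usage, the `GEMINI_API_KEY` environment variable, and the `summarise` helper are assumptions for illustration, not an official recipe.

```python
# Hedged sketch: calling Gemini 1.5 Flash via the Gemini API.
# Assumes the `google-generativeai` SDK is installed and an API key
# (from Google AI Studio) is exported as GEMINI_API_KEY.
import os

MODEL_NAME = "gemini-1.5-flash"  # model name from the announcement

def summarise(text: str) -> str:
    """Send a small, high-frequency task to 1.5 Flash and return the reply."""
    import google.generativeai as genai  # imported lazily; needs the SDK
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel(MODEL_NAME)
    response = model.generate_content(f"Summarise in one sentence: {text}")
    return response.text

# Only attempt a live call when a key is actually configured.
if "GEMINI_API_KEY" in os.environ:
    print(summarise("Google announced new Gemini tools at I/O."))
```

The lazy import and environment-variable guard keep the sketch harmless to run without credentials.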
Google also announced API enhancements like context caching to improve performance for large prompts, parallel function calling, and video frame extraction support. The company is further opening up the Gemma family of open models, built from the same research and technology as Gemini, introducing PaliGemma for multimodal vision-language tasks.
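Parallel function calling means the model can request several tool invocations in a single turn rather than one at a time. The sketch below shows only the client-side dispatch of such a batch; the tool names (`get_weather`, `get_time`) and the call format are hypothetical stand-ins, and no actual Gemini API call is made.

```python
# Hedged sketch: dispatching a batch of function calls that a model
# returned in one turn (the essence of parallel function calling).

def get_weather(city: str) -> dict:
    """Hypothetical tool: placeholder local implementation."""
    return {"city": city, "forecast": "sunny"}

def get_time(city: str) -> dict:
    """Hypothetical tool: placeholder local implementation."""
    return {"city": city, "time": "12:00"}

TOOLS = [get_weather, get_time]

def dispatch(calls):
    """Execute every (name, args) call the model requested in this turn."""
    registry = {fn.__name__: fn for fn in TOOLS}
    return [registry[name](**args) for name, args in calls]

# A single model turn might request both tools at once:
results = dispatch([("get_weather", {"city": "Paris"}),
                    ("get_time", {"city": "Paris"})])
```

Handling the calls as one batch, instead of a round-trip per call, is what saves latency when the model emits several independent requests.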
To foster an open AI ecosystem, Google highlighted integration across frameworks like Keras, TensorFlow, PyTorch, JAX, and RAPIDS cuDF. Developers can use tools like OpenXLA for accelerated training and LoRA in Keras for efficient model fine-tuning.
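The LoRA technique mentioned above is simple enough to sketch directly. This NumPy toy (sizes and variable names are illustrative, not the Keras API) shows the core idea: keep the pretrained weight matrix frozen and train only a low-rank update.

```python
# Minimal NumPy sketch of LoRA (low-rank adaptation): instead of updating
# the full d x d weight matrix W, train a rank-r factorised update A @ B.
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                              # hidden size, LoRA rank (r << d)

W = rng.standard_normal((d, d))          # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d))                     # trainable up-projection, init to 0

def forward(x):
    # Base path plus the low-rank adapter path.
    return x @ W + x @ A @ B

x = rng.standard_normal((1, d))
# With B initialised to zero, the adapter starts as an exact no-op:
assert np.allclose(forward(x), x @ W)

# Trainable parameters drop from d*d to 2*d*r:
full_params, lora_params = d * d, 2 * d * r
```

With d=8 and r=2 the trainable parameter count falls from 64 to 32; at realistic model sizes the ratio is far more dramatic, which is why LoRA makes fine-tuning feasible on modest hardware.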
On the Google AI Edge front, the company is expanding TensorFlow Lite support to enable direct deployment of PyTorch models to mobile. Enhancements also streamline bringing AI models to edge environments like Android and the web.
For Android development specifically, Google previewed Gemini in Android Studio to simplify building high-quality apps with AI assistance. Gemini Nano and the new AICore system service will enable on-device language models for low-latency, privacy-preserving experiences on the latest Pixel and Samsung Galaxy devices.