Google just wrapped up its main keynote at Google I/O, and boy, was it all about AI. The term came up 120 times throughout the keynote (well, 121, because why not?), and the event ditched the old norms to laser-focus on how AI is being woven into every single Google product.
Generative AI is now a part of your entire Google experience, thanks to Gemini. From helping with accessibility and AI-powered scam detection to advanced search capabilities, support for new multi-modal AI-powered glasses, and video search, this might be the most transformative Google I/O keynote we’ve seen in years.
While there were many impressive announcements, a few stood out as game-changers that could genuinely transform everyday life. These features promise to level the playing field in terms of income, visual impairments, location, and more. Here are the Gemini AI features that I believe will truly make a difference.
Project Astra
If there’s one feature you should remember, it’s Project Astra. This one’s a biggie, especially if you’re visually impaired. My sister has a +12 prescription, and Project Astra paves the way for multi-modal smart glasses that could drastically improve the lives of people with vision issues.
Sure, Google has dabbled in smart glasses before, and while there was no specific hardware mentioned at I/O, the glasses are real. We’ll probably hear more soon. Whether they come from Google or through partnerships with companies like OPPO, Samsung, or Meta, having your AI-powered assistant see what you see and help answer questions is a massive leap forward.
Ask Photos
Google Photos is getting a significant AI upgrade. Instead of manually searching through your gallery or browsing your favorite images, you can ask Gemini—now replacing Google Assistant on your phone—simple questions, and it’ll find the answers for you.
It works by breaking down your prompt into actionable steps behind the scenes, automating the multi-step process you’d usually have to do yourself. Demos showed searches like “What’s my car number plate?” and “How has my daughter’s swimming progressed?” But as always, the real magic will happen once everyone starts using it.
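Google hasn’t said how Ask Photos is built, but the idea of turning one question into a chain of smaller steps is easy to picture. Here’s a deliberately simplified Python sketch of the swimming example, with a made-up in-memory photo library standing in for your real one; every name, field, and rule here is hypothetical, not Google’s actual pipeline.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical stand-in for a photo library; Ask Photos works on your real
# Google Photos library, but this toy data model is purely illustrative.
@dataclass
class Photo:
    taken_on: date
    people: list[str]
    labels: list[str]
    caption: str

LIBRARY = [
    Photo(date(2022, 7, 4), ["Lucia"], ["pool", "swimming"], "first time in the shallow end"),
    Photo(date(2023, 8, 12), ["Lucia"], ["pool", "swimming"], "a full lap unassisted"),
    Photo(date(2024, 5, 3), ["Lucia"], ["lake", "swimming"], "open-water swim at the lake"),
]

def progress_summary(person: str, activity: str) -> str:
    """Mimic the multi-step plan: filter by person, filter by activity,
    sort chronologically, then stitch the results into one answer."""
    matches = [p for p in LIBRARY if person in p.people and activity in p.labels]
    matches.sort(key=lambda p: p.taken_on)
    timeline = "; ".join(f"{p.taken_on.year}: {p.caption}" for p in matches)
    return f"{person}'s {activity} progression: {timeline}"

print(progress_summary("Lucia", "swimming"))
```

The real system presumably leans on Gemini to recognise people and activities in the first place; the point of the sketch is only the filter, sort, summarize chain that replaces your manual scrolling.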
Scam Detection During Calls
One of the new Android features set to make a huge impact is scam detection during calls. With Gemini AI built into every Android phone, not just Pixels, this always-on AI assistant is designed to help you when it thinks you need it.
A demo showed a simulated scam call in which the caller suggested moving money to a “safe account.” Gemini AI alerted the user with a big on-screen prompt, making scammers’ jobs much harder. While scams will likely evolve, Gemini will keep learning and improving.
Ask with Video
I love Circle to Search, and Google just took it to the next level by adding video. Instead of puzzling over how to phrase a prompt, you can point your phone’s camera at something and ask Gemini your question out loud. Gemini then finds the answer faster than you could on your own.
A demo showed this with a record player where the tone arm kept sticking. By pointing the camera at the record player and saying, “This piece keeps sticking,” Gemini surfaced the product name and model number, the name of the part, and examples of how to fix it.
This was a slick demo, and I can’t wait to try Ask with Video. I used Circle to Search to find the value of items during a move, but even that required multiple prompts. Soon, I’ll be able to point at something and ask Gemini about its worth, the best place to sell it, and its long-term value.
Let Google TalkBack to You
If you’re severely visually impaired, TalkBack is one of Android’s most valuable accessibility features, and it just got a major AI upgrade. Even if you can’t see the details on the screen, you shouldn’t be left out, so Google is using AI to describe them and assist with everything you might need.
Imagine shopping for a new outfit. You might not be able to make out the finer details of a photo, and many sites don’t include proper alt text on their images. AI can describe the item in far more detail, and with Gemini’s full capability behind it, that could change lives, allowing visually impaired users to do more independently.
Bonus: Google Workspace
Google announced a slew of new Workspace features, all powered by Gemini AI. Whether you’re a sole trader, small business owner, or someone who values Google Workspace in your daily life, Gemini AI is set to transform your experience.
Gemini is integrated as a sidebar within many Workspace apps, effectively Google’s answer to Microsoft’s Copilot. It lets you summarize emails, answer questions, and build complex automated workflows that reduce manual data entry and provide the assistance your business needs.
While it’s not just for businesses, these features are limited to Workspace accounts, Google’s paid productivity suite. Some demos were designed for personal use, like summarizing quotes for a home improvement project or aggregating receipts in a single Google Drive folder. Others were solely for businesses, like creating an automated routine where Gemini looks for emails with financial information and adds them to a spreadsheet.
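Google hasn’t shared how those routines are wired up under the hood, so treat the following as a rough illustration only: hypothetical email data, a local CSV standing in for a Google Sheet, and a simple regular expression standing in for Gemini’s extraction step.

```python
import csv
import re

# Hypothetical inbox snapshot; the real feature would read Gmail and write to
# Google Sheets, but this sketch sticks to plain Python and a local CSV file.
EMAILS = [
    {"sender": "roofer@example.com", "subject": "Quote for roof repair",
     "body": "Total estimate: $4,250 due on completion."},
    {"sender": "friend@example.com", "subject": "Weekend plans",
     "body": "Fancy a hike on Saturday?"},
    {"sender": "plumber@example.com", "subject": "Invoice #118",
     "body": "Amount owed: $310, payable within 14 days."},
]

# Crude pattern for dollar amounts; Gemini would extract these far more flexibly.
AMOUNT_PATTERN = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

def financial_rows(emails):
    """Keep only emails that mention a dollar amount and pull out the figures."""
    rows = []
    for email in emails:
        amounts = AMOUNT_PATTERN.findall(email["body"])
        if amounts:
            rows.append([email["sender"], email["subject"], ", ".join(amounts)])
    return rows

with open("financial_emails.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sender", "subject", "amounts"])
    writer.writerows(financial_rows(EMAILS))
```

It’s a toy, but it captures why the demo resonated: the tedious part of that workflow is exactly the kind of repetitive extract-and-file loop an assistant should handle for you.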
For businesses of all sizes, Gemini for Google Workspace could be transformative. Many employees spend valuable time searching for answers to simple questions. In the demo, Google showed how Gemini could learn from a team’s communications. Instead of asking multiple people whether something was approved, you can create a virtual teammate powered by Gemini that looks at every conversation and answers in real time, making information retrieval effortless.
A Hopeful Future
Google knew this would be a landmark Google I/O, and every announcement was framed around global impact. The demos focused on real-world use cases for AI, not niche edge cases.
The keynote left me feeling hopeful. We’re seeing what an AI-powered future could look like. Sure, Google has a history of not fully delivering on its promises, but these AI features have so much potential that I genuinely hope they come through. Then maybe, just maybe, we can let Google handle all our Googling in every aspect of life.