Gemini Live is the best AI feature I've seen so far from Google
With ChatGPT rolling out Advanced Voice Mode to some users this month, and Apple on the verge of launching Apple Intelligence, Google has fired back with Gemini Live, a version of the Gemini AI that you can talk to on your phone as if it were a real person. Gemini Live is currently only available to Gemini Advanced customers on the AI Premium plan, which costs $20 (£18.99, AU$30) a month, but it should work for any subscriber with a compatible phone, not just those with a shiny new Google Pixel 9, which the search giant just launched.
My first impression is that Gemini Live is really impressive to hear in action. Finally, I can chat with my phone as if it were a real person, which is all I've ever wanted to do since voice assistants like Google Assistant, Siri and Alexa became a thing. Unfortunately, for the last few years I've been reduced to using Siri and Alexa to set timers on my phone or play music, since there's a limit to how useful they can be; they usually refer me to a web page if I ask anything too complicated. In contrast, with Gemini Live I can have a conversation about just about anything and it will give me a meaningful answer. It understands my words and intent on a whole new level. If I ask Gemini how the USA did at the recent Paris Olympics, it responds with a real answer. If I ask it to recommend a diet plan, it gives me some ideas based on what it knows about me.
Of course, I could already talk to Gemini on an Android phone and ask it basic math questions, or ask it about the weather, but the new Gemini Live is a whole new beast. With Gemini Live I can have a real conversation about complex topics, ask it to brainstorm or ask it for advice. To make the conversation truly realistic, I can also interrupt its responses, so if an answer is going on too long, I can cut Gemini off and ask it something else. It feels a bit rude, but machines don't have feelings, right? I don't need to press anything on the screen to talk to Gemini either, so it's a totally hands-free experience, meaning I can use it while doing other tasks.
Gemini Live is also multimodal, so it can 'look' at images or videos on your phone and answer questions about them. This can be particularly useful if I want to take a photo of something and then ask Gemini Live a question about it. It will intelligently pull information from the photo and use it in its response. Despite a few hiccups in the live demo at the recent Made by Google event, this is genuinely useful.
Google is still adding features to Gemini (and presumably will keep adding them forever), and "in the coming weeks" extensions will arrive that start to make it really useful, allowing Gemini to integrate with various apps, like Calendar and Gmail. So, you will be able to say things like, "Find the specs that James sent me in an email a couple of weeks ago", and it will be able to do it. That feature could end up being the sleeper hit for Gemini Live.
All in all, Gemini Live is the best use of AI I've seen from Google so far. Google has spent a lot of time and money trying to integrate AI into its search pages with AI Overviews, which isn't what I want. I don't want AI taking over my searches and getting in the way with unhelpful answers when all I want is to be directed to a web page. AI can still get its facts wrong, and Gemini is no different in that regard. I simply want AI to help me with my life, and while there's still lots to come that will take Gemini Live up to a whole new level, for now I can wave goodbye to Google Assistant and have a real conversation with my phone, and that's pretty amazing.