Google I/O 2024: Android 15, Google Gemini, and everything else announced
Welcome to the Laptop Mag live blog with our coverage of Google I/O 2024!
Looking to catch the most up-to-date information on all of Tuesday's reveals from Google I/O? If so, you're in the right place! We live-blogged ahead of Google's keynote and throughout the presentation to bring you the latest news heading out of the Shoreline Amphitheatre in Mountain View, California.
We'll be updating this page as we expand our coverage, so if you missed the presentation, scroll through and you'll get the highlights, then stick around for our deeper insights into everything Google announced in the hours and days ahead.
Google I/O 2024: Everything announced
Android 15: We're getting some big updates to Android devices and it's all about AI. Android 15 will feature AI-powered search, an all-new Gemini AI assistant, and on-device AI to speed up processes while maintaining data privacy. Circle to Search, which lets you circle and search whatever is on your screen, is available now.
Google Gemini: Oof, I got Gemini all in my Google I/O! Yep, we learned about a wide range of Gemini models available globally now or coming soon, including Gemini 1.5 Pro and Gemini 1.5 Flash. We got new generative software for videos, music, and images. Gemini is being introduced in more audio and visual formats through apps like NotebookLM and Project Astra. In essence, Google wants to "do the work" so you don't have to.
Pixel Fold 2: Sorry, gang. It didn't happen.
Google being more responsible: In one of the more relieving updates concerning generative AI content, Google will be implementing watermarks in AI-generated images, audio, and video.
Google I/O 2024: Follow-up coverage
More from Google
We called Google predictable in the run-up to Google I/O 2024, and almost as if to rebuff our claims, the company went ahead and released two of its most safely expected pieces of hardware with little fanfare last week.
That means the Google Pixel 8a and dock-less Google Pixel Tablet are already out in the wild.
However, if you'd like to know more about either device, check out some of our coverage on these Pixel products below!
Google I/O 2024: The countdown begins!
We are now LIVE, and reporting to you ahead of Google I/O 2024! We'll be bringing you everything we know so far in the lead-up to today's event and recapping the latest Google news and rumors along the way.
Introducing Google Gemini
For those of you not already in the loop, you may be wondering what Google Gemini is. Fear not, we'll get the introductions underway for you!
Gemini has replaced Google Bard as the company's flagship large language model (LLM). While Google Bard was powered by LaMDA and PaLM 2 models, Gemini is running on its own family of Gemini models, offering sweeping performance improvements and better logic and reasoning.
Gemini is Google's latest multimodal AI, much like ChatGPT. This means that Google's chatbot is capable of working with you across images, audio, video, and code, as well as communicating using natural human-like language.
Introducing Android 15
Android 15 is the next milestone release of Google's mobile operating system. It's designed to complement both smartphones and tablets, much like iOS and iPadOS for Apple devices.
However, unlike Apple's mobile operating systems, Android is known for its high level of customization making it ideal for power users or those seeking to make their device truly unique to them.
Android 15 is currently in the beta testing phase of development; however, Pixel phone owners can opt in to the operating system update early and experience what's coming through the pipeline.
We expect Android 15 to release in full later this year, potentially alongside the release of Google's Pixel 9 and Pixel 9 Pro phones at the October 2024 Made By Google event.
Android 15: More than performance tweaks
Of course, much like any other operating system, you can expect a series of bug fixes and performance tweaks. However, Android 15 will bring a number of new features to Android smartphones too.
From what we know so far, Android 15 will be bringing a host of new tools and services to the platform, including everything from a macOS-like persistent taskbar for tablets and large-screen devices to satellite connectivity support.
We expect you'll see and hear more about Android 15's many features during Tuesday's live stream. Hopefully, Google will sneak in a few surprises too!
Want to know more about some of the features arriving with Android 15? You can always check out our dedicated page for Android 15 news and rumors, or catch up with some of the articles our writers have published about the tools and services that are on their way right now!
Introducing the Pixel Fold 2
Another strong candidate to be revealed on Tuesday comes by way of the Pixel Fold 2, Google's second attempt at a foldable phone.
We reviewed the original Pixel Fold in June 2023, awarding it four out of five stars in the process. Considering this was Google's first attempt at making a foldable device, we were impressed, especially when it came to the phone's solid battery life and vivid display.
However, there was still plenty of room to improve. The Pixel Fold 2 is Google's opportunity to do just that, hopefully through addressing issues with processing power, charging speeds, and in an ideal world, price.
The latter is unlikely to change; foldables are still pretty expensive at the best of times. However, we're holding out hope that Google addresses our other critiques.
Pixel Fold 2: What we know so far
If what we've heard about the Pixel Fold 2 so far turns out to be true, then we could be looking at a super sequel to Google's "formidable first foldable."
Not only will the Pixel Fold 2 be joining the Pixel 9 and Pixel 9 Pro in receiving a slightly altered appearance, but it'll also be joining those devices in being outfitted with Google's upcoming Tensor G4 chipset.
One of the key issues with the original fold was the fact it held onto the G2 chipset, while the Pixel 8 and Pixel 8 Pro got to enjoy Google's updated G3 model. Well, this time around, it would appear that the Pixel Fold 2 is getting the flagship treatment.
Pixel Fold 2: New look, new name?
New look and processor aside, there might be another big "new" heading the way of the Pixel Fold 2: a new name.
The entire Pixel 9 line-up is in for a shakeup in 2024, if rumors prove true. Current reports indicate that the Pixel 9 Pro will be split into two variants: one retaining the smaller size of the Pixel 9, and the other matching the larger frame of the Pixel 8 Pro. This larger phone will be known as the Pixel 9 Pro XL.
Not only that, it's rumored that the Pixel Fold will be brought into this group of phones more clearly by renaming it the Pixel 9 Pro Fold. It's a little more of a mouthful, but it makes it clearer which generation each phone belongs to.
Read more here: Google Pixel Fold 2 may have a new name — here's why that's exciting
Google I/O 2024: What we (probably) won't see!
We can't say for certain what Google has in store for us when I/O kicks off in a few hours' time, but we do know we're unlikely to see much of certain products at Tuesday's showcase.
Among those unlikely to make an appearance are the already launched Google Pixel 8a and the dock-less Pixel Tablet. They were shuffled out early under the cover of darkness. Not in the middle of the night, but just in the shadow of Apple's "Let Loose" event.
Also unlikely to make an appearance at Tuesday's event is much information about the Pixel 9 line-up. We may get a glimpse at the new Pixel 9-wide design, but don't expect much more information than that.
Similarly, it's highly unlikely that we'll be seeing the rumored Pixel Watch 3; that's much more likely to be showcased during October's Made by Google event.
Google I/O 2024: How to watch the keynote live
Google I/O kicks off with the Google keynote Tuesday at 10 a.m. PT (or 1 p.m. ET), which will be live-streamed via Google's YouTube channel and Google's website.
Trying to watch along from outside the U.S.? We've rounded up when the keynote takes place in other time zones to make sure you're among the first to catch Google's reveals as they happen.
Denver, Colorado: 11 a.m. Mountain
Dallas, Texas: 12 p.m. Central
Honolulu, Hawaii: 7 a.m. Hawaii-Aleutian Standard Time
Halifax, Canada: 2 p.m. Atlantic Daylight Time
London, United Kingdom: 6 p.m. British Summer Time
Berlin, Germany: 7 p.m. Central European Summer Time
Delhi, India: 10:30 p.m. Indian Standard Time
Dubai, United Arab Emirates: 9 p.m. Gulf Standard Time
Pixel 9 news is here, just not from Google
While Google might not have plans to reveal the Pixel 9 line-up Tuesday, that hasn't stopped the internet from stepping in to do so itself. Russian website Rozetked has been kind enough to share with us all pictures of the Pixel 9 line in full, including the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro XL.
According to the leaks, each phone in the Pixel 9 line will feature a 120Hz AMOLED display and Google's upcoming Tensor G4 chipset.
The base Pixel 9 will include 12GB of RAM with configurations starting at 128GB of storage. Both the Pixel 9 Pro and Pro XL will offer 16GB of RAM with the same base storage configuration.
Also, only the Pixel 9 Pro and Pro XL will include UWB, locking in the more precise tracking possibilities to the more expensive models.
Can Google measure up to OpenAI's GPT-4o reveal?
Yesterday's GPT-4o reveal by OpenAI was fairly impressive, showcasing a multimodal AI that seems to deliver in ways companies like Google and Apple have promised their virtual assistants of old would one day be capable of.
Sadly, it seems like the writing is on the wall when it comes to Google Assistant and Siri, with Gemini putting down roots on Android devices as the new go-to assistant and Apple reportedly close to penning a deal with OpenAI to bring ChatGPT tech to the iPhone.
Does Google have enough up its sleeve to measure up to the impressive offerings of GPT-4o, or does this post by @MorningBrew on X hit a little too close to home?
There's a chance...
While OpenAI came out of the gate swinging, Google is already teasing big things on the Gemini front. In a video posted to the official Google X account yesterday, someone can be seen making use of native video processing with Gemini on a Pixel smartphone — with the AI responding to prompts based on what the phone's camera was currently looking at.
In the video, Gemini is able to decipher that people are currently setting up for an event, and even recognize the Google I/O logo displayed on stage. While not as free-flowing or natural sounding as OpenAI's GPT-4o, Gemini's conversational capabilities are still well delivered, though lack the human-like reaction speed ChatGPT is capable of.
Google I/O 2024: 30 minutes to go
We're now just 30 minutes away from the Google I/O opening keynote. We'll be live blogging through the showcase in its entirety with follow-up coverage to share once the show draws to a close.
Want to watch along live? Head to Google's YouTube channel or the Google I/O website, or stay put and we'll keep you informed of everything announced by Google from the Shoreline Amphitheatre in Mountain View, California.
The show has begun
Google is opening with a trailer highlighting AI and other Google features. CEO Sundar Pichai takes the stage, prepping us for the Gemini chat.
Pichai is talking about Google's Gemini Era and the success of Gemini 1.5 Pro. We're getting a lot of background about Gemini's user base and where you can access it — Android and iOS devices.
AI Overviews, a new feature where Google offers generative information based on search results, will launch this week in the US. It's an experience that lets users input longer and more in-depth prompts.
Big upgrade for Google Photos
Pichai introduces Ask Photos, a feature rolling out this summer that will let users ask questions like "What's my license plate number again" or "When did my daughter learn to swim" in Google Photos.
This seems like a convenient service, but I'd love to know about how Google plans to keep information secure.
We're now getting some interview footage of developers and how they've used Gemini 1.5 Pro to help with their coding.
Pichai has announced that Gemini 1.5 Pro will be available to all developers globally, along with new updates offering improvements across the board. The context window also increases from 1M to 2M tokens (a token is a small chunk of text, roughly a few characters, that the model processes as a unit).
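To put that 2M-token figure in perspective, here's a minimal sketch of estimating whether a document fits in the window. The ~4-characters-per-token ratio and the ~3,000-characters-per-page figure are rough rules of thumb for English text, not official Gemini numbers — real tokenizers vary.

```python
# Rough illustration of what a 2M-token context window means in practice.
# chars_per_token=4 is a common English-text heuristic, not a Gemini figure.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate for a piece of English text."""
    return max(1, round(len(text) / chars_per_token))

def fits_in_context(text: str, window: int = 2_000_000) -> bool:
    """Check whether a document plausibly fits in a 2M-token window."""
    return estimate_tokens(text) <= window

# A 1,500-page PDF at ~3,000 characters per page is ~4.5M characters,
# or roughly 1.1M tokens -- comfortably inside a 2M-token window.
doc = "x" * (1_500 * 3_000)
print(estimate_tokens(doc), fits_in_context(doc))
```

By this back-of-the-envelope math, the 1,500-page PDF Google mentions later in the keynote sits comfortably inside the new window.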
NotebookLM is a classroom in an app?
Josh Woodward, Vice President of Google Labs, demonstrated NotebookLM, a research and writing tool. Gemini 1.5 Pro is also coming to NotebookLM. You can add worksheets, homework, and textbooks, and it'll create a "Notebook guide," giving you a study guide, an FAQ, and even a quiz.
NotebookLM now features audio output, generating an audio discussion based on the material presented. If you tap Join, you can join in on the conversation and ask the AI a question. It will process and respond to your question as if you're conversing with an actual teacher.
Gemini, return my shoes please
Pichai is now talking about the use cases it wants to solve, like how Gemini will search the receipt, fill out a form, and schedule a pickup for a pair of shoes you want to return. Another example is when you want to move and change your address — Gemini could help you do it for all of your apps and even offer tips about exploring your new home.
Pichai says Google plans to "organize the world's information and make it universally accessible and useful."
Demis Hassabis, Co-Founder & CEO of Google DeepMind, is talking about AlphaFold 3, which just launched last week. It's designed to help research diseases and aid in drug discovery. Now he has introduced Gemini 1.5 Flash, which is meant to offer low latency with up to 1M Tokens, and is available in Google AI Studio and Vertex AI.
This sounds like a stripped-down version of Gemini 1.5 Pro to prioritize tasks that require speed more than anything else.
Gemini, where did I leave my keys?
Hassabis announces Project Astra. This is a visual-based AI model, so you provide input via your camera and then ask a question. We're getting a demonstration of the device identifying something that makes a sound (speaker). Then it identifies part of the speaker (tweeters). The AI model even recalled where the user left their glasses — this is so cool and so creepy at the same time. It also provided creative responses, like a band name for a puppy and a stuffed animal in the image.
Anyone can be an artist with these generative tools
Doug Eck, Senior Research Director at Google DeepMind, introduces Imagen 3. It's a new image generation model that produces fewer visual artifacts, renders more detailed backgrounds (sunlight, lens flare), and handles text better to more accurately create what you're looking for. You can sign up at labs.google.
Eck also shows off Music AI Sandbox, a new tool to aid in music generation. Google got artists to test this software, including Wyclef Jean and Marc Rebillet, who was on stage earlier. The software can fill in sparse elements on your track when you just describe the kind of sound you want to include.
Finally, Eck reveals Veo, a new model that creates 1080p videos from text, images, and video prompts, capturing detailed cinematic videos. Videos are also more consistent, as displayed with a car driving on the road. Veo was tested by Donald Glover, who produced a short film through AI. Veo is designed to capture nuance in prompts, like cinematic techniques and stylized visuals.
Google is pushing for people to use these models to advance creative endeavors, but it's unclear what the social and ethical impact might be.
More AI, more power
Sundar Pichai, Google CEO, announces the sixth-generation of TPUs called Trillium, which delivers 4.7x performance, available in late 2024 to Cloud customers. Nvidia's Blackwell GPUs will also be available in 2025.
Pichai announced to the audience that Google's "AI hypercomputer advancements are made possible in part because of liquid cooling in our data centers."
Let Google do the work for you
Liz Reid, Vice President of Google Search, is talking about how it combines its massive information well with Gemini's power to "do the work for you."
AI Overviews will come to 1 billion people by the end of the year. Google is also introducing "multi-step reasoning," which lets you ask complicated questions like "find the best yoga studio, with the best offers, and its distance from you."
Now Google can help with planning as well, including 3-day meal plans. You can even ask the AI to swap out dishes. The generative model also uses contextual factors if you're looking for "anniversary-worthy restaurants," like the time of year.
Reid says "AI-organized search results pages will start with dining and recipes, and come to movies, music, books, hotels, shopping, and more."
Help me fix my car
Rose Yao, VP of Product at Google, is doing a live demo about how to fix a record player. You can use your camera to demonstrate the problem, and Google will provide links to help you fix it.
"Why will this not stay in place?" asks Yao, as she holds her phone above a broken record player, demonstrating Google's video search capabilities. "In a near-instant Google gives me an AI overview," she says.
Yao explains in-depth how this works: "Thanks to our combination of our state-of-the-art speech models, our deep visual understanding, and our custom Gemini model, search is able to understand the question I asked out loud, break down the video frame by frame — each frame was fed into Gemini's long context window...so search could then pinpoint the exact make and model of my record player, and make sense of the motion across frames to identify the tone arm was drifting. Search fanned out and combed the web to find relevant insights from articles, forums, videos, and more. And it stitched all of this together into my AI overview."
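The frame-by-frame, long-context approach Yao describes can be sketched as a toy pipeline: sample frames from the clip, then check that the frames plus the spoken question fit in a single long-context request. Everything concrete here (the per-frame token cost, the context budget, the sampling rate) is an illustrative assumption, not Google's actual implementation.

```python
# Toy sketch of feeding video frames into a long-context model request.
# TOKENS_PER_FRAME and CONTEXT_BUDGET are assumed values for illustration.

from typing import List

TOKENS_PER_FRAME = 258      # assumed per-image token cost
CONTEXT_BUDGET = 1_000_000  # assumed long-context window

def sample_frames(n_frames: int, fps: int, sample_rate_s: int = 1) -> List[int]:
    """Pick one frame per sample_rate_s seconds from an n_frames video."""
    step = fps * sample_rate_s
    return list(range(0, n_frames, step))

def frames_fit(frame_ids: List[int], question_tokens: int) -> bool:
    """Would these frames plus the transcribed question fit in one request?"""
    return question_tokens + len(frame_ids) * TOKENS_PER_FRAME <= CONTEXT_BUDGET

frames = sample_frames(n_frames=300, fps=30)   # a 10-second clip
print(len(frames), frames_fit(frames, question_tokens=50))
```

Even with every sampled frame included, a short clip like Yao's record-player video consumes only a few thousand tokens, which is why a long context window makes this kind of search feasible.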
This is useful for those who don't have the know-how in any field. Googling medical health symptoms is about to get a lot more visual. I hope I can also learn why my car's engine light is on.
Aparna Pappu, Vice President and General Manager of Google Workspace, announced that Side Panel will be generally available next month.
Gmail is getting smarter by letting you ask the AI questions within your email. It can also summarize other messages in your inbox to help you craft an accurate response.
Google Gemini is also rolling out this month to Google Labs users, with suggested responses to your emails. It's a step beyond the current Smart Reply feature.
Pappu claims that this evolution is neat because it's "contextual," as "Gemini understood the back and forth in that [email thread] and that [the sender] was ready to start, so [Gemini] offers me a few customized options based on that context."
Google, help me budget!
Pappu also revealed that Google is getting into the smart workflows business — finally — with its Help Me Organize and Track feature in Gmail. It will offer suggestions to add files to Google Drive, and create a new Google Sheet that tracks that data.
Automation companies like Zapier and project management apps may wonder if Google is giving them competition with automation and spreadsheet features that look a lot like Airtable or Asana.
Pappu said the Google Workspace tools to "organize attachments in [Google Drive] and generate a [Google Sheet] and do data analysis via Q&A will be rolling out to [Google Labs] users this September."
This lets you organize the information in your emails based on relevant context, like creating a folder of receipts you've received in your email. You can also choose to automate these processes as well. It'll create a conveniently organized Receipt Tracker spreadsheet in Google Sheets.
AI Workflows will come to Google Labs users in September. Pappu says Google Gemini tools like "help me write" will be in general availability in June. The features had previously been opt-in.
Pappu introduces AI Teammates, which can act like producers — organize all of your team's data and catch conflicting schedules or information that everyone will need to know about.
Sissie Hsiao, General Manager for Gemini experiences, now takes us to new information on the Gemini app. Gemini Live is a new service that you can use to talk to Gemini with your voice. You can even interrupt Gemini while conversing with it. This will be available in the summer. Later this year, Gemini will be able to respond to your video input as well (without you verbalizing anything).
Gems is another service that lets you save custom instructions, so Gemini will keep your routines and habits in mind. It can also act as an instructor or coach.
Gemini Advanced is more... advanced
Gemini Advanced will now let you ask larger, complex prompts about trip planning. Yes, you can plan an entire vacation with AI. It can suggest restaurants based on when you land. And if you like to sleep in, you can tell Gemini and it will adjust your schedule accordingly.
This trip-planning experience will roll out to Gemini Advanced this summer. As stated previously, Gemini Advanced subscribers will get access to Gemini 1.5 Pro. With this new feature, you can input much more data, like a PDF 1,500 pages long or 30,000 lines of code. This is a great feature for students trying to consume a thesis worth of content before a big test.
Android with AI is coming
Sameer Samat, Android Ecosystem President at Google, discusses what's coming to AI on Android. Android phones will feature AI-powered search, Gemini will be the new AI assistant, and on-device AI will work faster while keeping your data private.
Circle to Search, available now, is exclusive to Android. It lets you circle any part of your screen and input it to Gemini. For example, you can circle a complex math problem and it'll solve it for you. Later this year, Circle to Search will be able to solve more complex problems.
Using Gemini in Messages app
Google Pixel 8a makes an appearance! But of course, we're getting a Gemini demo. You can use Gemini features directly in the messaging app.
You can generate images in a little window and paste them into the app. You can also summarize YouTube videos that people send you, so you do not have to watch them if you don't have time. And if someone sends you a PDF, Gemini can also help you summarize that too based on the context you need. You can ask follow-up questions as well.
Gemini Nano — You get a Gemini, and you get a Gemini
Gemini Nano is another AI variant that offers multimodality.
It enhances features like TalkBack to add context to images that it describes. Updates to TalkBack are coming later this year. Gemini Nano can also detect spam calls while a call is active — picking up on triggers like unreasonable requests for information.
Android 15 Beta 2 is coming on Wednesday.
Time to save some money, developers
Gemini 1.5 Pro and Gemini 1.5 Flash are available globally now with their most recent updates. Google AI Studio is a free service that lets you use either model.
There are new API features, including video frame extraction, parallel function calling, and context caching. These features will be available next month. Google is also reducing the price of its AI models, which is probably only relevant for developers.
Gemini 1.5 Pro costs $7 per 1M tokens (prompts up to 128K are 50 percent less — $3.50)
Gemini 1.5 Flash starts at $0.35 per 1M tokens up to 128K.
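Those tiers are easy to turn into a back-of-the-envelope cost calculator. This is a sketch based only on the input-token prices quoted above; the higher Flash tier for prompts beyond 128K wasn't stated on stage, so the $0.70 figure is an assumption, and real billing (output tokens, caching discounts) is more involved.

```python
# Quick input-cost calculator based on the per-1M-token prices quoted above.
# Prompts up to 128K tokens get the lower rate; longer prompts, the higher.

PRICING = {
    # model: (rate for prompts <= 128K tokens, rate above 128K), $ per 1M tokens
    "gemini-1.5-pro":   (3.50, 7.00),
    "gemini-1.5-flash": (0.35, 0.70),  # >128K Flash rate is an assumption
}

def prompt_cost(model: str, prompt_tokens: int) -> float:
    """Estimated input cost in dollars for a single prompt."""
    low, high = PRICING[model]
    rate = low if prompt_tokens <= 128_000 else high
    return prompt_tokens / 1_000_000 * rate

# A 100K-token prompt to Pro costs 0.1 * $3.50 = $0.35;
# the same prompt to Flash costs 0.1 * $0.35 = $0.035.
print(prompt_cost("gemini-1.5-pro", 100_000))
print(prompt_cost("gemini-1.5-flash", 100_000))
```

The gap is the point: for speed-sensitive, high-volume workloads, Flash comes in at a tenth of Pro's price.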
Introducing PaliGemma and Gemma 2
Gemma is Google's family of open models. It's built from the same research as Gemini, but it lets developers use it for free.
Now Google is announcing PaliGemma, another open model that takes visual input. Gemma 2 is also on its way; it's a 27B-parameter model optimized for the previously announced TPUs and GPUs.
Responsibility with AI
James Manyika, SVP of Research at Google, claims that "building AI responsibly means both addressing the risks and maximizing the benefits for people and society."
He talks about Google's Red Teaming program, which takes its developers and makes them break its own AI. However, Google is introducing AI-assisted Red Teaming, so AI can help break... the AI? I think history has taught us that investigating ourselves usually turns up with a not-guilty verdict.
However, what is exciting is that Google will be implementing watermarks in AI-generated images, audio, and video. Google is also introducing LearnLM, which is a new model that makes education more engaging.
Back to Gems, there will be pre-made Gems, such as Learning Coach, which gives teachers a personal assistant. Instead of providing answers, it will help you get the answer on your own.
Google I/O 2024: That's a wrap
It was nice to see the Google Pixel 8a, but unfortunately, there were no other hardware announcements in sight. What we did get, however, were lengthy updates to Gemini AI models. I am excited to see how effective this is for consumers, primarily Android users.
If you're wondering what was important here to Google, we asked Gemini to summarize the event:
Focus on AI: There's a big focus on artificial intelligence (AI), with announcements about upgrades to Google's AI assistant, Gemini. The goal is to make Gemini more helpful in the real world through a feature called "Gemini Live."
Android 15: Google revealed plans for the next version of the Android operating system, Android 15. We don't have all the details yet, but there will likely be updates to Wear OS and Android TV as well.
Other announcements: Other announcements include new features for the TalkBack accessibility tool and a multi-modal version of Gemini Nano for Android.
Well... Gemini is kind of right?