Key Takeaways
- Google’s AI-centered event showcased a widening gap between Apple and its competitors in the AI race.
- Google’s Pixel 9 offers advanced AI features now, while Apple Intelligence is rolling out slowly over the next year.
- Despite the similarities, Google’s vision for Gemini on phones is ultimately more ambitious.
Despite making a strong case for the Pixel 9 lineup launching over the next few weeks, it was hard not to think of Apple during Google’s event. Less because of the iPhone-like hardware Google showed off, and more because the event’s real star — Google’s choice to make AI the new center of Android — made it dramatically clear how far behind Apple Intelligence is.
Google and Apple have a similar problem to solve: transforming AI models that seem most useful to computer engineers and medical researchers into consumer products the average person can comfortably use. The key difference between what Google demoed during its event and what Apple announced at WWDC 2024 is timing. Apple might present a brighter, safer, and friendlier vision for how AI can work on the iPhone, but Google is ready to offer nearly all the same features essentially now, rather than through an extended beta program, along with several ideas Apple isn’t even attempting.
New smartphones and smartwatches may have been the reason Google hosted its Made by Google event, but the big takeaway is a widening gap between Apple and competitors more ready to capitalize on the attention generative AI is getting, one that no amount of shiny new iPhones can fix.
Google and Apple have similar ideas about AI on phones
An assistant at the core, with the context of all your apps
While Google and Apple run fundamentally different businesses — Google is focused on services, Apple on hardware — the companies have ultimately arrived at very similar ideas for how AI should work on smartphones. They both have some kind of assistant (Gemini and Siri) that you can directly access for on-device information and general requests, and when you need it, the assistant can leverage contextual information from other apps to respond to more complicated questions and tasks.
Both companies are also pursuing a mixture of on-device processing and sending requests to the cloud. Google has long relied on its servers for some of the more demanding tricks that Pixels can do, like Video Boost, which color corrects and smooths out even the roughest of video footage. On the Pixel 9, though, one of the headlining features, Pixel Screenshots, happens entirely on-device thanks to an updated version of Google’s smaller Gemini Nano model. The app, which organizes any screenshots you take and makes them searchable with natural language, isn’t something Apple is even attempting at this point.
Google and Apple are also spreading transcription and summaries, two of the things AI is generally okay at, throughout their operating systems. Google offers Call Notes, which transcribes and summarizes calls. Apple is similarly adding Call Recording and transcriptions to iOS 18. Gemini can summarize the contents of your Gmail inbox, while the Mail app in iOS 18 just includes summaries at the top of emails. Both companies are offering on-device image generation tools for creating images to use wherever you want on your phone, too.
Gemini is technically more flexible than Siri in the kinds of questions it can answer, something Apple hopes to supplement by offering the option to send more complicated requests to ChatGPT, but the companies are largely aligned on where AI is currently usable on smartphones. Google is just able to offer more complexity, whether that’s combining photos and text in a single prompt (something Siri can’t do) or holding a lifelike conversation with an AI assistant through Gemini Live.
Critically, it’s able to do those things right now. The company hosted its event live and filled it with live demos of these new features. Not all of them worked, and the whole thing was a little awkward, but it demonstrated a point. Apple famously held high-profile live keynotes before pivoting to pre-recorded, edit-within-an-inch-of-their-life video presentations during the early COVID-19 pandemic, and never looked back. Google “doing it live” was one of several ways the company tried to differentiate itself from Apple throughout the event. More importantly, it showed that these new AI features can work now rather than in a few months or years.
Apple Intelligence is still months away
It’s going to be a while before we meet the new Siri
Cruise through Apple’s web page explaining the features of Apple Intelligence, and you’ll find two key details the company hasn’t been too loud about:
- Apple Intelligence is launching in “Beta” this fall with iOS 18, iPadOS 18, and macOS Sequoia.
- “Some features, additional languages, and platforms will be coming over the course of the next year.”
The looseness of describing Apple Intelligence as a beta, and of suggesting some features won’t arrive until 2025, gives Apple a lot of flexibility to ship something that looks drastically different from the experience it showed off in its video presentation. If the developer betas are anything to go on, a few of Apple Intelligence’s biggest features likely won’t be included when Apple’s new software launches later this year. Pocket-lint was able to go hands-on with Writing Tools for generating text, Apple’s new summary and transcription features, and the Siri visual redesign, but the rest of what Apple demoed, like Image Playground and Siri’s ability to work across apps and draw on contextual information about what’s on your screen, is missing.
Bloomberg reports that Apple plans to technically launch Apple Intelligence with iOS and iPadOS 18.1, but features will be added over time, “via multiple updates to iOS 18 across the end of 2024 and through the first half of 2025.” The revamped Siri features will reportedly be part of one of those 2025 updates, while the new visuals land in the 18.1 update. That means one of the selling points of Apple’s new iPhones won’t be available at launch, and the secret sauce that comes closest to tying Apple Intelligence together, the way Gemini does on the Pixel 9, is still possibly a year out.
That’s not necessarily a catastrophe, but the timing could matter. Apple likes to take its time, but new iPhone 16 owners might not be patient when the average flagship Android phone is able to do some pretty big things their phone can’t.
It’s still early days for AI on smartphones
The jury is still out on whether Google’s new AI features are worthwhile or whether they work as well as they’re supposed to. The multiple errors during the live event suggest there could still be a fair number of rough edges to deal with, but I can’t deny I felt a little excited by what Google showed off. I’m primarily an iPhone user, but even Google demonstrating a sliver of the AI assistant dream it’s been pitching for years was invigorating. It might still be clunky, but at least it looks like something I can actually use.
I’m not sure how much of the slow release of Apple Intelligence is motivated by (completely justified) caution versus Apple being legitimately behind its competitors, but the fact of the matter is that, at least for the rest of 2024, the company is on its back foot. And with so many of the Pixel’s basic AI ideas being either similar to or more ambitious than Apple’s, that’s not the best place to be.