This story was updated on February 19th. Update below.
The excitement around Google’s Gemini AI making its way to Android is almost exclusively based on its sexier abilities, but it’s the boring stuff that has really piqued my interest.
Or, at least, the potential for the AI to do boring stuff. I asked Gemini to find emails from British Airways in Gmail. Not only did it find the messages, but it categorized them (T&Cs updates, upcoming trips, account and privacy) and summarized relevant emails so I could quickly find the information I needed. To me, this was more impressive than anything else I have seen Google demonstrate.
I asked Gemini, on my Pixel 8, to find a burger place to eat at with a couple of friends in Tottenham Court Road on Friday. I also wanted it to recommend a few pubs, bars and maybe a late-night spot to visit after the meal. It did both, and, since I know the area well, I can say the recommendations were solid. It saved me some planning time.
The pedestrian uses of AI are more exciting to me than magicking up creepy images, writing code or scanning images for things to buy. Smartphones have long suffered from a feature bloat problem. There's simply too much to do, and trying to understand it all, or even make use of most of it, feels overwhelming. People have more important things to do with their lives than take an undergraduate degree in using their phone efficiently.
I hope that somewhere in Google’s Gemini roadmap there’s a plan for further integrating the AI into a smartphone’s basic functionality. I want to be able to ask the chatbot to turn on my mobile hotspot for an hour, create a shared folder in Photos and add my last 10 pictures to it.
Imagine Gemini creating a video demonstration of all the new features your phone gained in its latest update. I want to do these things in an instant, with a single typed sentence, on the go and in the moment, which is the point of mobile computing.
This technology should replace crawling through Reddit, blogs and help guides when all I want to do is find one setting. The good news is that based on what we have seen already, there’s a more than decent chance this is where chatbots like Gemini and Galaxy AI are headed.
But the technology still has some way to go. I asked Gemini, on my Pixel 8, to turn dark mode on. It couldn’t do it. I asked Gemini to book an Uber, but it refused. I asked Gemini to find an old shopping list in Google Keep, but it said it doesn’t have that functionality yet.
I gave the AI some complex tasks, like composing an email requesting a refund for something I bought recently, which it did by scanning my Gmail and finding all of the relevant information, including the order number. I then asked it to send that email and handle the refund process. It obviously can't do that… yet.
The thought of a future Gemini iteration doing so is tantalizing, though. I imagine we will see third-party integrations at some point, because your personal chatbot handling 90% of the refund process in the background, with only your initial request as input, is too big a selling point not to happen.
As Google merges Gemini and Assistant, there's also the prospect of improving on the functions that never really worked in the company's Nest smart home range. You could ask Gemini, in a sentence, to ensure your heating and lights turn on at 6 p.m. every day. All of the unused skills that Google is deleting from Assistant, which some users said they'd never heard of until the company announced they were being axed, would become instantly available.
Users don't have to ask to use a specific skill; instead, they can ask Gemini to complete a particular task, and it will use whatever skills it has at its disposal. We're seeing that in action now: if you ask Gemini to do something that involves Maps or Workspace, a small logo for those services appears as the AI thinks.
While image creation, picture editing and chatting with AI will dominate headlines, the real smartphone revolution will be much more pedestrian: unlocking all of the hidden features of your phone and, crucially, saving you valuable time.
Update February 19th: Google could be building a new Chromebook with a built-in Assistant hardware key, according to a new report by Chrome Unboxed. The site discovered a file in the Chromium Repositories that references an Assistant key for a yet-to-be-released Chromebook codenamed “Xol.”
Little else is known about the device, other than the fact that development started on January 3rd of this year. It's also not clear if this is a Google-made laptop, but it is worth noting that we have only seen dedicated Assistant keys on Google's Pixelbook range. Google stopped making its own Chromebooks in 2022 and dissolved the team behind them, but with the emergence of Gemini there's a potentially interesting future for the Chrome OS laptop range.
As Gemini improves, so will Chromebooks that bake the AI into the operating system. Chromebooks run Chrome OS, a simplified operating system that relies largely on cloud software: almost everything happens in the browser and through Google's apps. That makes these laptops well suited to carrying out Gemini tasks, like video and image creation, picture editing and other resource-heavy jobs that aren't typically possible on a Chromebook's own hardware.
The latest update to Gemini, 1.5, has also massively expanded the context window, which means it can handle bigger queries and more information at once. In a blog post announcing the update, the company explained that this equates to an hour of video, 30,000 lines of code or over 700,000 words. That would open up a potential new, Gemini-powered Chromebook to a lot of serious business and productivity tasks.