Google’s new AI search features are game-changing for mobile users (iOS included)

Android figure at Google CES 2024 (Image: June Wan/ZDNET)

Google is trying to use its dominance in the search engine space to advance in the artificial intelligence (AI) race. The company’s latest effort includes two new AI-enabled search features — and you can even start using one now, regardless of the device you use. 

On Wednesday, Google unveiled an improved multisearch feature in Lens, which allows you to add word-based queries to a photo and receive AI-powered insights instead of just visual matches. 

Also: Everything announced at Samsung’s Unpacked event

Google showcased the feature in a demo in which it uploaded a screenshot of an amethyst and asked, "Why is it purple?" The result was an AI-generated insight that answered the question based solely on the context of the photo.

The AI-powered overviews on multisearch results will be launched this week in English in the US. The best part of the launch is that, since it’s not part of Google’s Search Generative Experience, you don’t have to go through Search Labs to access the feature. 

To get started, all you have to do is tap the Lens camera icon in the Google app for Android or iOS, snap or choose a photo, add a text question, and then you’ll get a generative AI response. 

Also: 5 exciting Android features Google just announced at CES 2024

Microsoft Copilot has accepted multimodal search inputs in its chatbot for months, so if you want similar capabilities but prefer Microsoft or Bing, Copilot is a solid alternative.

Google is also unveiling Circle to Search, a feature that lets users search for anything on their Android screen by circling, highlighting, scribbling on, or tapping it; the search results then pop up right on the same screen.

For example, if you're scrolling through TikTok and spot an accessory you like in an influencer's video, all you have to do is circle it, and Google will surface similar products.

This feature, however, is only available on select premium Android smartphones, namely the Pixel 8, the Pixel 8 Pro, and the new Galaxy S24 series, starting on January 31. To get started, long-press the navigation bar or the home button on one of those phones.

Also: Apple is reportedly eyeing generative AI push for the iPhone

The newly unveiled Galaxy S24 lineup will also lean heavily on generative AI, leveraging Imagen 2, Google's most advanced text-to-image diffusion model, and Gemini, Google's most capable foundation model, to power apps and services built by Samsung.

For example, using Gemini Pro, Samsung's Notes, Voice Recorder, and Keyboard apps on the Galaxy S24 series will offer better summarization features, according to Google. Gemini Nano, a lighter version of the model, will power a new feature in Google Messages designed for stronger data protection. Imagen 2 will support intuitive photo editing in Generative Edit.

Lastly, Galaxy S24 users will have an improved Android Auto experience. The feature will use AI to automatically summarize long texts or busy group chats while you’re driving and suggest relevant replies. Android Auto will also soon mirror elements of your phone, such as wallpaper and icons, to provide a more seamless transition. 
