Artificial intelligence continues to play a critical role as technology improves, and Google isn’t slowing down in taking advantage of it. At its Search On event, Google revealed several features that will improve and enhance search across its platforms.
The annual global event, streamed online, reemphasized Google’s promise to keep delivering a seamless search experience to its many users around the world. These are features users will love and readily take advantage of. One of the major revelations was a new set of search capabilities powered by the Multitask Unified Model, or MUM for short.
MUM: A Brand New AI Breakthrough For Understanding Information
MUM can tackle complex tasks, using a text-to-text AI framework to understand information, and Google says it is 1,000 times more powerful than BERT (a model Google introduced to Search in 2019). MUM understands queries across many languages and develops a more comprehensive understanding of information and world knowledge than previous models Google has introduced to users. For now MUM understands information in text and images, but Google is looking to expand it into a more robust multimodal version in the near future that can also take on video and audio. MUM has a deep understanding of language and can transfer what it learns across languages and formats simultaneously, from webpages to pictures and much more.
According to Google, in the coming months users will be introduced to new ways to search visually, with the ability to even ask questions about the results they see. Here are a couple of possibilities with MUM. When you’re looking at a picture of a shirt, you can tap the Lens icon and ask Google to find the same pattern — but on another item of clothing, like socks. This will help users search for things that can be extremely difficult to describe accurately with words alone. By combining images and text into a single query, visual search becomes easier and results can be more specific.
Also announced by Google, using MUM’s advanced AI capabilities, is the “Things to know” addition, which brings a redesign to Search. Users can now explore new topics more easily with “Things to know,” which presents interesting information people are likely to want to learn about the topics they search for. Google’s example: if you are looking to decorate your apartment and are interested in learning more about creating acrylic paintings, searching for “acrylic painting” will show the aspects people typically explore first. Google says it can identify more than 350 topics related to acrylic painting and help you find the right path to take. In addition comes the ability to zoom in and out of topics, offering broader and deeper insight into the topic searched for and other related topics. All this and more are innovations Google is bringing to Search using its advanced AI model, MUM.
Google is also bringing Lens into Search: a button on the results page that makes all the images on a page searchable. Starting with iOS users, this feature will enable users to shop seamlessly through Lens mode, whether the images were found online or saved on their phones. The Lens feature is also coming to Chrome on desktop.
Google is also adding a handy online shopping experience users can have before heading out for the ultimate in-person shopping experience. Users can find products available at nearby local stores right from Search. Whatever product you are looking for, specific to a brand or not, you can select the “in stock” filter to see only the nearby stores that have it on their shelves. According to Google, this brand-new experience is powered by Google’s Shopping Graph, a comprehensive, real-time dataset of products, inventory, and merchants with more than 24 billion listings. Google says “this not only helps us connect shoppers with the right products for them, but it also helps millions of merchants and brands get discovered on Google every day.”
New Features In AI-Powered Google Maps
Google is introducing a new AI-powered feature to help in the face of climate change. Given the wildfire incidents of recent years, this feature, powered by satellite data, provides a new layer on Google Maps that brings together global information on wildfires. The layer gives users real-time information such as a fire’s size and location and the number of acres burned, which Google believes can help people make informed decisions in wildfire emergencies. It will also carry useful resources for such emergencies, e.g. evacuation details and the websites and phone numbers of local help desks. The feature rolls out on Android and iOS globally in October, starting with the US and Australia and reaching other countries in subsequent months.
Also among the Google Maps innovations is the AI-powered Tree Canopy Insights tool. According to Google, Tree Canopy Insights uses aerial imagery and advanced AI capabilities to identify places in a city at the greatest risk of rapidly rising temperatures. This will go a long way toward providing free insights about where to plant trees within a community in order to increase shade and reduce heat. Google promises to expand the feature to over 100 cities around the globe; Guadalajara, London, Sydney, and Toronto are on the list for the first half of 2022.
Rounding out the Google Maps additions is a new Address Maker app. Google is introducing it to help communities, NGOs, and governments provide proper addresses to people around the world who need them. Google has identified that billions of people globally still don’t have a proper address, which makes it difficult to do things as simple as voting or opening a bank account. With the new free Address Maker app, these organizations can now create unique, functioning addresses using Google’s open-source system called Plus Codes.
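For a sense of how Plus Codes work under the hood, here is a minimal sketch of the published Open Location Code scheme: latitude and longitude are shifted into positive ranges, scaled to a 1/8000-degree grid, and written as interleaved base-20 digits with a “+” after the first eight. The function name `encode_plus_code` is our own; this is an illustrative simplification (it skips the edge clamping near the poles that Google’s official libraries handle), not Google’s reference implementation.

```python
# Minimal sketch of Plus Code (Open Location Code) encoding.
ALPHABET = "23456789CFGHJMPQRVWX"  # base-20 digit set; ambiguous characters excluded

def encode_plus_code(lat, lng):
    """Encode a lat/lng pair as a standard 10-digit Plus Code (illustrative only)."""
    # Shift coordinates into positive ranges and scale to the finest grid
    # (1/8000 of a degree); latitudes at exactly +90 are not clamped here.
    lat_val = int((lat + 90.0) * 8000)
    lng_val = int((lng + 180.0) * 8000)
    digits = []
    for _ in range(5):  # five latitude/longitude digit pairs, least significant first
        digits.append(ALPHABET[lng_val % 20])
        digits.append(ALPHABET[lat_val % 20])
        lat_val //= 20
        lng_val //= 20
    code = "".join(reversed(digits))
    return code[:8] + "+" + code[8:]  # "+" separates the 8-digit area code

print(encode_plus_code(47.365590, 8.524997))  # → 8FVC9G8F+6X (Google Zurich)
```

Because a code is derived purely from coordinates, any building — even one on an unnamed street — gets a short, shareable address, which is what makes the system useful to the communities Address Maker targets.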