Google has recently introduced a new feature, 'Multisearch,' to augment its visual search tool, Google Lens. Multisearch lets users refine their visual search queries: by combining an image with text parameters, they can get more specific, elaborated search results.
In its blog post, Google describes a shopping use case: a user can take a screenshot of an item they searched for and add a text question to find variants of that item.
For example, if a search returns a piece of apparel in orange, the user can point Google Lens at that result and type a query to find the green variant of the same item (as shown in the blog image).
Although the feature currently targets shopping-related queries, its possibilities extend well beyond that. Users often lack the right keywords to describe what they are looking at; combining visuals with text removes that obstacle. For example, one can scan a plant with Google Lens and type a question to learn how to care for it.
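Google has not published how Multisearch fuses the two inputs, but the underlying idea of image-plus-text retrieval can be illustrated with open tools. The sketch below uses the open-source CLIP model (via the sentence-transformers library) to embed a hypothetical screenshot and a text refinement into the same vector space and average them into a single query vector; the file name, the text, and the product index are all assumed for illustration, and this is not Google's implementation.

```python
# Illustrative sketch of image+text query fusion; not Google's Multisearch internals.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

# CLIP maps images and text into a shared embedding space.
model = SentenceTransformer("clip-ViT-B-32")

# Hypothetical inputs: a screenshot of an orange dress and a text refinement.
image_emb = model.encode(Image.open("orange_dress_screenshot.png"))
text_emb = model.encode("same style in green")

# Naive fusion: average the two embeddings and normalise the result.
query_emb = (image_emb + text_emb) / 2
query_emb = query_emb / np.linalg.norm(query_emb)

# A real system would rank a catalogue of precomputed product embeddings
# by cosine similarity against query_emb, e.g.:
# scores = product_embeddings @ query_emb
```

Averaging embeddings is only one simple fusion strategy; production systems typically use dedicated multimodal models (such as the MUM model Google mentions below) rather than this kind of post-hoc combination.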
Visual search may play a significantly larger role in the future, once augmented reality glasses and other visual technologies become widely available. As consumers grow accustomed to capturing and referencing the world through their glasses, the ability to incorporate those same visuals into search could become a far more significant discovery feature.
The feature is currently in beta and available only in the US.
Google credits recent advances in AI for the feature, writing in its blog post:
"All this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways. We're also exploring ways in which this feature might be enhanced by MUM– our latest AI model in Search– to improve results for all the questions you could imagine asking."