Taking note of the online shift from text-based content to image- and video-based content, Google is working to improve its search engine's ability to reflect that shift. Using machine learning, Google Lens can analyze images, whether saved in the device's memory or visible through its camera, and complete tasks based on what it sees.
For example, Google Lens can:
- Identify a species of flower that the camera is focused on.
- Connect to a Wi-Fi network just by viewing the sticker on the router that lists the network name and password.
- Translate text the camera is pointed at into a different language.
- Provide information and reviews for local restaurants, stores, and other establishments the phone is pointed at.
At launch, Google Lens will be able to interact with both Google Assistant and Google Photos. Through Google Assistant, you'll be able to add an event to your calendar just by pointing your camera at an information board. Through Google Photos, you'll be able to pull details, like a business's opening and closing hours, from images you've already saved. If you happen to have a screenshot of someone's business card, you can call them directly from the image.
More Google apps will follow these two, providing users with even more functionality.
So, what do you think? Do you see these features making your day-to-day business tasks and responsibilities easier? Which Google app do you most look forward to Google Lens augmenting? Let us know in the comments!