Google has pushed a new update that enhances the AI-based capabilities across several of its apps. One of the biggest additions is the AI-enhanced Google Lens's ability to identify skin conditions. Google's recent innovations in its AI portfolio, particularly with Lens and Bard, reaffirm the tech giant's ongoing commitment to generative AI technologies. With advanced features like skin condition identification and Live View, Google Lens emerges as a potent tool for a wide array of daily activities, from shopping to health awareness. The company announced the new features in a Keyword blog post.

Identify skin conditions with Google Lens


Users can capture or upload an image through the Lens app, and the technology searches for visual matches across an array of skin ailments. This streamlines the process of recognising skin complications, offering users potential matches for conditions such as moles, sun spots, and rashes, as well as related issues like lip anomalies, nail streaks, and hair loss.

It's important to emphasise that Lens's results serve informational purposes only and do not constitute a medical diagnosis. Before taking any medical action, users are advised to seek professional medical counsel. While not as advanced as Google's AI-driven diagnostic app available in the European Union, Lens still gives users a basic sense of potential skin complications.

The system has undergone rigorous testing, recording an 84 per cent success rate in identifying different ailments. It has secured approval in Europe but has yet to be evaluated by the FDA. A common critique, however, is its reduced accuracy for individuals with darker skin tones, owing to their underrepresentation in the image databases used for training. Google says it is working to address this by diversifying those databases.

A symbiosis of Google Lens and Bard

Google has not only updated Lens's capabilities but has also integrated it with Bard, Google's AI chatbot. Users can insert images into conversations with Bard, which can now identify brands or provide fashion advice. This collaboration enriches the user experience, as Bard leverages Lens's capabilities to interpret images and respond accurately.

Live View: Google Lens breaks new ground

Adding to the suite of advancements in Google Lens is the debut of Live View. This AI-facilitated feature lets users identify objects and text in their real-world environment through augmented reality. Overlay data about objects or text appears on the display, including specifics such as name, category, and typical uses. For example, pointing the camera at real-world text can produce a live translation directly on the smartphone screen.

Live View is presently available in English on a selection of Android devices, and Google plans to extend support to more languages and devices in due course. This progress could make Google Lens an even more resourceful tool for shopping, education, and navigation.