A year after Lens debuted, Pinterest now records more than 600 million visual searches each month.
A year after its debut, Pinterest’s Lens feature has become so capable of parsing what an image contains and what a person is searching for that the company will now use it to support text-based searches.
Starting next week, people will be able to attach images to text search queries on Pinterest so that Lens can help find what they are looking for, the company announced on Thursday. The new option will first roll out to Pinterest’s iOS app and will eventually make its way to the Android version.
The idea is that the images will serve as an additional parameter for a search to better mimic how people might seek things out in the real world. Consider how you might walk into a furniture store looking for a living room rug and show the salesperson a photo of your couch and coffee table to help pinpoint a match. Or how you might be at the grocery store shopping for salsa ingredients, see an odd-but-inviting type of pepper and ask an employee what other salsa ingredients it would complement. Now you’ll be able to put those questions to Pinterest.
The combination of visual and text search should also help Pinterest refine its visual search results. Text queries can augment the computer vision system’s understanding of what an image contains, and can establish new relationships between the objects the technology already recognizes and other things or uses it may not yet be aware of.
Of course, Pinterest’s ability to parse images is already improving as its volume of visual searches grows. Every month, people conduct more than 600 million visual searches using Lens, Pinterest’s image-parsing browser extensions and its visual search feature within pins. As a result, Pinterest’s computer vision technology can recognize more than five times as many items as it could a year ago, including recipe ingredients and clothing styles.