People across the virtual world are increasingly using visuals to share their emotions. However, brands continue to measure & listen to conversations that are shared in text form. In 2015, 3.3 billion photos were shared daily across Facebook-owned properties and Snapchat combined – up from just under 2 billion in 2014. Visual listening is possibly the next frontier for marketers, considering it gives brands the ability to measure data beyond text form.
A vast majority of images published on social platforms lack any identifying text or hashtags. In fact, 80% of posts published on Facebook don’t have any associated text. Brands are using new-age platforms with image recognition capabilities to identify pictures that contain their logo, product, or service. Some of the world’s leading platforms are trying to solve this problem using computer vision, machine learning & neural networks. Visual listening will enable brands to gather more holistic data encompassing text, videos, and images.
Beyond social, brands are also leveraging visual intelligence to capture and process sponsor exposures. For instance, a brand like Vivo can measure the number of exposures it received on TV and assign a media value to the sponsorship.
How Does Visual Listening Work?
So how does a visual listening platform identify your logo among the millions of images shared daily? Visual listening platforms use interest point matching algorithms that identify distinctive points on an object, enabling computers to detect logos even when they are shown only partially or at an angle (on a coffee cup, for example). It turns out that most of us have already used an interest point matching algorithm on our mobile phones when creating panoramic images. To create a panorama, the system first has to match overlapping portions of the images. To match those regions, it looks for interest points – small image regions that are unusual and therefore easy to recognize. That sounds easy, but the system also has to be robust to changes in viewpoint and significant changes in illumination, and do all of this in real time.
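To make the detect–describe–match pipeline concrete, here is a toy sketch in Python. It is only an illustration of the idea, not a production detector: real systems use robust features such as SIFT, and the variance-based "interest point" test, the 3x3 patch descriptor, and the synthetic images are all simplifying assumptions.

```python
def variance(patch):
    """Intensity variance of a flat list of pixel values."""
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

def patch_at(img, y, x):
    """Flatten the 3x3 neighborhood around (y, x) into a descriptor."""
    return [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def interest_points(img, threshold=100.0):
    """Keep pixels whose neighborhood is 'unusual' (high variance)."""
    h, w = len(img), len(img[0])
    return [(y, x) for y in range(1, h - 1) for x in range(1, w - 1)
            if variance(patch_at(img, y, x)) > threshold]

def match(img_a, img_b):
    """Pair each interest point in img_a with the point in img_b whose
    descriptor is most similar (smallest sum of squared differences)."""
    pts_b = interest_points(img_b)
    pairs = []
    for pa in interest_points(img_a):
        da = patch_at(img_a, *pa)
        best = min(pts_b, key=lambda pb: sum(
            (a - b) ** 2 for a, b in zip(da, patch_at(img_b, *pb))))
        pairs.append((pa, best))
    return pairs
```

Given two images where one is a shifted copy of the other, the matched pairs recover the shift – which is exactly the information a panorama stitcher (or a logo detector) builds on.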
This technology can also be used for object recognition, 3D reconstruction, and motion tracking. The initial research on using computer vision to detect & describe local features in an image was done in 1999 by David Lowe, a professor at the University of British Columbia, but the technology has advanced considerably of late.
From Visual Listening to Intelligence
Most social platforms today rely on semantic analysis, a process that allows them to analyze the context of words. Visual listening bridges the gap by combining image recognition and analytics, enabling brands to gather valuable insights from both text and images.
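As a hedged sketch of what "combining image recognition and analytics" could look like in practice, the snippet below merges tags returned by a (hypothetical) image-recognition model with a deliberately crude keyword-based text sentiment score, so that a post with no caption still yields brand data. The tag name `acme_logo`, the post structure, and the keyword lists are all illustrative assumptions, not a real API.

```python
POSITIVE = {"love", "great", "awesome"}
NEGATIVE = {"hate", "broken", "awful"}

def text_sentiment(caption):
    """Crude keyword-based sentiment: positive, negative, or neutral."""
    words = caption.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def enrich_mention(post, image_tags):
    """Merge text analysis with tags an image-recognition model returned."""
    return {
        "post_id": post["id"],
        "sentiment": text_sentiment(post.get("caption", "")),
        "brand_visible": "acme_logo" in image_tags,  # hypothetical brand tag
        "objects": sorted(image_tags),
    }
```

A real platform would replace the keyword scorer with a trained sentiment model and the tag set with the output of a logo-detection pipeline, but the enriched record is the essential idea: one row per post, covering both what was said and what was shown.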
So, how can Visual Listening help brands?
Visual Mentions & Analytics
With visual listening, you can move beyond share of voice to measure share of eye. Brands can track visual mentions and the affinity of the audience sharing the images, as well as the earned media generated from user-generated images. Monitoring this data will enable them to plan event sponsorships and influencer marketing effectively.
Using photos, brands can validate when, where, and how people are using their products. Understanding these micro-moments will enable brands to develop new campaign opportunities. For instance, an airline can extend special offers or reward programs to frequent travelers posting selfies from the flight or during the journey.
28% of millennials watch sponsored in-feed advertisements to see if they are something that might interest them, and nearly 20% go on to make a purchase. Images also remove the global language barrier in analyzing consumer trends, offering deeper insights into audience interests.
Sentiment of Images
Visual listening enables brands to perform sentiment analysis on images. Pabst Blue Ribbon (PBR), an American brewery, used visual listening to uncover that beer drinkers love to toast their favorite brew with their dogs. PBR used this insight to craft a social media campaign geared towards beer drinkers and their pets.
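An insight like PBR's can surface from a simple co-occurrence tally: count the objects an image-recognition model detects in the same photos as the brand's logo, then rank them. The tag names (`pbr_logo`, `dog`, and so on) and the sample posts below are hypothetical stand-ins for model output, not PBR's actual data.

```python
from collections import Counter

def co_occurring_objects(posts, logo_tag="pbr_logo"):
    """Count objects detected alongside the brand logo across posts."""
    counts = Counter()
    for tags in posts:
        if logo_tag in tags:
            counts.update(t for t in tags if t != logo_tag)
    return counts

posts = [
    {"pbr_logo", "dog", "bar"},
    {"pbr_logo", "dog", "beach"},
    {"coffee_cup", "laptop"},  # no logo detected: ignored
]
print(co_occurring_objects(posts).most_common(1))  # [('dog', 2)]
```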
Visual listening is getting better with advances in computer vision, which can also be powered by neural networks to enable self-learning capabilities and improve accuracy over time. This technology is about a lot more than identifying an image; it’s about helping brands understand how they are represented across the visual network.