When Google crawls your website, it puts less value on what you have to say about your content (i.e. metadata), and more on what the content itself is saying and who is talking about it. Makes sense. If our own assessment of the content of our lives were a true indicator of its worth, we’d all be dating Erin Andrews or Brad Pitt.
So why then don’t search engines measure the content of a video? Although title, description and tags might point us in the right direction, a good deal of information is lost in translation.
The same way Google added value to the Internet ecosystem by better indexing and searching text-based content, DC-based Veenome is pushing the envelope with video content. Beyond simply categorizing and matching video content to other relevant online media lies the advertising revenue that would undoubtedly accompany it. By uncovering all the brands, products, and other relevant categorical information that has remained hidden to date, Veenome is building an advertising ecosystem.
Clearly, the challenge in indexing video lies in the technology required to detect on-screen information. The first company to solve this problem will have effectively added a new dimension to this ever-expanding medium. Veenome’s CEO, Kevin Lenane, believes he’s on his way to winning this race.
I caught up with Lenane to learn more. Be sure to read all the way through the interview as we demo the Veenome technology on some of our favorite Tech Cocktail videos at the bottom of the post!
Tech Cocktail: Can you speak a little to the technology behind Veenome’s ability to recognize objects in videos? Is it only visual, or can the software detect the content based upon audio as well? If not, is that a goal of the future?
Kevin Lenane: Right now it’s based on taking a select group of frames and performing image recognition on them. We then perform some heavy language processing (combining tags like car, silver car, and Mercedes car into one consistent tag between camera cuts). We then have a set of system rules that add brands to common items and then a set of account rules that the publisher can create that apply to all their videos.
The result is a scalable, detailed set of tags that equate to the “stuff” in the video. This can be provided as a clickable web product (as you will see below) or as an API data product. In the future we’ll incorporate social data on products, allow for wiki style editing of tags and incorporate object-proximal data from the audio track.
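To make the tag-consolidation step Lenane describes more concrete, here is a minimal sketch of how frame-level tags like “car,” “silver car,” and “Mercedes car” might be merged into one consistent tag across camera cuts. All function names and the grouping heuristic (group by head noun, keep the most specific variant) are our own illustration, not Veenome’s actual implementation.

```python
from collections import Counter, defaultdict

def consolidate_tags(frame_tags):
    """Merge related tags detected on sampled frames into one
    consistent tag per object (hypothetical heuristic).

    frame_tags: list of tag lists, one list per sampled frame.
    Returns a dict mapping the most specific tag in each group
    to the total number of detections for that group.
    """
    # Count how often each raw tag appears across sampled frames
    counts = Counter(tag for tags in frame_tags for tag in tags)

    # Group tags that share a head noun, e.g. "car", "silver car",
    # and "mercedes car" all end in "car"
    groups = defaultdict(list)
    for tag in counts:
        groups[tag.split()[-1]].append(tag)

    merged = {}
    for head, tags in groups.items():
        # Keep the most specific tag (most words, then longest string)
        canonical = max(tags, key=lambda t: (len(t.split()), len(t)))
        merged[canonical] = sum(counts[t] for t in tags)
    return merged

# Tags detected on three sampled frames of the same scene
frames = [["car"], ["silver car"], ["mercedes car", "car"]]
print(consolidate_tags(frames))  # → {'mercedes car': 4}
```

A real system would also need the brand rules Lenane mentions (mapping common items to brands) and per-account publisher rules layered on top of a step like this.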
Tech Cocktail: You list three types of customers on the website: content providers/publishers, marketers and advertisers, and video producers. Which of these three customers do you see having the most potential moving forward and why?
Lenane: It’s hard to say which will have the most potential. Right now we are seeing a lot of traction amongst large video providers (publishers and major television networks) with our data product. There are several reasons for this. One, large video providers have a need to organize and index their videos for search. Data is valuable in these cases where it can be created automatically and used to promote discovery and organization. Two, large video providers typically have many existing advertising relationships and they can use our data to better target their existing inventory by subject matter without having to manage a separate revenue source. Three, many large video providers and particularly television networks have “premium” content and they aren’t necessarily comfortable letting us build a “clickable” UI that would sit over a video. They may build their own UI using our data or opt to use the data for one of the two aforementioned uses.
Tech Cocktail: Has Veenome received funding and/or are you pursuing funding moving forward?
Lenane: We raised a $500K seed round at the end of November from private Google investors, Ecosystem VC, Dingman Center Angels, Tim Drees (founder of Web Metrics), and Aayush Phumbhra (founder of Chegg). We’ll be raising a larger Series A round in September of this year.