Where Einstein Meets Edison

Semantic Technologies Series: Interview with Will Hunsinger of Evri

Jul 22, 2010

I recently talked to Will Hunsinger, CEO of Evri, a discovery engine that delivers intelligent, real-time streams of information around popular topics, enabling users to share and engage. The company has just under 40 employees and is based in Seattle, with offices in San Francisco.

How does the recent acquisition of Twine/Radar Networks impact your strategy?

So we acquired Twine/Radar Networks [Editor’s Note: Twine let users collaborate online to create data collections, so-called twines, automatically categorizing and tagging content.] in March, and I had been focused for a while on articulating a consumer strategy. Up to that point, we had one foot on shore and one foot in the boat, in the sense that we had done some very large strategic deals with companies like Hearst, but in the background we were looking at what companies like Twine had done in the consumer space. So I would say the acquisition didn’t necessarily change our strategy as much as it validated our consumer strategy, saying this is the direction we are going to go. Obviously it was also a vote of confidence from our board that they approved an aggressive move like that for an earlier-stage startup [Note: Vulcan Capital is an investor in both Evri and Twine/Radar Networks].

What is your outlook for semantic search: Will we see semantic search as our preferred search method? Would you go as far as some people in saying current keyword search will die and we will entirely be guided by recommendations and results coming from our social network?

So the way we look at search is less about semantic technologies replacing keyword search. If you look at what Google brings to the table, or even Bing and others, the consumer doesn’t necessarily immediately recognize the higher value semantic search can bring vis-à-vis keyword search. Basically, keyword search is good enough. There are certain verticals where a degree of precision and semantic filtering might give you a much more precise and rich experience, for example Farecast by Microsoft, which applies semantic technologies to travel.

We look at it in an entirely different way. We try not to focus on semantic technologies or semantics as the end but as a means to an end; we focus on discovery. In search you have an answer in mind; discovery is the process of finding and delighting in something you did not know you were looking for. Can we create a better experience for the consumer by applying the technology where it actually has a distinct advantage over keyword search, e.g. delivering precise results around general topics like “movies” or “reality TV”, understanding meaning and context (e.g. why a particular entity is popular right now), or even enabling consumers to follow topics over time (we do the discovery for them)?

Given your experience in online advertising and search marketing, including roles at business.com and overture.com and running Gap.com’s e-commerce business: How will the recent developments in social search, in particular Facebook’s Open Graph protocol, impact publishers and e-commerce players?

I recently wrote a blog post on Evri about this; I called it Social Graph Optimization. I want to take credit for this term (laughing)… The incentive for publishers and e-commerce players is to increase traffic, because traffic converts into sales; the sheer volume of traffic Google drives provides a powerful incentive to optimize your pages to rank well.

When looking at Facebook’s Open Graph, one would be led to believe that publishers now have to annotate content. I see two challenges. First, [the Open Graph protocol] is not particularly well thought through; it becomes the world’s greatest invitation to semantic spam: I am going to annotate that my Iron Man 2 page is a movie in the category of action, but I am not going to point to IMDb or Wikipedia, the canonical representation [i.e. the representation all other references point to]; rather, I’m going to say that my representation on The Onion or Fandango is the canonical representation. That actually provides an incentive for spam similar to the time before we had Google. [The question is] how we can keep the index pure once folks understand the algorithm.

The second challenge is that it is all good if the incentive is data flows in, traffic flows out. However, Facebook has a long history of data flowing in but not a hell of a lot of data flowing out (or anything else for that matter). If you can’t see the data flowing out, if you can’t learn from how anybody else has tagged Iron Man 2, I’m not going to do it. If the traffic is not there, I am not going to go through the pain of annotating. So if Facebook thinks it through further and lets the data flow back out, first to partners or to a search engine, then you might start to get some competition and you might see broad adoption.
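[Editor’s Note: A minimal sketch of the annotation pattern being discussed. Open Graph data lives in `<meta property="og:...">` tags in a page’s head; the sample HTML below and the publisher URL in it are invented for illustration. Note how `og:url` lets the publisher declare its own page as the canonical representation rather than pointing to IMDb or Wikipedia, which is exactly the spam incentive described above.]

```python
# Extract Open Graph properties from a page using only the standard library.
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collects Open Graph properties from <meta property="og:..."> tags."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = attrs.get("content", "")

# A hypothetical publisher annotating Iron Man 2 and claiming its own
# page as og:url, rather than pointing to a canonical reference.
sample = """
<html><head>
<meta property="og:type" content="video.movie">
<meta property="og:title" content="Iron Man 2">
<meta property="og:url" content="http://www.example-tickets.com/ironman2">
</head><body></body></html>
"""

parser = OGParser()
parser.feed(sample)
print(parser.og["og:title"])  # Iron Man 2
print(parser.og["og:url"])    # the publisher's own URL, not IMDb/Wikipedia
```

Nothing in the protocol itself prevents every publisher from making the same canonical claim, which is why keeping an index built on these annotations “pure” falls to whoever consumes them.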

Do you see a way to fix it? Make the Open Graph protocol better?

Make it open! Just like they had Facebook’s FBML instead of XML, they seem to want to control it; make it open. If someone likes something, let us get the data back out, make it truly an open platform, and then we can start to see what does and what doesn’t work.

What would be your advice to start-ups in the semantic technologies space? What are ways to monetize linked data?

The two biggest lessons learned are: One, figure out what problem you are solving before you just start building a semantic search engine. What I see a tremendous amount of is hammers looking for nails, solutions in search of a problem, where it’s all about the technology. What does this technology do better than what’s out there, such that you are going to solve a real problem and there is going to be a market for it? There is a lot of “well, we’ve got the next, fastest NLP parser” or “we found a way to crowdsource ontologies”, and yet, so what? Two, semantic technologies require a lot of money. Prototyping is relatively straightforward; scale is expensive. That’s where you see a lot of semantic technologies companies going after enabling others, because hardware is expensive [Editor’s Note: storing and analyzing large datasets often requires significant investments in storage and processing power] and doing it at scale is more daunting. It’s not like it was years ago, when Natural Language Processing (NLP) wasn’t web-scalable. Still, anyone raising a $200,000 seed round and planning to do anything other than build a prototype is in for an interesting surprise.

Evri is backed by Paul Allen’s Vulcan Capital, an investment firm with a strong footprint in semantic technologies. Do you see many mainstream VCs exploring investment opportunities in semantic technologies?

You know, I do actually; I have seen a lot more activity in the last 12 months than I have seen in a long time. Sometimes I take a step back and say, “Well, is this just because I am attuned to it?”, but there has been more mainstream investment in companies based on semantic technologies. In 2008, we saw the acquisition of Powerset by Microsoft. You look at Ron Conway’s angel investments in semantic startups, at the real-time web screaming for filters, at a number of smaller investments, and then there is the recent Apple acquisition of Siri, a company building virtual personal assistant software based on semantic technologies. These transactions have validated that the technology is here and ready, but also that there is a path to liquidity. In the last 6 months I have also seen more interest in what we are doing than in the prior 6 months.