When Does Having an API Product Mean You Have a Solid Business Model? The WebKnox Example

Mark Boyd
Oct. 14 2013, 08:00AM EDT

If you have created a powerful API-as-product, do you automatically have a business model you can monetize? German startup WebKnox aims to answer this question and is worth following to see how it turns a powerful data product into an investable commodity. ProgrammableWeb caught up with Dr. David Urbansky at the recent API World conference to talk about the very early days of this new API enterprise and to discuss the road to success that may lie ahead.

While there is a growing focus in the API economy on in-house developers working on integration projects within the enterprise, there are still opportunities for entrepreneurs with a big data mindset to create a technologically savvy product and market it successfully. At the recent API World, Dr. David Urbansky, CEO of WebKnox, sought out the experience of those who have done so successfully and connected with other businesses (like Algolia) that have built a business model around API-as-product.

Urbansky is counting on there still being plenty of opportunity for the long-tail developer to build a successful business model in the maturing API economy. He attended API World to build industry contacts and to see what new opportunities might lie ahead.

WebKnox has a catalog of 11 APIs, including natural language and question-answering services.

"We have a natural language API with a concentration on text analysis, and a question/answering API," Urbansky said. He likens their approach to Alchemy API, which offers a similar, machine-learning, text analysis API.

Urbansky believes the power of their tech is in the proficiency of their algorithms: "If you fairly compare our results with other machine learning APIs, we are able to show the accuracy of our products and that we have the best algorithms.

"We have a semantic recipe search, for example, which really understands something like dairy-free or no-sugar brownies. In other search engines, if you search for no-sugar brownies, the top result will be 'sugar-coated brownies', for example."

Knowing what to monetize

WebKnox faces a tough dilemma. It has 11 APIs, each with a high level of functionality and the opportunity to grow into a business in its own right. But which one should it focus on first? Urbansky is backing the startup's recipe API as the initial commercial product.

"For example, with our recipe API you can ask questions like how much vitamin C is in 2 apples, or how much protein is in a cup of milk."

So far, the recipe API is used by Spoonacular, and Urbansky believes there is potential for food bloggers and food websites to incorporate the API into their content delivery.

"We definitely want to shift our focus on picking some of our APIs to really market them to end customer segments, in this example, food bloggers." Urbansky recognizes the difficulty in strategizing entry into new industries, and has a marketing manager coming onboard to help map out the potential.

If successful in encouraging uptake, Urbansky already has eyes on some ways to monetize the data service. "So we haven't tested this yet, but we have product images for every ingredient, so technically we could have ad spaces for which brand ingredient is returned." In this scenario, the idea would be that if a ketchup producer, for example, wanted their brand displayed in product image queries returned by the API, they could bid against other ketchup manufacturers to be the chosen "image representation" for that product.
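
A short sketch of how such a bidding scheme could work appears below; the data structures and bid values are invented for illustration and reflect nothing WebKnox has actually built.

    # Pick the brand image for an ingredient based on standing bids.
    DEFAULT_IMAGES = {"ketchup": "images/generic-ketchup.png"}

    # Hypothetical standing bids per ingredient, in cents per thousand impressions.
    BIDS = {
        "ketchup": [
            {"brand": "Brand A", "image": "images/brand-a-ketchup.png", "cpm_cents": 120},
            {"brand": "Brand B", "image": "images/brand-b-ketchup.png", "cpm_cents": 95},
        ],
    }

    def image_for(ingredient: str) -> str:
        """Return the highest bidder's image, or a generic image if nobody bid."""
        bids = BIDS.get(ingredient, [])
        if not bids:
            return DEFAULT_IMAGES.get(ingredient, "images/placeholder.png")
        winner = max(bids, key=lambda b: b["cpm_cents"])
        return winner["image"]

    print(image_for("ketchup"))  # Brand A wins the slot at 120 cents CPM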

Knowing how to scale

For WebKnox to succeed, it is important to know what data is out there that can be sourced as potential data streams for the API. This is what helped Food Genius scale so well: according to Derrick Harris at GigaOM, the company sourced GrubHub's takeaway menu database to quickly populate a database of individual meals for use in its data product.

"For the recipe API, we have the USDA API for core data, and we add a ton of data, and we either buy or get a free source of data and we aggregate and enhance the data ourselves," said Urbanksy.

"For example, we have students manually creating the datasets. We tried Mechanical Turk but we weren't very happy with that. We have to handpick the students if we want real quality. To get the data for the students, we use some webscraping tools and scripts,and in Webknox we have access to Palladian for machine learning and webscraping, etc."

At some point, if it continues to grow, we imagine it will become too onerous for WebKnox to keep sourcing and cleaning data itself. Other web scraping services, including ScraperWiki, have tried to scale data-service products and have ended up returning to a consultancy model given the difficulties of scaling.

Urbansky is hoping that at some point, end users of apps built with the WebKnox APIs will help build the data as much as draw from it: "We have a Windows Phone app already out, and users can create recipes, so users can feed their recipes back into our recipe database so it is constantly growing."

Lessons for other API-as-product startups

Entrepreneur-developers are encouraged to keep an eye on WebKnox as it evolves over the coming 6-12 months. From speaking with Urbansky, there is no doubt that the API technology is quite advanced and gives end users powerful, accurate results from a natural language dataset. How this translates into a viable business product over the coming months will be a fascinating story to watch unfold.

Mark Boyd is a ProgrammableWeb writer covering breaking news, API business strategies and models, open data, and smart cities. He can be contacted via email, on Twitter, or on Google+.

Comments

aidan:

Mark, when you say ScraperWiki "ended up returning to a consultancy model given the difficulties of scaling," this is not quite accurate. We have from the beginning provided consultancy services; it's one of the main ways we have gained customer insights.

Mark Boyd: Thanks, Aidan. I spoke with Francis Irving, CEO of ScraperWiki, and we talked about how the original ScraperWiki Classic didn't scale, which is where that mention comes from. He has also talked about that in his article for the Open Knowledge Foundation, which is a key read for anyone interested in business model development, especially around open data: http://blog.okfn.org/2013/07/18/9-models-to-scale-open-data-past-present...