How to Build Wolfram Alpha-Style Computational APIs

The entire API economy, and arguably the entire connected world, is being pulled in the same direction by two movements: the push to automate more and the push to create a better user experience. The computational API, one that performs calculations beyond simple database interactions and turns raw data into computational knowledge, has arisen as one possible answer to both. Through computation and semantic programming, these APIs allow other APIs to be created faster, improve usability and availability, and make data more accessible to both the automation and the UX camps.

“You can’t underestimate the barriers of learning and using new technology,” said Jon McLoone, director of technical communication and strategy at Wolfram, in his APIcon talk. Wolfram|Alpha is a computational knowledge engine that looks to turn all the world’s knowledge into machine-readable, computational knowledge.

“You can never underestimate how much you have to make life easier for people for them to embrace and adopt,” McLoone continued, explaining how even something as seemingly insignificant as square brackets in a programming language can become a barrier to adoption of your application programming interface. Computation and semantic programming help overcome that barrier by creating common interpretations in order to make sense, and use, of data.

This post looks to summarize the benefits of using computational APIs to save time, money, and manpower. Watch the video at the end of this piece to see McLoone’s demo of how to do it all with Wolfram.

Computation Turns Data into Information and then Knowledge

APIs tend to be either computational, actually working something out, or simple database interactions, reading something from a database or changing a status in one. Once you do something with that information, it can go even further and become knowledge.

“Computational knowledge is much more valuable than just the dead information,” McLoone said.

He gave the example of a NASA API that updates once every two hours, roughly the time it takes the International Space Station to circle the Earth. On its own, that would render the data the API returns, relatively speaking, useless. But “it’s using computation that actually makes the answer useful,” McLoone explained, because computation can give more intuitive information, like where the ISS will be in 45 minutes.
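
To make the idea concrete, here is a minimal Python sketch of that kind of computation layered on top of a position feed. It is not the NASA API or McLoone’s Wolfram Language code; the constants and the dead-reckoning formula are simplifying assumptions that ignore the station’s orbital inclination.

    # Crude dead reckoning: extrapolate the ISS ground track forward from its
    # last reported longitude. A real prediction would use full orbital
    # elements; this ignores the 51.6-degree inclination entirely.
    ORBITAL_PERIOD_MIN = 92.9                      # approximate ISS orbital period
    EARTH_ROTATION_DEG_PER_MIN = 360 / (24 * 60)   # Earth's rotation rate

    def predict_longitude(last_longitude_deg: float, minutes_ahead: float) -> float:
        """Longitude of the sub-satellite point minutes_ahead after the last fix."""
        orbital_rate = 360 / ORBITAL_PERIOD_MIN    # degrees of orbit per minute
        drift = (orbital_rate - EARTH_ROTATION_DEG_PER_MIN) * minutes_ahead
        return (last_longitude_deg + drift + 180) % 360 - 180

    # Where will the ground track be 45 minutes after the last reported position?
    print(predict_longitude(-30.0, 45))

The point is not the arithmetic itself, but that a stale raw reading becomes an answer to the question a user actually asks.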

He also gave the example of how this level of computation let satellite navigation beat out the atlas. Most of the time we already know where we are; satellite navigation won out by saying, “Given where you are and where you want to go, turn left.” It’s the computation layered on top of that information that lets us, both literally and figuratively, go farther.

McLoone presented the three key ways computation improves upon your base knowledge:

  • automation - He illustrated this one by classifying static data, like age, sex, and boarding class, to determine whether you would have survived the Titanic. Here computation is used not only to create the algorithm but to choose the best one, because there are simply too many choices, and not enough human knowledge, to pick the best-suited tools and algorithms by hand (a rough sketch of the idea follows this list).
  • interactivity - Next, you need an interface that allows you to interact with this data. For this he combined automation and symbolic computation to create a symbolic representation of the interface, which can then be rendered by the computer.
  • integration - Finally, you need to combine all of these routines, including datasets that contain images, social relationships and numerical data, so that the tools all work together. Here he solved a maze by eliminating dead ends and turning it into a topographical map, using graph theory, image processing and algebraic equations. He did this all through symbolic representations via APIs, “which allows us to join up all the bits of maths together.” The same approach works for sounds, images, and any other type of media in combination.
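
The automation step can be sketched outside the Wolfram Language too. The following Python/scikit-learn snippet is an illustrative stand-in, not McLoone’s demo: the handful of records is invented, and the point is simply that cross-validated search, rather than human intuition, picks the best-suited algorithm.

    # Let the machinery choose the model for age / sex / class -> survived.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Illustrative records only, not the real Titanic manifest.
    data = pd.DataFrame({
        "age":      [22, 38, 26, 35, 54, 2, 27, 14, 58, 20, 39, 31],
        "sex":      ["m", "f", "f", "f", "m", "m", "f", "f", "f", "m", "m", "m"],
        "class":    [3, 1, 3, 1, 1, 3, 2, 3, 1, 3, 2, 2],
        "survived": [0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0],
    })

    prep = ColumnTransformer([("cat", OneHotEncoder(), ["sex", "class"])],
                             remainder="passthrough")   # age passes through
    pipe = Pipeline([("prep", prep), ("clf", LogisticRegression())])

    # Search over candidate algorithms and settings; cross-validation decides.
    search = GridSearchCV(pipe, param_grid=[
        {"clf": [LogisticRegression(max_iter=1000)]},
        {"clf": [RandomForestClassifier(random_state=0)], "clf__n_estimators": [10, 50]},
    ], cv=3)
    search.fit(data[["age", "sex", "class"]], data["survived"])
    print(search.best_params_["clf"], round(search.best_score_, 2))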

“Stop trying to think of how you can coerce data to be one type. You can handle all kinds of different things,” McLoone said. But while computation is undeniably useful, it’s a very specific and challenging expert skill that takes a lot of time and thus money. In comes semantic programming.

Semantic Programming Makes Data, Programs, and API Specifications into the Same Kind of Object

The purpose of this level of semantic interpretation and semantic knowledge in APIs is to take data and let it go further, more intuitively and more easily. Together, the following three capabilities reduce development time and development cost:

  • flexible interpretation - Linguistics can make access to the data you call on via your API much more fault tolerant. For example, a field can require a number but accept a loose interpretation of how that number is written, like the U.S.-versus-rest-of-the-world ambiguity in date formats (see the sketch after this list).
  • restricted interpretation - Following on from the number example, you can then be more restrictive in your interpretation, only accepting values formatted a certain way and interpreting what the most important information is. The fault tolerance is still there, like recognizing that The Big Apple really means New York City.
  • semantic knowledge - In keeping with the trend toward a Semantic Web, semantic knowledge allows things to relate to one another logically. It recognizes ontological structures the way a human would, like how cities are part of states or provinces, which are part of countries. This is where a Web API, or a series of Web APIs, will find the intersection between data sets.
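
As a rough illustration of the first two ideas, here is a small Python sketch. The dayfirst flag and the nickname table are assumptions made up for this example; they are not Wolfram’s interpreter framework, just the same pattern in miniature.

    # Flexible vs. restricted interpretation of incoming API values.
    from dateutil import parser   # pip install python-dateutil

    def interpret_date(text: str, us_style: bool = True):
        """Flexible: accept many spellings of a date, resolving the
        U.S. vs. rest-of-world day/month ambiguity explicitly."""
        return parser.parse(text, dayfirst=not us_style).date()

    CITY_ALIASES = {"the big apple": "New York City", "nyc": "New York City"}

    def interpret_city(text: str) -> str:
        """Restricted: only values that resolve to a known city are accepted,
        but the matching stays fault tolerant."""
        key = text.strip().lower()
        if key in CITY_ALIASES:
            return CITY_ALIASES[key]
        raise ValueError(f"{text!r} does not look like a city")

    print(interpret_date("4/7/2014"), interpret_date("4/7/2014", us_style=False))
    print(interpret_city("The Big Apple"))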

Semantic knowledge in particular can be applied to API deployment in these ways:

  • instant cloud deployment - Everything serializes very easily when you have a symbolic representation, enabling instant deployment and translation to different servers and operating systems.
  • instant interface deployment - This can be used to find out what an API is likely to return, in order to let users decide if they want to use it.
  • parameter restriction - To return fewer results or make fewer calls, you can specify how many of something you want, like returning only the first three results.
  • output control - Computational APIs allow you to create rules for format transformation (a sketch of these last two ideas follows this list).
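
A hand-written analogue of those last two points, in Python with Flask rather than Wolfram’s deployment tools, might look like the sketch below. The /passes route and its data are invented for illustration; the point is that the API itself enforces how many results come back and what shape they take.

    from flask import Flask, Response, jsonify, request

    app = Flask(__name__)
    PASSES = [{"city": "London", "minutes": 12},
              {"city": "Paris", "minutes": 58},
              {"city": "Berlin", "minutes": 104}]

    @app.route("/passes")
    def passes():
        # Parameter restriction: return only as many results as were asked for.
        limit = min(request.args.get("limit", default=3, type=int), len(PASSES))
        rows = PASSES[:limit]
        # Output control: the same result can be rendered in different formats.
        if request.args.get("format", "json") == "csv":
            body = "city,minutes\n" + "\n".join(
                f"{r['city']},{r['minutes']}" for r in rows)
            return Response(body, mimetype="text/csv")
        return jsonify(rows)

    if __name__ == "__main__":
        app.run()   # e.g. GET /passes?limit=2&format=csv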

As should always be the focus for APIs, you need to make sure that whatever you are doing is good for the API consumer too. Semantic knowledge understands everything that’s going into that API, its restrictions, its data, and the API consumption levels.

McLoone also talked about how to use semantic knowledge to curate APIs at an abstract level. He gave the example of curating his Facebook friends into three groups—family, friends and coworkers—for which it returned a graphical interpretation.
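
A rough Python stand-in for that kind of curation, with made-up names and groups, is a graph whose nodes carry a curated label and are colored by it when drawn:

    import matplotlib
    matplotlib.use("Agg")           # render to a file, no display needed
    import matplotlib.pyplot as plt
    import networkx as nx

    groups = {"Ada": "family", "Ben": "family",
              "Cai": "coworkers", "Dee": "coworkers",
              "Eli": "friends", "Fay": "friends"}
    edges = [("Ada", "Ben"), ("Cai", "Dee"), ("Eli", "Fay"), ("Ben", "Eli")]

    g = nx.Graph(edges)
    palette = {"family": "tab:blue", "coworkers": "tab:orange", "friends": "tab:green"}
    nx.draw_networkx(g, node_color=[palette[groups[n]] for n in g.nodes])
    plt.savefig("friend_groups.png")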

Are Computational APIs the Future?

If the poor quality of Google Translate is any proof, we’re still quite far from semantic programming being able to understand natural language at the level a human does, but machine learning is getting us closer all the time.

The next place you’ll find the Wolfram Language? The Internet of Things, of course. Devices are getting smarter, with a new generation that lets you embed a whole computer, like a Raspberry Pi or an Intel Edison. “The idea that physical objects in the world can actually send out APIs of their own and be able to talk to each other, then they need to have the code running on them in a serious way. And this is where linguistics also becomes quite important because it’s an emerging field, so there’s no standards,” McLoone said. When you ask your fridge to remind you what you ate for breakfast, one may respond “three eggs” while another may say “an omelette.” We’ve got to be able to translate linguistically. He argues that the best way to do this is to import entire languages and environments onto these mini computers and embedded systems.

But this is not just tech for the Internet of Things. Computation and semantic programming will continue to give data more value by making everything the same kind of object, whether data, objects, programs or APIs, enabling you to operate across different types and structures of data and letting things work more easily and faster. That, of course, means saving time and money in production while offering a better end result to your API consumers.

McLoone ended his talk with a call to arms, inviting developers to curate their own APIs and add them to the core Wolfram Language pool, increasing both its breadth and their APIs’ visibility.

Jennifer Riggins is a writer, marketer and luddite in a technical world, obsessed with helping tech and startups sell their value to us laypeople and improve their efficiency, management practices, and message. Learning something new and laughing every single day.
