What is GraphQL and How Did It Evolve From REST and Other API Technologies?

This is Part 1 of the ProgrammableWeb API University Guide to GraphQL: Understanding, Building and Using GraphQL APIs

The desire to share structured data in a meaningful way has been a driving force behind information exchange since the first data entry clerk entered billing information into a mainframe computer. It's been a challenging task that has become even more difficult with the unyielding explosion of data created by the Internet. Fortunately, open standards such as TCP/IP, HTTP, XML, and JSON have made sharing data between different data domains easier. TCP/IP and HTTP (aka "the World Wide Web") provide common ways to move data between domains. XML and JSON have become the standard formats by which to structure data. Also, as mobile devices and cloud computing replace desktop PCs and bare-metal servers in the client-server paradigm, we're seeing APIs based on these open standards become the way by which data is made available to consumers through Web and mobile apps. At the forefront of these API technologies are HTTP-based APIs, REST, gRPC and GraphQL.

Each of these API technologies has had a dramatic influence on how software is made and used in the age of the Internet. Any one of them is worthy of a book. The technology we're going to cover in this series of articles is the relative newcomer, GraphQL.

Within ProgrammableWeb's API directory, GraphQL is one of the architectural styles that can be assigned to an API. REST and RPC are other examples. As an architectural style, GraphQL is growing in popularity. (See Figure 1.)

Figure 1: Interest in GraphQL has grown significantly since 2015, according to Google Trends.


Companies such as Atlassian, Credit Karma, GitHub, Intuit, KLM, Pinterest, Shopify, The New York Times, WordPress and Yelp have made it a prominent part of the way they access data both privately and to the public.

Although the company still doesn't offer a public implementation, GraphQL was created and first used at Facebook. As GraphQL co-creator Nick Schrock wrote in 2015,

"It [GraphQL] was invented during the move from Facebook's HTML5-driven mobile applications to purely native applications. It is a Query Language for graph data that powers the lion's share of interactions in the Facebook Android and iOS applications. Any user of the native iOS or Android app in the last two years has used an app powered by GraphQL."

GraphQL is having a dramatic impact on the way data is created and consumed on the internet. The technology attempts to make exchanging data across a variety of data domains discoverable and efficient, an emerging area of developer need that HTTP-based REST APIs are less equipped to handle. It even tries to fulfill the promise of the Semantic Web. In short, GraphQL might very well be the next step toward unifying data across the internet in a way that is meaningful and machine-readable.

In this opening article of the series, I'm going to present an introduction to GraphQL — what it is and how it came about. Then, in the articles that follow, I'll discuss the features of GraphQL in detail from an operational perspective. I am going to do this by creating a GraphQL server and commenting on the details of the implementation. After discussing the nuts and bolts of GraphQL, the next article will provide an in-depth analysis of how GraphQL applies to the Semantic Web. Finally, I'll report the experiences of a number of companies that have adopted GraphQL. Their insights, both good and bad, are invaluable. Learning from the successes and mistakes of others is always a cost-effective way to move forward.

So, let's begin at the beginning. Let's talk about how GraphQL came about and why it's becoming so popular.

The Road to GraphQL

The internet and open standards have fundamentally changed the way that applications access data. Before the introduction of PC-based client-server computing, data was stored in mainframe computers that were accessed via dumb terminals. Distributing data among interested parties was accomplished, for the most part, by printing reports on paper. (See Figure 2.)

Figure 2: Paper-based reporting was an early form of data exchange


The scope of paper-based mainframe reporting was broad. It included everything from the Accounts Receivable aging reports that businesses depended on to collect monies due in a timely manner to telephone bills sent to the general public. Remember, for every telephone in the US there were month-end bills and payments exchanged between the telephone company and its customers.

Early Methods of Data Exchange

Exchanging data via paper worked, but for obvious reasons it was limited. First and foremost, paper exchange required a lot of human processing to facilitate machine-to-machine interaction. For every one of the millions of paper bills sent out to customers every month, a clerk at the telephone company had to enter payment information when a bill was paid.

As mainframe computing matured, companies began to exchange information between computers using mutually agreed-upon digital formats based on byte order or word size. In a byte-order format, a file containing a series of bytes is exchanged between sender and receiver, and both parties share a specification that defines each field of data according to a byte count. For example, bytes 0-19 can define a field first_name, bytes 20-39 define last_name, bytes 40-49 define date_of_birth, etc. Records are typically delimited by a particular byte value that represents a line break. Defining fields by word size means that a word is defined as an array of bits fixed in size. The size of a particular field is then determined by the number of words assigned to it.
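To make the byte-count scheme concrete, here is a short JavaScript sketch of how a receiver might parse one fixed-width record using the field layout described above. The record content is illustrative.

```javascript
// A sketch of byte-count parsing: field positions are fixed by a shared
// specification (bytes 0-19: first_name, 20-39: last_name, 40-49: date_of_birth).
function parseFixedWidthRecord(line) {
  return {
    first_name: line.slice(0, 20).trim(),
    last_name: line.slice(20, 40).trim(),
    date_of_birth: line.slice(40, 50).trim(),
  };
}

// Each record occupies exactly 50 characters, padded with spaces.
const record = "David".padEnd(20) + "Bowie".padEnd(20) + "1947-01-08";
console.log(parseFixedWidthRecord(record));
// { first_name: 'David', last_name: 'Bowie', date_of_birth: '1947-01-08' }
```

Both sides must agree on the exact byte offsets in advance; any drift in the specification silently corrupts every field that follows, which is exactly why this approach was so brittle.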

Regardless of which method was used, parsing data out of files on a byte-by-byte or word-by-word basis was tedious and error-prone. Each sender in the data exchange usually had a proprietary specification that defined the data format that a receiver needed to respect. It was not unusual for receivers to have a shelf full of manuals that described data exchange formats for a variety of vendors. The process was brittle and time-consuming. A better way was needed.

Around 1983 CSV appeared. CSV (comma-separated values) is a standard format for structuring data as a text file in which each line of data is a record and fields are, as the name implies, separated by commas. Also, by convention, the first line in the file describes the names of the fields to which the lines that follow correspond. Listing 1 below shows a sample of a CSV file that describes a data structure with the field names id, firstname, lastname, and dob. The lines of text that follow are records according to those field names.

id,firstname,lastname,dob
101,David,Bowie,1947-01-08
102,Nicholas,Roeg,1928-08-15
103,Rip,Torn,1931-02-06
Listing 1: Comma separated-values format (CSV) allowed mainframes to exchange structured data electronically
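A receiving program could parse a file like Listing 1 with only a few lines of code. The JavaScript sketch below is illustrative; production CSV parsers must also handle quoting rules and commas embedded in field values, which this naive version ignores.

```javascript
// A naive sketch of parsing a CSV file (as in Listing 1) into a list of records.
// Assumes simple, comma-free field values.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split("\n");
  const fields = header.split(",");
  // Pair each field name from the header line with the value
  // in the same position on each record line
  return rows.map((row) => {
    const values = row.split(",");
    return Object.fromEntries(fields.map((f, i) => [f, values[i]]));
  });
}

const file = "id,firstname,lastname,dob\n" +
             "101,David,Bowie,1947-01-08\n" +
             "102,Nicholas,Roeg,1928-08-15";
console.log(parseCsv(file));
```

Because the header line names the fields, sender and receiver no longer need a shared byte-offset manual; the file describes its own structure.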

The CSV file format allowed senders and receivers to exchange data according to a common format. However, the physical exchange still proved daunting, particularly when it needed to take place in an asynchronous manner. One solution to make asynchronous data exchange possible was to use an FTP server. (See Figure 3.)

Figure 3: Sending a CSV file to an FTP server was an early method of data exchange between mainframe systems.


In this process, both sender and receiver share access permissions to a common FTP server. The sender has read/write permissions. The receiver has read permission. The sender copies a file to the FTP server, usually using a predefined file name convention — for example, ar02283.csv.

Because file naming conventions are specific to the sender, the filename ar02283.csv might mean accounts receivable, February 1, 1983, or it could mean archive record, January 2, 1983. In order to understand the file naming convention, a common reference is needed. Thus, while CSV brought a common standard to data exchange, actually performing the exchange was still particular to the parties involved. The process remained tedious and error-prone, though it was a significant improvement over counting bytes in binary files. Still, a better way was needed. Fortunately, the internet arrived.

The Rise of Data Driven HTML

The language that's responsible for the structure and layout of web pages — Hypertext Markup Language, or HTML — is old hat by now (in fact, we're onto version 5). We've all become accustomed to being able to read the information on a website as easily as our grandparents read newspapers. But, when it first appeared, HTML was a game-changer. Before HTML came along, data was published using proprietary reporting software such as Oracle RPT or Crystal Reports. There was no open publishing standard. HTML was the open standard that provided the versatility and power to publish information to web pages. HTML was the transformational technology that made information on the World Wide Web accessible with nothing more than a mouse click.

The early history of HTML was about static data. Web developers typed hard-coded information into static files that were decorated with HTML markup. The web pages were stored on web servers accessed via web browsers. Static web pages were powerful, but they didn't provide easy access to the volumes of data stored in the hundreds of thousands of databases around the planet. Again, something more was needed. That something more, in addition to programming to the Common Gateway Interface (CGI) using a language such as Perl, was dynamic web page technologies such as PHP (see Listing 2), Java Server Pages (JSP), and Active Server Pages (ASP).

<?php
$servername = "xxxx.imbob.org";
$username = "username";
$password = "password";
$dbname = "actorDB";
// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Get the data
$sql = "SELECT first_name, last_name, wiki_url FROM MyGuests";
$actors = $conn->query($sql);
?>
<!-- iterate through the actors result set and populate the web page -->
<?php foreach ( $actors as $actor ) { ?>
    <a href="<?php echo $actor['wiki_url'] ?>"><?php echo $actor['first_name'] ?> <?php echo $actor['last_name'] ?></a>
<?php } ?>

Listing 2: Embedding data in HTML on the server side provided a way to easily publish information stored in databases on the web.

The introduction of dynamic web page technologies made it so that web developers could write server-side programs that accessed data in databases and decorated it with HTML. Whereas in the past, report writing was based on proprietary technology, now with dynamic HTML technologies, data from a database could be published using a common rendering technology — HTML — and accessed just as easily using a common data access protocol — HTTP.

Connecting Web Data Using Hypermedia

HTML also provided a feature that was never available before in any data publishing paradigm: the ability to link data in a web page to data and media in web pages in the same domain and other domains. This feature is hypertext. Effectively, hypertext put "browsing" in the web browser. A piece of content is marked like so:

<a href="URL_TO_OTHER_DATA">some data</a>

Then, a human reader can click on the link to go to the other data. (See Figure 4.)

Figure 4: HTML hyperlinks make information in multiple media formats accessible in a nonlinear manner


Hypertext embedded in web pages that are rendered by a web browser with complete access to the internet was a realization of a pre-Web prediction made by Bill Gates back in 1989. Information was now at your fingertips. Hypertext also made information retrieval nonlinear. Using hypertext, humans could read the information on a web page and follow that information anywhere at any time. It was a profound transformation to the way humans absorbed information. In addition, hyperlink technology was equally transformational for digital applications. For example, when the entity reading the link on a web page is a machine-driven search engine crawler scouring the internet, hyperlinks provide the way for those machines to find the "next" piece of data in the information chain, even if it was published on another domain. Thus, machines could now crawl the continuously growing amount of data being published to the internet. However, in order to make the data useful, it needed to be made relatable. Defining relationships between data points on the internet was the next challenge to be met as information technology evolved toward GraphQL.

Defining Relationships Across the Web

Using forward-pointing hyperlinks to continuously traverse and connect data across the internet was a significant breakthrough in information technology. Yet, in order to turn the connected data into useful information, the relationships between the various data points need to be well defined and discoverable. This fact was not lost on the World Wide Web Consortium (W3C), the standards-setting body of the World Wide Web. Thus, it built a relationship definition parameter — rel — into the HTML specification. (In fact, in years to come, the W3C expanded this relationship definition to include the standards set forth by the Semantic Web, which we'll discuss extensively in Part 4 of this series.)

Listing 3 below shows how the rel parameter can be used to define relationships in HTML. The HTML, which is taken from the web page shown above in Figure 4, contains a list of people who are connected to the web page's subject, Nicholas Roeg. The list of people is rendered as an unordered list in HTML. (The unordered list begins at line 11 and ends at line 19 in Listing 3.)

1:   <html>
2:   <head>
3:   <title>Profile</title>
4:   </head>
5:   <body>
6:    <div>Nicholas Roeg</div>
7:    <div>1928-08-15</div>
8:    <div>
9:      <div>Connections</div>
10:      <div>
11:        <ul>
12:          <li><a rel="knows workedWith likes" href="https://en.wikipedia.org/wiki/David_Bowie">David Bowie</a></li>
13:          <li><a rel="knows workedWith" href="https://en.wikipedia.org/wiki/Rip_Torn">Rip Torn</a></li>
14:          <li><a rel="knows workedWith" href="https://en.wikipedia.org/wiki/Candy_Clark">Candy Clark</a></li>
15:          <li><a rel="knows workedWith likes" href="https://en.wikipedia.org/wiki/Buck_Henry">Buck Henry</a></li>
16:           <li><a rel="knows workedWith" href="https://en.wikipedia.org/wiki/Mick_Jagger">Mick Jagger</a></li>
17:          <li><a rel="knows marriedTo" href="https://en.wikipedia.org/wiki/Susan_Stephen">Susan Stephen</a></li>
18:          <li><a rel="knows workedWith marriedTo" href="https://en.wikipedia.org/wiki/Theresa_Russell">Theresa Russell</a></li>
19:        </ul>
20:      </div>
21:     </div>
22:    </div>
23:    </body>
24:  </html>

Listing 3: The HTML rel attribute can be used to describe relationships between a parent document and a linked document.

Each item in the unordered list is tagged with a link to the person's page on Wikipedia. If a user clicks on one of those links, the browser goes to the Wikipedia URL defined by the link's href attribute. By now, this is a common technique that even a child using a Google Document can accomplish.

However, the anchor tags (<a>) in Listing 3 contain an important piece of additional information that defines how the target link relates to the hosting page. Take a look at the HTML shown at line 12 of Listing 3:

<li><a rel="knows workedWith likes" href="https://en.wikipedia.org/wiki/David_Bowie">David Bowie</a></li>

Notice the rel="knows workedWith likes" attribute highlighted in bold. The rel attribute is a part of the HTML specification that can be used to define how a web page relates to the links it targets. In this case, the attribute indicates that the subject of the link, David Bowie, has three defined relationships to the hosting web page: knows, workedWith and likes. Thus, by using the rel attribute, the web page is informing inspecting entities (i.e., a machine that crawls the page) that David Bowie knows Nicholas Roeg, that David Bowie has worked with Nicholas Roeg and that David Bowie likes Nicholas Roeg. With the rel attribute, the link now provides not only a way to navigate to related information, but also a way to understand how a web page is related to the information it links to.
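To see why this matters to machines, consider a minimal JavaScript sketch of how a crawler might extract the rel relationships from the anchor tag shown above. A real crawler would use a full HTML parser; the regular expression here is for illustration only.

```javascript
// A sketch of how a crawling machine might read the rel attribute out of a
// link like the one on line 12 of Listing 3.
const anchor = '<a rel="knows workedWith likes" href="https://en.wikipedia.org/wiki/David_Bowie">David Bowie</a>';

// Capture the rel tokens, the href, and the link text
const [, rel, href, label] = anchor.match(/rel="([^"]*)"\s+href="([^"]*)"[^>]*>([^<]*)</);

// Each token in rel names one relationship between the link target
// and the hosting page's subject (Nicholas Roeg)
const relationships = rel.split(" ").map((r) => `${label} ${r} Nicholas Roeg`);
console.log(relationships);
// [ 'David Bowie knows Nicholas Roeg',
//   'David Bowie workedWith Nicholas Roeg',
//   'David Bowie likes Nicholas Roeg' ]
```

The crawler ends up with machine-usable statements about the relationship, not just a navigable link.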

The good news is that the relationships that various people have with the web page's subject, Nicholas Roeg, are discoverable. But there is still a problem. Without a common point of reference about the meaning of the terms knows, workedWith and likes, there's really no way to understand the exact definition of the relationships. Does Nicholas Roeg know David Bowie because he bought one of the artist's albums? Or have they met in person? Without a reference defining a common vocabulary, aka an ontology, there's no way to be sure.

Creating such a common vocabulary came later, with the introduction of XML namespaces. (Ontologies are another subject that will be covered in Part 3 of this series.) Still, the rel attribute was an important beginning for unifying the web. It provided a way for machines, and not just humans, to understand data on the internet. In fact, as the amount of information on the internet continued to grow, so too did machine ingestion of that data, so much so that the limits of HTML as a machine-readable format were reached.

Outside of web browsers, machine-driven applications don't really care that much about human-readable formats. These applications want data in machine-readable formats that are easy to consume. The proliferation of machine activity on the internet was the impetus behind the rise of the machine-readable data formats XML and, later on, JSON.

The Introduction of Common Formats

HTML is OK for human consumption, but more elegant data formats are required to make machine consumption more efficient. Hence, XML and JSON. XML (Extensible Markup Language) was first proposed as a working draft by the W3C dated November 14, 1996. Since that time, the specification has gone through a number of revisions. The specification is well known today and still used in business and academia.

Listing 4 is an XML sample that could represent the list of persons described in the web page and HTML shown above in Figure 4 and Listing 3, respectively.

<persons>
  <person id="101" firstName="David" lastName="Bowie" dob="1947-01-08" />
  <person id="102" firstName="Nicholas" lastName="Roeg" dob="1928-08-15" />
  <person id="103" firstName="Rip" lastName="Torn" dob="1931-02-06" />
  <person id="104" firstName="Candy" lastName="Clark" dob="1947-06-20" />
  <person id="105" firstName="Mick" lastName="Jagger" dob="1943-07-23" />
  <person id="106" firstName="Buck" lastName="Henry" dob="1930-12-09" />
</persons>

Listing 4: XML is a standard way to format data for publication on the internet

XML is similar in syntax to HTML. It structures data with user-defined opening and closing tags. And, an opening tag can contain user-defined attributes that can be used to describe fields within the structure.

JSON (JavaScript Object Notation) is a way to format data following the standard used by JavaScript to describe objects and arrays. JavaScript and the web are inextricably linked due to JavaScript's role as the standard programming language for automating tasks within web browsers, which in turn was a launchpad for JSON's popularity with developers. JSON's star rose even further as JavaScript found its place next to PHP, Python, and Java as a server-side programming language (most notably in the form of Node.js). Listing 5 below is a sample of JSON that describes all the information described in the web page and HTML shown above in Figure 4 and Listing 3, respectively. Listing 5 also compares the JSON to an equivalent expression in XML.

Representing a movie in JSON

{
  "id": 4001,
  "title": "The Man Who Fell to Earth",
  "releaseDate": "1976-04-18",
  "director": {"id": 102, "firstName": "Nicholas", "lastName": "Roeg", "dob": "1928-08-15"},
  "actors": [
    {"id": 101, "firstName": "David", "lastName": "Bowie", "dob": "1947-01-08"},
    {"id": 103, "firstName": "Rip", "lastName": "Torn", "dob": "1931-02-06"},
    {"id": 104, "firstName": "Candy", "lastName": "Clark", "dob": "1947-06-20"},
    {"id": 106, "firstName": "Buck", "lastName": "Henry", "dob": "1930-12-09"}
  ]
}

Representing a movie in XML

<movie id="4001">
  <title>The Man Who Fell to Earth</title>
  <releaseDate>1976-04-18</releaseDate>
  <director id="102" firstName="Nicholas" lastName="Roeg" dob="1928-08-15" />
  <actors>
    <actor id="101" firstName="David" lastName="Bowie" dob="1947-01-08" />
    <actor id="103" firstName="Rip" lastName="Torn" dob="1931-02-06" />
    <actor id="104" firstName="Candy" lastName="Clark" dob="1947-06-20" />
    <actor id="106" firstName="Buck" lastName="Henry" dob="1930-12-09" />
  </actors>
</movie>

Listing 5: Compared to XML, JSON is a more concise format for publishing information on the internet

Notice that while both XML and JSON provide a means for structuring data in a way that's agnostic of any technology or vendor, JSON has the benefit of being more concise, as demonstrated in Listing 5 above. Thus, it's gaining broader acceptance in business and academia. While there is still some use of XML, JSON is becoming the preferred method for structuring data using a text-based format. In fact, as you'll see when we take a look at using GraphQL to work with graph data, the information will be retrieved in JSON format.
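JSON's machine-friendliness is easy to demonstrate. In JavaScript, a JSON document such as the movie structure in Listing 5 parses directly into a native object that application code can traverse; the snippet below embeds an abbreviated copy of that JSON.

```javascript
// An abbreviated copy of the movie JSON from Listing 5,
// as it might arrive over the wire
const response = `{
  "id": 4001,
  "title": "The Man Who Fell to Earth",
  "releaseDate": "1976-04-18",
  "director": {"id": 102, "firstName": "Nicholas", "lastName": "Roeg", "dob": "1928-08-15"},
  "actors": [
    {"id": 101, "firstName": "David", "lastName": "Bowie", "dob": "1947-01-08"},
    {"id": 103, "firstName": "Rip", "lastName": "Torn", "dob": "1931-02-06"}
  ]
}`;

// One parse call yields a fully navigable object graph in memory
const movie = JSON.parse(response);
console.log(movie.director.lastName); // Roeg
console.log(movie.actors.length);     // 2
```

No custom parsing code, field offsets, or vendor manuals are involved; the format itself carries the structure.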

Mobile Devices, APIs and REST

The days of a single application working with a single data source are over. Today, it's typical for an application to aggregate and transform data from a variety of sources into datasets that are special to the need at hand. And, as mobile devices become increasingly more powerful, more aggregation and transformation activity is happening directly on the client (i.e., a mobile app running on a smartphone or a JavaScript-driven web app running in a browser). A result is that proprietary data access methods and formats are being abandoned in favor of generic, open data access technologies and common data formats. In other words, where in the past applications used a proprietary flavor of SQL to work with data stored in a proprietary database such as Oracle, DB2 or SQL Server, today application developers prefer to use generic, internet-based APIs that return data in agnostic, text-based formats such as XML and JSON, or data that is serialized in an open-source binary format such as protocol buffers. This level of abstraction reduces development time and increases the ability of an application to scale. API-based development is a fundamental transformation in the way software is made.

Before GraphQL came along, most popular APIs used an adaptation of REST. REST, an acronym for Representational State Transfer, is an architectural style defined by computer scientist Roy Fielding in a doctoral dissertation published in 2000. REST is a comprehensive approach to software design that utilizes the basic features of the web's HTTP protocol to work with applications. This reliance on HTTP is why REST APIs are often called Web APIs and vice versa (even though not all HTTP-based APIs adhere to the fundamentals of REST). In REST, an application represents itself as URIs within a domain that are accessed using the standard HTTP methods GET, HEAD, POST, PUT, PATCH, DELETE, CONNECT, OPTIONS and TRACE to perform actions upon the application. These methods, also known as "verbs," are identical to those used when a web browser issues a request to a web site. The application responds to these requests with data, status codes and other information contained in the response header. Also, an application can return URIs within the responses that describe subsequent actions available to execute. For example, the following URI is an API published by the domain Open Library. (Open Library provides books online for free.)

https://openlibrary.org/api/books?bibkeys=ISBN:0451526538&format=json

The URI describes a resource, books. Also, the URI has a query string that indicates a particular book resource, according to its ISBN number.

Listing 6 below illustrates the response to a request made against the URI shown above using the HTTP GET method.

The response contains information about the book resource in JSON format. Notice that the response returns not only the bib_key field containing the ISBN number but also the field preview, with a value of noview indicating that no preview is available. Also, the response contains three other fields that have URIs as values. These URIs indicate the next steps possible in the workflow for this particular application.

  "ISBN:0451526538": {
    "bib_key": "ISBN:0451526538",
    "preview": "noview",
    "thumbnail_url": "https://covers.openlibrary.org/b/id/295577-S.jpg",
    "preview_url": "https://openlibrary.org/books/OL1017798M/The_adventures_of_Tom_Sawyer",
    "info_url": "https://openlibrary.org/books/OL1017798M/The_adventures_of_Tom_Sawyer"

Listing 6: A response from a RESTful API that contains URIs describing subsequent actions for viewing a thumbnail image, previewing the book or getting general information about the book, The Adventures of Tom Sawyer.

The client application can call the URI associated with the field thumbnail_url to view a thumbnail image of the book. The application can call the preview_url to get a preview of the book, or it can call info_url to get more information about the book.

As you can see, REST is using the concepts of hypertext and hypermedia to indicate the next possible actions and data points available in the application's workflow and information chain. Using hypertext and hypermedia to describe options by which to continue to view or alter an application's state per a given response is a powerful feature of REST. In order for an API to fully support REST, it needs to provide forward-pointing references in a response. APIs that allow clients to do nothing more than perform Create, Read, Update and Delete (aka "CRUD") actions on resources are considered to be only RESTful (i.e., they bear some but not all characteristics of REST). It's a subtle distinction, but an important one nonetheless.

REST and RESTful APIs have transformed the way developers create applications. APIs add a lot of elegance and efficiency to software design. However, RESTful APIs have a few drawbacks; the most prominent are that they tend to create a lot of round-trip traffic (due to the multiple requests they often make to advance an application's state), and that they are not easily recursive. For example, in the books API shown above, once the request for a particular book resource is returned, another trip to the network is necessary to get follow-up information such as the book's thumbnail or additional details. Having to make multiple trips to the network to get the complete information chain gives way to the recursion problem.

There's no easy way to get a REST or RESTful API to return portions of a given information chain recursively. This means that if, as in the books API, I want to get and show not only the additional book information but also the information within that information, it can't be done in one declarative statement. I have to go back to the network for the additional information.
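The round-trip problem can be sketched in a few lines of JavaScript. The fetchJSON function below is a hypothetical stand-in for a real network call, and the URLs and payloads loosely mimic the book example above; the point is simply that assembling one logical piece of information costs several requests.

```javascript
// A sketch of the round-trip problem. fetchJSON stands in for a real network
// call; each invocation below would be a separate trip to the server.
let roundTrips = 0;
function fetchJSON(url) {
  roundTrips += 1;
  // Stubbed responses standing in for real API payloads
  const fakeResponses = {
    "/books?isbn=0451526538": {
      bib_key: "ISBN:0451526538",
      thumbnail_url: "/thumbnails/295577-S.jpg",
      info_url: "/info/OL1017798M",
    },
    "/thumbnails/295577-S.jpg": { bytes: "..." },
    "/info/OL1017798M": { title: "The Adventures of Tom Sawyer" },
  };
  return fakeResponses[url];
}

// To assemble the complete picture of one book, a RESTful client must
// follow each URI in the first response with another request.
const book = fetchJSON("/books?isbn=0451526538");
const thumbnail = fetchJSON(book.thumbnail_url);
const info = fetchJSON(book.info_url);

console.log(roundTrips); // 3
```

Three requests for one logical entity; on the slow mobile networks of the early smartphone era described below, every extra trip was a visible delay.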

The problems of network round trips and recursion afflicted Facebook as it tried to make its News Feed feature more performant. The first thing that the client applications did was load the News Feed. Then, if the user wanted to view the comments associated with a particular post in that feed, or find out more about the people making those comments, the client application's only option was to make trips back to the network. The process was time-consuming in terms of client execution. Also, the programming it took to implement the behavior was brittle. Making a change was hard. The way Facebook addressed these and other problems was to create GraphQL.

GraphQL Redefines Data Access on the Internet

GraphQL was created by Facebook to address a very specific problem: how to control its news feed in native mobile applications. The person responsible for the Facebook News Feed was Lee Byron, one of the co-creators of GraphQL.

When he was interviewed for this series of articles, Byron told ProgrammableWeb that he and his team at Facebook worked for years to optimize the News Feed in its various iterations (Byron has since left Facebook to lead web engineering at the commission-free investment startup Robinhood). So had other teams within Facebook. Early versions of the News Feed were built on an internal RESTful API developed around 2009. It was a feature developed for a group of third-party companies wanting to work with the News Feed data. At the time, the API was little known inside Facebook. Byron got wind of the API in 2012 while reviewing some refactoring work his developers were doing to improve the News Feed code. While the API provided some utility, Byron noticed that large segments of data were missing from the feed, data such as comments on a post or aggregations of data emitted among friends. Byron realized the API had two significant drawbacks. One was network latency. According to Byron:

"REST really wants to have one model per URL. Recursion is just really difficult to correctly model in that framing, especially when, in order to resolve any of the links between these things, you need to go back to the network. And here we're talking about relatively early days of the smartphone world where the vast majority of people are still on 3G, 4G isn't even a thing yet, let alone LTE. So network is absolutely the bottleneck."

The other constraint was recursion. The API lacked recursion mechanisms, making it difficult to get additional information about a particular data point on demand, such as viewing a list of friends liking a particular story. Byron and his team began to look for a new way to approach the publication of News Feed data.

At the time, Facebook had released a new technology, FQL (Facebook Query Language), which was a derivation of SQL. Unlike SQL, which typically interacts directly with a given database engine, FQL was designed to query against a code abstraction layer that represented News Feed data. This code abstraction layer linked various pieces of Facebook's application tier together to fulfill FQL queries.

FQL addressed the network bottleneck issue, but it fell short of addressing the recursion problem. Writing recursive FQL queries was difficult. Development teams using FQL needed to have at least one member with a deep understanding of its workings in order to make server-side operations performant. There weren't a lot of people on staff with this type of talent. Faced with a limited number of developers who could do the FQL optimization work and the growing complexity and volume of the backend queries created to support the demands of the News Feed, Byron decided to look for a better way. That better way required that he and his fellow engineers change their thinking about data structures. They needed to move away from conceptualizing datasets as tables toward a different type of data structure: the object graph. This change in thinking was critical to the emergence of GraphQL.

From Data Tables to the Object Graph

Although FQL allowed front-end developers to get at Facebook's News Feed data faster, it didn't solve a fundamental architectural round-peg, square-hole problem. Speaking of GraphQL co-creator Nick Schrock's assessment of FQL, Byron told ProgrammableWeb:

"As it turns out that Nick [Schrock, creator of FQL] who had been working on FQL was also frustrated with FQL, but for very different reasons. He felt that FQL was squishing a square peg through a round hole. On the server side of Facebook, the way all the abstractions are set up is to think about data in terms of graphs. So there's objects that relate to other objects with one or one-to-many [relationships]. And everything is written in a very graphy sort of language. But FQL being a sort of variant of SQL wants to think about everything as tables and join tables and joins. And those two ideas in Nick's opinion didn't fit very well together. And he felt that while what he had built ended up working, it felt very hacky.

Both the client-side and server-side teams were uncomfortable working with FQL. Developers on both sides talked about data in terms of an object graph, yet FQL was fundamentally tabular in concept. As Byron reported to ProgrammableWeb:

"You've got a square-peg, round-hole problem on the server and a round-peg, square-hole problem on the client, so we thought, 'hey we've got to get rid of this table abstraction altogether and get back to round-peg, round-hole.'"

Thus emerged the idea for GraphQL, which was built from the ground up by its co-creators Lee Byron, Dan Schafer, and Nick Schrock to be an API and query language for object graphs.

Figure 5: An object graph structures data according to nodes and edges
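The "graphy" thinking Byron describes can be pictured with plain objects. In the sketch below (illustrative only; the names and shapes are invented for this article, not taken from Facebook's actual model), each object is a node and each property that references another object is an edge, with one-to-many edges expressed as arrays:

```javascript
// A tiny object graph: nodes are plain objects, edges are references.
const roeg = { firstName: "Nicholas", lastName: "Roeg", knows: [], likes: [] };
const bowie = { firstName: "David", lastName: "Bowie" };

// One-to-many edges: a director "knows" and "likes" other people.
roeg.knows.push(bowie);
roeg.likes.push(bowie);

const movie = {
  title: "The Man Who Fell to Earth",
  directors: [roeg],  // edge to one or more director nodes
  actors: [bowie]     // edge to one or more actor nodes
};

// Traversing edges requires no join tables -- just follow references.
console.log(movie.directors[0].knows[0].lastName); // "Bowie"
```

Notice that relationships are navigated by following references from node to node, which is exactly the mental model Facebook's server-side abstractions already used.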

GraphQL is intended to be used to create APIs whose data models can be retrieved with a single request to the server, and it is defined to support declarative recursion from within a single query. Declarative recursion means that a developer can write one query that effectively says, "show me a list of movies according to title, releaseDate, directors and actors, and show me who each director knows and likes" (see Figure 6). The developer can delve deeper into the graph if so desired. For example, the query can be extended to recurse further down the graph, asking for the likes and knows of the people that the director likes and knows, and so on.

Figure 6: GraphQL provides API access to entities and their relationships using continuous recursion

Fulfilling the query happens behind the scenes. The developer doesn't have to write any of the elaborate joins that are typical when working with the tables of a relational database. The object graph is the building block upon which queries are executed.
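To make the "no joins needed" point concrete, here is a minimal sketch of resolving a nested selection against an in-memory object graph. This is not GraphQL syntax and not how any real server is implemented; the selection format and data are invented for illustration:

```javascript
// Resolve a nested selection against a node of an object graph.
// A selection is an object whose keys are field names and whose values
// are either `true` (a scalar field) or a nested selection (an edge).
// Recursion, not joins, walks the graph.
function resolve(node, selection) {
  const result = {};
  for (const [field, sub] of Object.entries(selection)) {
    const value = node[field];
    if (sub === true) {
      result[field] = value;                            // scalar field
    } else if (Array.isArray(value)) {
      result[field] = value.map(v => resolve(v, sub));  // one-to-many edge
    } else {
      result[field] = resolve(value, sub);              // one-to-one edge
    }
  }
  return result;
}

// Invented data and selection, mirroring the movie example.
const graph = {
  title: "The Man Who Fell to Earth",
  directors: [{ firstName: "Nicholas", lastName: "Roeg", dob: "1928-08-15" }]
};
const selection = { title: true, directors: { lastName: true } };

console.log(resolve(graph, selection));
// { title: 'The Man Who Fell to Earth', directors: [ { lastName: 'Roeg' } ] }
```

Because the data is already a graph, going one level deeper is just one more recursive call, which is why declarative recursion falls naturally out of this model.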

The important thing to understand about GraphQL is that it's intended to provide a way to retrieve structured, recursive data within the constraint of a single request to the server. Once the initial recursive declaration is made, no other action needs to take place. It's also important to understand that GraphQL is only a specification, just as SQL is only a specification. GraphQL itself is neither an API nor a product; implementing a technology that supports the specification is another activity altogether. The specification is what allows anybody to work with a GraphQL API regardless of the underlying technology and language used to publish data through the API. GraphQL is platform agnostic, and there are in fact several implementations for a variety of platforms. However, in order to work with a GraphQL API, a fundamental understanding of the specification is required.

Understanding the GraphQL Specification

As mentioned above, GraphQL is an open source specification; a GraphQL-compliant API is implemented in a specific technological framework. For example, the implementation used in this series is Apollo Server, which is powered by Node.js. There are also implementations in C#/.NET, Go, Ruby, Java and Python, among others.

The GraphQL specification is distinctive in six ways:

  • The query language itself is special.
  • The specification requires the use of custom object types to define data models.
  • GraphQL requires that an API support implementations of the following operations: Query, Mutation and Subscription.
  • The specification supports abstract types such as interfaces and unions.
  • The specification supports introspection.
  • The specification supports publish-and-subscribe messaging. Within ProgrammableWeb's API directory, such publish-and-subscribe APIs fall under the larger umbrella of push/streaming APIs: APIs that let clients know when there's new information (as opposed to the client having to constantly check or "poll" an API for updates).

The following sections describe each feature in detail.

The GraphQL Query Language

The GraphQL query syntax is special. It's a declarative format that looks something like a cross between JSON and Python. The query language uses curly-bracket syntax to define a set of fields within an object (aka an entity). But unlike JSON, which uses commas to delimit fields, a GraphQL query uses line breaks. Listing 7 below shows an example of a GraphQL query for a particular movie, along with the result of that query.

GraphQL query:

  {
    movie(id: "6fceee97-6b03-4758-a429-2d5b6746e24e") {
      title
      releaseDate
      directors {
        firstName
        lastName
        dob
      }
      actors {
        firstName
        lastName
        roles {
          character
        }
      }
    }
  }

Result:

  {
    "data": {
      "movie": {
        "title": "The Man Who Fell to Earth",
        "releaseDate": "1976-04-18",
        "directors": [
          { "firstName": "Nicholas", "lastName": "Roeg", "dob": "1928-08-15" }
        ],
        "actors": [
          { "firstName": "David", "lastName": "Bowie",
            "roles": [ { "character": "Thomas Jerome Newton" } ] },
          { "firstName": "Rip", "lastName": "Torn",
            "roles": [ { "character": "Nathan Bryce" } ] },
          { "firstName": "Candy", "lastName": "Clark",
            "roles": [ { "character": "Mary-Lou" } ] },
          { "firstName": "Buck", "lastName": "Henry",
            "roles": [ { "character": "Oliver Farnsworth" } ] }
        ]
      }
    }
  }

Listing 7: The GraphQL query on the top defines a result shown on the bottom

The meaning of the query in Listing 7 is as follows: "Show me information about the movie with the id 6fceee97-6b03-4758-a429-2d5b6746e24e. The information to return is the movie's title and release date. Also show me the directors of the movie, according to firstName, lastName, and dob. And return the collection of actors in the movie according to firstName, lastName, and the role or roles each actor played."

The result of the query defined at the top of Listing 7 is shown at the bottom of the listing.
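On the server, a query like the one in Listing 7 is typically fulfilled by resolver functions, each of which knows how to fetch one field of the graph. The following is a minimal, dependency-free sketch of that idea; the data, ids, and function names here are hypothetical, and a real implementation such as Apollo Server has its own resolver API that differs from this:

```javascript
// Hypothetical in-memory data keyed by id (invented for illustration).
const movies = {
  "6fceee97-6b03-4758-a429-2d5b6746e24e": {
    title: "The Man Who Fell to Earth",
    releaseDate: "1976-04-18",
    directorIds: ["d1"]
  }
};
const people = {
  d1: { firstName: "Nicholas", lastName: "Roeg", dob: "1928-08-15" }
};

// Resolver-style functions: each field of the query maps to a function
// that knows how to fetch that piece of the object graph.
const resolvers = {
  movie: (id) => movies[id],
  directors: (movie) => movie.directorIds.map(id => people[id])
};

// Fulfilling part of Listing 7 by composing resolvers:
const m = resolvers.movie("6fceee97-6b03-4758-a429-2d5b6746e24e");
console.log(m.title);                          // "The Man Who Fell to Earth"
console.log(resolvers.directors(m)[0].lastName); // "Roeg"
```

The point of the sketch is that the client's single request is decomposed on the server into per-field lookups against the graph, so the client never has to know how (or from where) each field is fetched.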


Be sure to read the next GraphQL article: GraphQL APIs for Everyone: An In-Depth Tutorial on How GraphQL Works and Why It's Special