Not All Clouds Created Equal for Social API Access

With so many services available for integration via APIs, selecting the most appropriate cloud provider to host a service is becoming increasingly important. In November 2015, APImetrics published its second API Performance Report, which highlights the effect that the choice of cloud provider has on the reliability and responsiveness of social APIs by region.

The survey involved standardized performance tests for latency and reliability over several months from multiple geographic locations, with additional tests on pass rates for each social network via a range of cloud providers. Surprisingly, the results showed that the choice of cloud service has as much of an effect on API performance as the geographic location of the hosting server.

The tests focused on the social APIs from Facebook, Twitter and Tumblr, with calls made from locations in Europe, the U.S. and Asia. The standardized test used each API to post a status share, sending an identical payload consisting of an ID code and a time stamp to each platform, with response times measured in milliseconds. APImetrics deployed the tests in each region via four different cloud providers: AWS, Azure, Google and Softlayer.
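For illustration only, here is a minimal sketch of what such a timed status-share call might look like, assuming a generic HTTP endpoint and Python's requests library. The endpoint, token and payload field names are placeholders, not APImetrics' actual test harness or any network's real API.

```python
import time
import uuid
import requests  # assumed HTTP client; the report's own tooling is not public

# Hypothetical endpoint and token, standing in for the real status-share
# endpoints of Facebook, Twitter and Tumblr used in the report.
ENDPOINT = "https://api.example-social.com/v1/status"
TOKEN = "YOUR_ACCESS_TOKEN"

def timed_status_share():
    """Post an ID code plus time stamp and return (latency_ms, passed)."""
    payload = {
        "id": uuid.uuid4().hex,          # unique ID code
        "timestamp": int(time.time()),   # Unix time stamp
    }
    start = time.perf_counter()
    resp = requests.post(
        ENDPOINT,
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    latency_ms = (time.perf_counter() - start) * 1000
    return latency_ms, resp.ok

if __name__ == "__main__":
    latency, passed = timed_status_share()
    print(f"latency: {latency:.0f}ms, passed: {passed}")
```

Running the same call on a schedule from hosts in different clouds and regions, and recording both the latency and whether the call passed, is the general pattern behind the report's figures.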

The overall results showed that Google Cloud offered the best performance across all regions, averaging 703ms per response. In contrast, Microsoft Azure posted the slowest average response time across the three social networks at 942ms. Individual latency for each platform by cloud varied wildly, with Twitter on AWS delivering the fastest response time at 342ms and Facebook on Azure bringing up the rear at 1240ms per response.

Azure also experienced reliability issues, resulting in the lowest pass rates for both Twitter (95.79%) and Facebook (96.91%) of the four clouds tested. Azure’s average pass rate across the three platforms came in at a lowly 97.57%, in stark contrast to AWS, Google and Softlayer, which all scored above 99%.

The average latency of each social network by region also showed vast discrepancies, from Twitter’s U.S. response time averaging 205ms up to Facebook’s response time from Asia averaging 1398ms. Despite the breadth of this range, the rankings in each of the three regions saw Twitter consistently come out on top, followed by Tumblr and then Facebook.

This report shows that API performance varies according to the combination of inputs, making it vitally important to look beyond the simplistic metric of whether the API returns the correct message. You must consider the end user's location and the intended platform before selecting your cloud provider to ensure the best possible performance, and thus the best possible user experience. Google, for example, offered the lowest average latency of the four clouds across all three platforms, yet was around 25% slower than AWS and Softlayer on Twitter alone.
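As a rough sketch of the kind of evaluation this implies, rather than APImetrics' actual methodology, the snippet below groups hypothetical test records by cloud, region and platform so that mean latency and pass rate can be compared per combination before committing to a provider.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (cloud, region, platform, latency_ms, passed).
# In practice these would come from repeated runs of a timed call
# like the sketch earlier in the article.
results = [
    ("Google", "US", "Twitter", 310, True),
    ("AWS", "US", "Twitter", 290, True),
    ("Azure", "Asia", "Facebook", 1400, False),
]

def summarize(records):
    """Group by (cloud, region, platform) and report mean latency and pass rate."""
    groups = defaultdict(list)
    for cloud, region, platform, latency, passed in records:
        groups[(cloud, region, platform)].append((latency, passed))
    summary = {}
    for key, rows in groups.items():
        latencies = [lat for lat, _ in rows]
        passes = [ok for _, ok in rows]
        summary[key] = {
            "mean_latency_ms": mean(latencies),
            "pass_rate": sum(passes) / len(passes),
        }
    return summary

for key, stats in summarize(results).items():
    print(key, stats)
```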

APImetrics conducted these tests over several months, which left Facebook’s results exposed to the three outages the network experienced in September. Facebook Director of Production Engineering and Site Reliability Pedro Canahuati told the Wall Street Journal, “Each unrelated issue began as a planned code change that had unintended consequences in our ability to respond to server requests”. Had the tests taken place in August, Facebook would have ranked highest of the three, but the dip in performance dropped the world’s largest social network into third place for speed and reliability, behind both Twitter and Tumblr. None of the three social networks has responded to our queries on the subject.

 
