Twitter API Best Practices: Server vs. Browser API Processing

This guest post comes from Adam Green, a Twitter API consultant who builds custom Twitter apps for clients and blogs about Twitter programming. Follow him at @140dev.

Most Twitter API programmers assume that they should make their API calls from the server and treat the website as nothing more than a display layer for the data they receive from Twitter. This is not always the best approach. There are many cases where the best practice is to use JavaScript code in the browser to interact with Twitter instead.

IP-based rate limits

The strongest case for browser-based API programming is when you are running into rate limits with a server model. Let's look at a site we built for a client as an example. ThisRThat is a site that displays comparative search results for two different sets of words. The searches can be for anything the user wants, and the site needs to handle thousands of users at the same time. This can't be managed with a Streaming API connection from the server, because that connection is limited to 400 search terms. It is possible to run multiple queries with the Search API from the server, but this API call is limited to about 200 calls an hour. Twitter refuses to reveal the exact limit, but common wisdom says 200 is a good approximation.

Since the Search API rate limit is based on the IP address making the request, the solution we used was to call the Search API from the browser with JavaScript. When you do it this way, each user's browser is assigned its own rate limit. In the case of ThisRThat, we wanted to run two searches simultaneously at a refresh interval of 60 seconds, giving us a total rate of 120 calls an hour. This works fine when each user has 200 calls available.

There are other REST API calls that also use IP-based rate limiting, such as /statuses/user_timeline. The Twitter docs identify the calls that offer IP rate limiting, in their typically obscure way: when the Requires Authentication option says "supported", that means you don't have to use OAuth, and when you call without authentication, you have an IP-based limit of 150 calls an hour.
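The budgeting here can be sketched in a few lines of browser JavaScript. This is a simplified illustration, not the ThisRThat code: the 200 calls/hour figure is the approximation mentioned above, not an official number, and `runSearch` stands in for whatever Ajax/JSONP call actually hits the Search API.

```javascript
// Assumed approximation of the per-IP Search API limit (Twitter does
// not publish the exact figure; ~200/hour is the common estimate).
var IP_RATE_LIMIT = 200;

// Calls per hour generated by one browser polling `searches` queries
// every `intervalSeconds` seconds.
function hourlyCalls(searches, intervalSeconds) {
  return searches * (3600 / intervalSeconds);
}

// Start polling only if the schedule stays under the assumed limit.
// `runSearch(query)` is a hypothetical function that issues the
// actual Search API request from the browser.
function startPolling(queries, intervalSeconds, runSearch) {
  if (hourlyCalls(queries.length, intervalSeconds) > IP_RATE_LIMIT) {
    throw new Error('Schedule would exceed the per-IP rate limit');
  }
  return setInterval(function () {
    queries.forEach(runSearch);
  }, intervalSeconds * 1000);
}
```

With the ThisRThat schedule of two searches every 60 seconds, `hourlyCalls(2, 60)` gives 120, comfortably under the assumed 200-call budget of each visitor's IP address.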

Saving browser results in a server database

If you do adopt the browser model for API calls that collect data, such as searching or gathering a user timeline, a neat trick is to pass that data back to the server through Ajax. You can call the API from the browser, display the results, and then call your server with the data you got from the API. In effect you are using all of your users' browsers as a large collection grid. This approach can reduce the number of API calls you have to make from the server. For example, if you have a site that needs to collect user profile data with a call like /users/lookup, and you also need to make that call in the browser to respond to user requests, you can save the browser-generated results back on the server.
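The relay step might look like the sketch below. The `/save_tweets.php` endpoint and the stored field names are illustrative assumptions, not a real API; the result shape follows the legacy v1 Search response, which carried tweets in a `results` array.

```javascript
// Pick out just the fields worth storing before relaying to the server.
// Field names (id_str, from_user, text) follow the legacy v1 Search
// response format assumed here.
function tweetsToPayload(searchResults) {
  return searchResults.results.map(function (t) {
    return { id: t.id_str, screen_name: t.from_user, text: t.text };
  });
}

// After displaying the results, POST them back to a hypothetical
// server-side collection endpoint via Ajax.
function relayToServer(searchResults) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/save_tweets.php'); // illustrative endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(JSON.stringify(tweetsToPayload(searchResults)));
}
```

The server side then only needs to deduplicate by tweet id before inserting, since many browsers may relay the same popular tweets.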

Lightweight, easily installed code

Another great advantage of browser-based API programming is that you can create very lightweight code. If your entire site is based on API calls from the browser, you don't need a database on the server. It is possible to create code that exists as just a collection of text files, which means it can be installed by simply copying it to a new web server. This is a great model for open source code, or even widgets, where users want the simplest installation possible.

Twitter client code belongs on a server

When I need to build API code that performs Twitter client functions, such as tweeting or following, I always run that on a server. Since this code requires OAuth, I don't want to expose the application's or the user's OAuth tokens in the browser, for security reasons. It is much safer to use an Ajax call in the browser that tells the server which user to act for and passes data about the action, such as the text to tweet. That lets me keep each user's OAuth tokens safely on the server. Even if I wanted to develop a secure method of delivering OAuth tokens to the browser, such as sending them over HTTPS, there would be no benefit to this approach. The rate limits still apply to the account being modified, so the limit is the same whether the call is made from the browser or the server.
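The server side of this pattern can be sketched as below. Everything here is a hypothetical stand-in: `tokenStore` represents whatever database holds each user's OAuth tokens, and `postTweet` represents the library call that signs and sends the status update. The point is simply that the browser's Ajax request carries only the action and the text, never the tokens.

```javascript
// Server-side handler for the browser's Ajax request. The browser
// identifies the user (e.g. via a session) and supplies the tweet
// text; the OAuth tokens never leave server-side storage.
function handleTweetRequest(sessionUserId, body, tokenStore, postTweet) {
  var tokens = tokenStore[sessionUserId]; // looked up, never sent down
  if (!tokens) {
    throw new Error('No OAuth tokens stored for this user');
  }
  // postTweet is a stand-in for an OAuth-signing library call.
  return postTweet(tokens, body.text);
}
```

Because the signing happens server-side, a malicious page script can at worst trigger actions for the logged-in user, never extract credentials that would work from anywhere.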

Don't take advantage of users

I'm sure that when some people read my mention of browsers as a collection grid, a light bulb will go off. While it is technically possible to have a browser run background processes for your site whenever web pages are displayed, this is very unethical. I don't see anything in the Twitter TOS that explicitly forbids it, maybe because their lawyers never thought of it, but it does fall under the Twitter developer guidelines, which warn against surprising users. I'm sure that people would be very surprised to learn that you were stealing CPU cycles and bandwidth from them by running API calls in their browser that they don't need. There is also the chance of slowing down or blocking their browser if you do too much processing. I would limit browser API calls to just those that directly benefit the user whose browser is making the request.

Be sure to read the next Best Practices article: Twitter API Best Practices: DB Lists vs. Twitter Lists