Most Twitter API programmers assume that they should make their API calls from the server, and just treat the website as a display layer for the data they receive from Twitter. This is not always the best approach. There are good reasons to make some API calls directly from the browser instead.
IP based rate limits
Calls that don't require authentication are rate limited by IP address. When the browser makes the call, each request counts against the visitor's own IP address rather than your server's single address, so a busy site can spread its requests across many IPs and make far more total calls than one server ever could.
Saving browser results in a server database
If you do adopt the browser model for API calls that collect data, such as Search or a user timeline, a neat trick is to pass that data back to the server through Ajax. You can call the API from the browser, display the results, and then call your own server with the data you just received. In effect you are using all of your users' browsers as a large collection grid. This approach can reduce the number of API calls you have to make from the server. For example, if you have a site that needs to collect user profile data with a call like /users/lookup, and you also need to make that call in the browser to respond to user requests, you can save the browser-generated results back on the server.
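The pattern might look something like this in browser JavaScript. The /cache/users endpoint is an illustrative name for your own server's caching route, not a real API, and Twitter's endpoints now require authentication, so treat this as a sketch of the general flow rather than working production code.

```javascript
// Sketch: fetch profile data in the browser, render it, then mirror the
// same payload back to our own server so it can be cached for later use.
// "/cache/users" is a hypothetical endpoint on your own server.

// Build the lookup URL for a batch of screen names (the API accepts
// up to 100 names per call).
function buildLookupUrl(screenNames) {
  const batch = screenNames.slice(0, 100).join(",");
  return "https://api.twitter.com/1.1/users/lookup.json?screen_name=" +
    encodeURIComponent(batch);
}

async function lookupAndMirror(screenNames) {
  // 1. Call the Twitter API from the browser.
  const res = await fetch(buildLookupUrl(screenNames));
  const users = await res.json();

  // 2. ...display `users` in the page here...

  // 3. Pass the same data back to our own server for caching.
  await fetch("/cache/users", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(users),
  });
  return users;
}
```

The key point is step 3: the server gets the data for free, paid for by an API call the browser had to make anyway.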
Lightweight, easily installed code
Another great advantage of browser-based API programming is that you can create very lightweight code. If your entire site is based on API calls from the browser, you don't need a database on the server. It is possible to create code that exists as nothing more than a collection of text files, which means it can be installed simply by copying it to a new web server. This is a great model for open source code, or even widgets, where users want the simplest installation possible.
Twitter client code belongs on a server
When I need to build API code that performs Twitter client functions, such as tweeting or following, I always run it on a server. Since this code requires OAuth, I don't want to expose the application's or the user's OAuth tokens in the browser for security reasons. It is much safer to have the browser make an Ajax call that tells the server which user to act for and passes data about the action, such as the text to tweet. That lets me keep each user's OAuth tokens safely on the server. Even if I developed a secure method of passing OAuth tokens to the browser, such as HTTPS, there would be no benefit: the rate limits still apply to the account being modified, so the limit is the same whether the call is made from the browser or the server.
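The server side of this pattern can be sketched as below. The names tokenStore, sendSignedTweet, and the request shape are all illustrative, not part of any real library; the point is only that the tokens are looked up and used on the server, and never travel to the browser.

```javascript
// Sketch: the browser's Ajax call says which user to act for and what to
// tweet; the server looks up that user's OAuth tokens and makes the call.

// Server-side token storage -- in production this would be a database
// keyed by your own session or user id, never exposed to the browser.
const tokenStore = new Map();

// Validate the Ajax payload before acting on it.
function validateTweetRequest(body) {
  if (!body || typeof body.userId !== "string") return "missing userId";
  if (typeof body.text !== "string" || body.text.length === 0) return "missing text";
  if (body.text.length > 140) return "tweet too long"; // 140 was the limit of the era
  return null; // null means the request is valid
}

// Handle one Ajax request from the browser. sendSignedTweet stands in
// for an OAuth library call that signs and sends the request to Twitter.
async function handleTweetRequest(body, sendSignedTweet) {
  const error = validateTweetRequest(body);
  if (error) return { status: 400, error };

  const tokens = tokenStore.get(body.userId);
  if (!tokens) return { status: 403, error: "user not authorized" };

  // The OAuth-signed call happens here, entirely on the server;
  // the tokens never leave this process.
  await sendSignedTweet(tokens, body.text);
  return { status: 200 };
}
```

Note that the browser identifies the user by your site's own id, not by any Twitter credential, so a stolen Ajax request can't be replayed against a different account's tokens.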
Don't take advantage of users
I'm sure that when some people read my mention of browsers as a collection grid, a light bulb will go off. While it is technically possible to have a browser run background processes for your site whenever your web pages are displayed, this is very unethical. I don't see anything in the Twitter TOS that explicitly forbids it, maybe because their lawyers never thought of it, but it does fall under the Twitter developer guidelines, which warn against surprising users. People would be genuinely surprised to learn that you were stealing CPU cycles and bandwidth from them by running API calls in their browser that they don't need. There is also the chance of slowing down or even freezing their browser if you do too much processing. I would limit any browser API calls to just those that directly benefit the user whose browser is making the request.