How to Build a Free, Scalable API Testing Solution With Microsoft Excel

Enter sub_total in the Expected Name cell. Explanation: JSON data is expressed as a collection of name/value pairs, and sub_total is the name of the target element that we want to test. We are telling the software that we want to run a test against the sub_total element found in the API response. This allows testers to confirm exact matches for name/value combinations found in the API response.

[Screenshot: webtesting element name]

Enter 44.38 in the Expected Value cell. Explanation: We are informing the software that we expect the sub_total element to have a value of 44.38. 
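To make the name/value idea concrete, here is a minimal Python sketch of the kind of lookup the Expected Name cell drives (Python is used here for illustration only; the actual tool is implemented in Excel VBA, and the `find_element` helper and dotted-path convention are assumptions, not part of the tool):

```python
import json

def find_element(response_text, name):
    """Look up a (possibly nested) name in a JSON response.

    A dotted path such as "cart.category" descends into nested objects.
    """
    data = json.loads(response_text)
    for part in name.split("."):
        data = data[part]
    return data

# A response shaped like the exchange rates example in this tutorial
raw = '{"sub_total": 44.38, "cart": {"category": "books"}}'

print(find_element(raw, "sub_total"))      # 44.38
print(find_element(raw, "cart.category"))  # books
```

The dotted path also suggests how a nested element (such as a category inside a cart object) could be addressed.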

[Screenshot: webtesting element value]

Click on the Run Tests button found in the Excel Tab.

[Screenshot: webtesting excel tab]

Behind the Scenes: You might be wondering how the tool makes API calls using only Microsoft Excel. It uses Excel VBA (Visual Basic for Applications) to invoke MSXML2.XMLHTTP (a Windows COM library) to call external web URLs from the PC without using a web browser. Here is the code snippet that makes the actual API request:

Set oXMLHTTP = CreateObject("MSXML2.XMLHTTP")
'Third argument False makes the request synchronous
oXMLHTTP.Open HTTPVerb, TestURL, False
oXMLHTTP.setRequestHeader "Content-Type", "application/json; charset=utf-8"
If HTTPVerb = "POST" Then
  oXMLHTTP.Send Trim(JSONInputValues) 'POST the JSON request body
Else
  oXMLHTTP.Send 'GET carries no request body
End If
'Synchronous call: Send returns only after the response arrives
response = oXMLHTTP.responseText

Now let's review the results: The Actual Value cell contains the value 44.38. Explanation: The tool submitted a JSON request to the target API URL using an HTTP POST, and automatically parsed the value 44.38 from the sub_total name/value pair.
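The same POST can be sketched in Python (illustration only; the tool itself uses VBA and MSXML2.XMLHTTP, and the URL and payload below are placeholders, not the tutorial's actual endpoint):

```python
import json
import urllib.request

test_url = "https://example.com/api/cart"  # placeholder URL
payload = json.dumps({"item_count": 3})    # hypothetical JSON request body

req = urllib.request.Request(
    test_url,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
    method="POST",
)

# urllib.request.urlopen(req) would perform the actual round trip;
# here we only show how the request is assembled.
print(req.method, req.full_url)
```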

[Screenshot: webtesting actual value]

The Pass/Fail cell contains the value PASS, since the Expected Value cell equals the Actual Value cell. Explanation: The tool performs a comparison test to confirm whether the expected element value matches the actual element value returned in the API response.
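The comparison step itself can be sketched as follows (again in Python for illustration; a name such as `evaluate_test` is hypothetical, not the tool's API):

```python
import json

def evaluate_test(expected_name, expected_value, response_text):
    """Return "PASS" when the named element matches the expected value."""
    actual_value = json.loads(response_text)[expected_name]
    return "PASS" if actual_value == expected_value else "FAIL"

raw = '{"sub_total": 44.38}'
print(evaluate_test("sub_total", 44.38, raw))  # PASS
print(evaluate_test("sub_total", 99.99, raw))  # FAIL
```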

[Screenshot: webtesting pass score]

The Last Ran cell displays the automated test's completion date and time. Explanation: Each test carries a last-ran timestamp, which visually reminds the tester of the last time the test was invoked.

[Screenshot: webtesting timestamp]

The Actual Response From API cell should contain the actual JSON response from the exchange rates API. Explanation: This is a very helpful utility that displays the actual (raw) JSON response from the server. At times, you may find it useful to just quickly scan the raw data from the server. 

[Screenshot: webtesting actual response]

The Actual Response Time cell contains the actual API response time in seconds, with millisecond precision. Explanation: Another useful nugget. This provides the round-trip time of the HTTP request and response.
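Measuring the round trip is just a matter of timing the call. A minimal sketch, with `call_api` as a stand-in for the real HTTP request (the tool does this in VBA; the names here are assumptions for illustration):

```python
import time

def call_api():
    # Stand-in for the real HTTP round trip
    time.sleep(0.05)
    return '{"sub_total": 44.38}'

start = time.perf_counter()
response = call_api()
elapsed = time.perf_counter() - start

# Seconds with millisecond precision, as shown in the Actual Response Time cell
print(f"{elapsed:.3f} s")
```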

[Screenshot: webtesting running time]

Congratulations, you have successfully created your first automated test!

Reviewing Test Metrics

Test metrics help measure the consistency of the API operations' data over a long period of time. The tool automatically prepares and reports summary metrics for each test life cycle.
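As a rough illustration of what such summary metrics capture, here is a sketch that computes a pass rate and average response time over a series of recorded results (the field names are assumptions for illustration, not the tool's actual metric schema):

```python
# Each entry records one test-cycle outcome and its response time in seconds
results = [
    {"outcome": "PASS", "response_time": 0.412},
    {"outcome": "PASS", "response_time": 0.388},
    {"outcome": "FAIL", "response_time": 1.204},
    {"outcome": "PASS", "response_time": 0.401},
]

total = len(results)
passed = sum(1 for r in results if r["outcome"] == "PASS")
pass_rate = 100.0 * passed / total
avg_time = sum(r["response_time"] for r in results) / total

print(f"Tests run: {total}, pass rate: {pass_rate:.1f}%, avg response: {avg_time:.3f} s")
```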

Click on the Metrics Tab to review the test summary metrics.

[Screenshot: webtesting metrics tab]

The test summary metrics for this tutorial would look like the following:

[Screenshot: webtesting test metrics]

Summary

This JSON Web Services automated testing solution is a useful test automation platform for helping your teams convert manual API tests to automated API tests. It was designed to resemble standard Microsoft Excel test plans, which should help reduce the learning curve usually associated with managing automated API test suites.

This article demonstrated how to create your first automated test. Hopefully, this tutorial gave you some ideas for creating automated tests for your API-based solutions.

Be sure to read the next Testing article: Runscope Simplifies API Test Management With New Import/Export Functionality


Comments (7)


Will this work on Mac OS X? Will it work in other applications supporting Excel files?



This platform will not currently work on Mac OS X. A Mac-compatible solution is on our product roadmap for Q2 2016. Hope this helps. -Marc


Why is the VBA source code protected with a password?   


VBA newbie...


[Transparency: I work for SmartBear, I cover SoapUI open source and other tools]

I agree that minimalist solutions people are already familiar with are worth considering, not wrong to entertain the thought of Excel, and most people don't care if something's open source so long as it's free to them. And API testing at any level is better than none at all.

But for those who know and live the API testing space, it seems like a bit of an omission to not mention SoapUI, the open source tool that has been fulfilling this need for a decade, for free, at the top of its class.

It's great that there's finally some competition, honestly, though the new kid solutions are overly simplistic for most of the SOAP and REST APIs my community uses SoapUI to test. Real API testing happens after you unit test your endpoints. Professional testers will understand that point, so I guess the Excel solution works for...other people then.

Sorry for the sour grapes, but it seems strange for SoapUI to be omitted, especially due to behind-the-scenes competitive dynamics between API vendors in the space. I'd love to hear professional testers' opinions on this, even if it counters my points, but it's their experience that really matters.



Hi Paul,

Nice to meet you. Appreciate your comments. I am a long-time user/customer of SmartBear products. SoapUI was not intentionally omitted. This article was written for tutorial purposes, not to cover the current testing tools market. The testing solution mentioned in this article focuses on integration testing, not unit testing; JUnit and other tools are more appropriate for unit testing. I hope software testers will give it a sincere look for integration and black-box testing, as it has been a very useful solution for our API solutions. Thank you for your comments.



Well, try to bespoke one to each customer to start with. Let's assume a 20k-seat call centre looking to migrate to an automation tool for a chunk of the call, for instance. Too many people employed to perform the job of instrument matching and tool building are too analytical and not creative enough. With voice, for instance, it is too difficult to measure true sentiment utilising existing tools, so there are flawed tools people know very well, but a prejudice towards features, not true accuracy.

The first step should be to acknowledge the weighting that should be given to each tool in the marketplace you think is valid for the project, and that only a few should be used (and create/blend a new tool if the weighting is too low). Whether it's big data or live data, the tone needs to be adequately recognisable as mood, fed from there into some phonetic/musical frequency adjudication system for every key consumer segment, then matched out to an agreed group size of that consumer demographic. Psychographic segmentation might need to be established early in the experience through the testing of various voice styles and tones as an introduction to each segment. Once a true spectrum of audience is understood, you can start cross-referencing and refining to establish an even more quantified customer type.

Then the challenge is to either a) change the experience for every identified segment or b) cheaper, simply pitch one of 100 researched preferences to the audience who match, and then, and only then, start product development and identification.

Perhaps a simpler goal (starting point) would be just the agents in call centres: take a measurement of their quantitative test results, KPIs and tests, and match that to a database with the best phonetic representation of, say, 7-10 criteria. From there the data sets can commence taking on some form.

Each criteria tool is used for standard QA scoring based on its own capability and audience; then we see if the difference between the quantitative interpretation and the phonetic judgement tool(s), based upon those 7-10 agreed criteria, is closely aligned. We establish our best research team, based on industry voice and analytic instrument calibration, as a panel for show, listening in on calls to further refine and recalibrate the slice and dice of the call, the recognisable variances in pitch at various stages, and the final mood sentiment score and cost savings against the existing and live consumer mood and engagement score.


Voice 2.0 built. Next a client who may use Voice 2.22


This is not even half my pitch. It's already changed and better...


Just spent 20 minutes on it twice and it had an error on send.


Point is, we create better tools by first measuring properly what the existing ones can do when matched with certain agents and certain styles. Look for mood, timing, outcome of call, etc., and then find another 3-4 which are real and create them.


Use voice and speech experts, data analysts and a guy called Serg.


The justification of the integration tool is its ability to put this stuff together time and again. Sell the sizzle and forget the sausage.



What if I want to verify a category element that is nested under cart?