Coverity Upgrades Application Testing Platform

Application testing has become both complicated and expensive, in part because applications are often built with multiple programming languages. Looking to simplify the application testing process while helping to reduce costs, Coverity this week announced the general availability of version 7.0 of the Coverity Development Testing Platform, which includes additional algorithms for analyzing C# code and Java security.

Eric Lippert, architect with Coverity's research and development division, says Coverity is working to unify the testing tools that organizations need to use in support of different application development projects.


When it comes to applications, Lippert says APIs have emerged as the most common problem area for developers. All too often, developers wind up trying to use an API in a way that was never intended. That misuse of the API often leads to performance issues that only manifest themselves after the application is rolled into production.
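A minimal sketch of the kind of misuse Lippert describes: an API call that behaves correctly but is used against its intended pattern, so the cost stays hidden until production data grows. The class and method names here are illustrative, not from Coverity; the example contrasts repeated `List.contains()` lookups (a linear scan on each call) with the intended approach of building a `HashSet` for constant-time membership tests.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ApiMisuse {
    // Misuse: List.contains() is a linear scan, so calling it inside a loop
    // is O(n*m) -- fast on small test fixtures, slow once inputs grow.
    static int countKnownSlow(List<String> known, List<String> seen) {
        int hits = 0;
        for (String s : seen) {
            if (known.contains(s)) hits++; // O(n) per call
        }
        return hits;
    }

    // Intended use: copy the list into a HashSet once, then do O(1) lookups.
    static int countKnownFast(List<String> known, List<String> seen) {
        Set<String> index = new HashSet<>(known);
        int hits = 0;
        for (String s : seen) {
            if (index.contains(s)) hits++;
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> known = new ArrayList<>(List.of("a", "b", "c"));
        List<String> seen = List.of("a", "x", "c", "c");
        // Both versions return the same answer; only the scaling differs.
        System.out.println(countKnownSlow(known, seen));
        System.out.println(countKnownFast(known, seen));
    }
}
```

Both methods are functionally identical, which is exactly why this class of bug slips past functional tests and only surfaces as a performance problem later.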

In fact, one of the most active research and development areas in the realm of application testing is API mining, Lippert says. In theory, testing tools should be able to apply advanced analytics tools to determine whether an API conforms to known best practices. Rules for how an API should be used could then be constructed to more naturally guide developers, he says.
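To make the API-mining idea concrete: once analysis of many code bases suggests a usage rule (say, that `open()` is nearly always paired with a later `close()`), a checker can flag call sequences that break the mined pattern. The sketch below is a hypothetical illustration of enforcing one such pair rule over a recorded call trace; it is not Coverity's implementation.

```java
import java.util.List;

public class MinedRuleChecker {
    // Returns true if the call trace violates the mined pair rule
    // "every `first` must eventually be matched by a `second`".
    static boolean violatesPairRule(List<String> calls, String first, String second) {
        int pending = 0;
        for (String call : calls) {
            if (call.equals(first)) {
                pending++;                 // resource opened
            } else if (call.equals(second)) {
                if (pending == 0) return true; // close with nothing open
                pending--;                 // resource closed
            }
        }
        return pending > 0;                // something opened but never closed
    }

    public static void main(String[] args) {
        System.out.println(violatesPairRule(
                List.of("open", "read", "close"), "open", "close")); // conforms
        System.out.println(violatesPairRule(
                List.of("open", "read"), "open", "close"));          // leak
    }
}
```

The interesting research problem is not this check itself but the mining step: deriving rules like the open/close pairing automatically from how thousands of developers actually use an API, rather than writing each rule by hand.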


For now, however, consistently building high-quality software is prohibitively expensive, due in part to the complexity of the APIs in use, says Lippert. As a result, many organizations give the testing process short shrift, not because they are missing deadlines, but because the complexity of the application makes testing every attribute in a timely manner difficult.

Unfortunately, the issue tends to have a negative effect on the API economy as a whole. As the number of issues with APIs multiplies, integrations built on those APIs become more unstable. What works one day can easily break the next, creating problems both for the organization that developed the API and for the ones trying to use it.

That doesn’t mean organizations should shy away from APIs. It does mean that time spent testing those APIs is time well spent. After all, not only is it less expensive to fix something before it goes into production, it’s also a lot less disruptive to the business as a whole when APIs are as stable as possible.
