How the USDS is Modernizing Medicare’s 50-Year-Old Payment System

When asked what classifies a project as high risk, Haselton referred to studies showing “that when you add over $15 million to a project, that the success ratio of the project goes down to 10%. Next you look at how visible a project is, is it getting a lot of political notice? Has the government had a good history of completing similar projects?” This project was a fairly unique one that the government didn’t have much experience with, which ratchets up the risk factor.

So there are three risk factors that the Medicare Payment Modernization project satisfies: high cost, high visibility, and no track record of the government successfully finishing similar projects. In addition to the risk factors, when deciding which project to take on, Haselton’s team also looked at whether the Medicare project would have a massive impact. Ensuring that 4% of the nation’s GDP continued to flow without interruption qualified.

The USDS develops a strategy to modernize the shared system

With the Medicare project identified, the USDS team was now faced with developing a strategy to modernize such an old system.

An earlier project that Haselton’s team worked on was the Quality Payment Program within CMS. That project resulted in the successful launch, within its first year, of one of the first public-facing APIs for CMS. The Quality Payment API collected data based on quality measures for the care a provider gave beneficiaries, and that data was used to inform the final payouts back to providers.

Having proven their methods with the Quality Payment Program, the USDS team advocated for an API-first approach within the agency when taking on the Medicare project. The argument was that this is how an organization needs to approach modern development: it needs to allow hooks into its system that remove friction from the Medicare ecosystem, hooks that let it accept data or give data out and that democratize innovation by letting the industry build on top of it. In other words, it needs to create a platform.

To get started with the project, the USDS did a one-day deep dive between the engineers on their team and the business owner, the system owner, and the contractor maintaining the shared system. The result was a one-page white paper describing what needed to be done to modernize the legacy system. Haselton’s team then stepped away to finish other projects in their queue while CMS took the white paper and prototyped modernization proofs of concept. A year later, USDS revisited the project and re-engaged as a team.

At this point, getting buy-in from the various stakeholders was critical to moving the project forward. Since the payment system is the heart of CMS, there were a lot of voices at the table. Stakeholder alignment wasn’t an easy process, but at the same time it wasn’t a hard sell. As Haselton explained, agreeing on first principles, such as the need to update a system so vital to the economy, was straightforward. Coming up with a product roadmap, however, proved to be harder. There were questions that needed to be answered, such as “what are some of the first APIs we want to create? Who are they going to benefit? Those [kinds of] typical product development questions...that was a little more difficult to get people on the same page,” according to Haselton. By the end of this process, though, USDS had secured strong support from the highest level of leadership within CMS down to the people implementing the project. Haselton felt by that point that “we’re all rowing in the same direction. We all know what we need to get out of it.”

With buy-in secured, the question then turned to how best to go about modernizing such an old system. The strategy was to take a small, incremental approach, with the basic steps being: introducing a cloud environment, identifying the pieces of business capability and logic that could be pulled out, refactoring the code from COBOL to a modern programming language, putting an API in front of it, and finally having the mainframe call out to the cloud.

The first step was to introduce a cloud environment so that they could start to migrate off the mainframe, which addresses the issues of being coupled to an old language and being locked in to the mainframe itself. Haselton explained that having connectivity between the mainframe and the cloud was the first big proof of concept to make sure that the hypothesis of the migration would work.

Once connectivity was established, the team could look into the mainframe for pieces of business logic that were self-contained, meaning they had no dependencies on them and could be broken out as network-based, API-led services. Those pieces of code were then put into the cloud environment, and the team ran tests involving the mainframe reaching out, calling the cloud-based services, and running the business logic there in the cloud.
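
To make that call-out pattern concrete, here is a minimal, hypothetical sketch in Java of a test client standing in for the mainframe reaching out to one of those cloud-hosted services. The endpoint URL, payload fields, and service name are illustrative assumptions, not details from CMS’s actual system.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical test client: stands in for the mainframe "reaching out"
// to a piece of business logic that now runs as a cloud-hosted service.
public class CloudCallOutTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Illustrative payload and endpoint; the real services, fields,
        // and URLs in the CMS environment are not public.
        String claimJson = "{\"claimId\":\"TEST-0001\",\"procedureCode\":\"99213\",\"units\":3}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://cloud-env.example.internal/pricing/calculate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(claimJson))
                .build();

        // The test passes only if the cloud service returns the same result
        // the original mainframe routine produced for the same input.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println("Body:   " + response.body());
    }
}
```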

When asked what modern programming language is being used, Haselton said that it was important to first take a step back and consider the approach of doing a full rewrite of those codebases versus using automated tools. It simply isn’t feasible to rewrite millions of lines of code and expect a system that behaves identically once it is migrated to the cloud. So instead, the USDS team used tools to convert the COBOL code to Java. This resulted in a Java program that looks like COBOL and can therefore be run on the mainframe.
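
Automated conversion tools tend to produce very literal translations. As a rough, hypothetical illustration of what “Java that looks like COBOL” can mean (the program, field names, and rates below are invented for this sketch, not CMS code):

```java
// Hypothetical output of an automated COBOL-to-Java conversion tool.
// The structure mirrors the original COBOL: flat "working storage"
// fields and paragraph-like methods called in sequence.
public class PAYCALC {

    // WORKING-STORAGE SECTION (invented fields, not real CMS data)
    private static java.math.BigDecimal WS_CLAIM_AMT = java.math.BigDecimal.ZERO;
    private static java.math.BigDecimal WS_ADJ_RATE  = new java.math.BigDecimal("0.9800");
    private static java.math.BigDecimal WS_PAY_AMT   = java.math.BigDecimal.ZERO;

    public static void main(String[] args) {
        MAIN_PARA();
    }

    // 1000-MAIN-PARA
    private static void MAIN_PARA() {
        WS_CLAIM_AMT = new java.math.BigDecimal("125.00");
        CALC_PAYMENT_PARA();
        DISPLAY_PARA();
    }

    // 2000-CALC-PAYMENT-PARA: COMPUTE WS-PAY-AMT = WS-CLAIM-AMT * WS-ADJ-RATE
    private static void CALC_PAYMENT_PARA() {
        WS_PAY_AMT = WS_CLAIM_AMT.multiply(WS_ADJ_RATE);
    }

    // 3000-DISPLAY-PARA: DISPLAY "PAY AMOUNT: " WS-PAY-AMT
    private static void DISPLAY_PARA() {
        System.out.println("PAY AMOUNT: " + WS_PAY_AMT);
    }
}
```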

After the conversion process, the Java code was run on the mainframe to make sure that its behavior was consistent with how the system worked before. Once it proved to be consistent, the next step was to refactor that Java so it looked more like a modern web application than a mainframe application. Finally, that business logic was wrapped in an API, running in the cloud, that the mainframe can call.
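
Continuing the hypothetical sketch above, refactoring that converted code toward a modern web application might look something like the following: the business rule is pulled into a plain, testable method and exposed behind a minimal HTTP endpoint that the mainframe could call. The class names, endpoint path, and use of the JDK’s built-in HTTP server are illustrative assumptions, not CMS’s actual implementation.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.math.BigDecimal;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical refactoring of the converted logic into a small, testable
// class plus a minimal HTTP API that a mainframe job could call.
public class PaymentService {

    // The business rule, pulled out of the COBOL-style code into a plain
    // method that can be verified against the mainframe's output.
    static BigDecimal calculatePayment(BigDecimal claimAmount, BigDecimal adjustmentRate) {
        return claimAmount.multiply(adjustmentRate);
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Illustrative endpoint; a real service would parse the claim from
        // the request body rather than using hard-coded sample values.
        server.createContext("/pricing/calculate", exchange -> {
            BigDecimal payment = calculatePayment(
                    new BigDecimal("125.00"), new BigDecimal("0.9800"));
            byte[] body = ("{\"payAmount\": " + payment + "}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Listening on http://localhost:8080/pricing/calculate");
    }
}
```

The point of a refactor like this is not to change behavior but to make the logic independently testable and callable over the network, which is what lets the mainframe hand the work off to the cloud.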

Up until this point, the process has taken place in sandbox environments. The APIs are in development, with hopes that the implementation will move into production by January. Haselton notes that the first pieces of code targeted for migration to the cloud were APIs internal to the mainframe itself. Once the process has been repeated a few times, the hope is to make those APIs available to the Medicare ecosystem. In the end, Haselton hopes to “...see a world where providers and billing offices can hit those APIs directly instead of having to file claims through a series of steps that leads to the mainframe. The hope is to make the system more transparent through the APIs.”

Modernization poises Medicare to lead healthcare into the future

The Medicare Payment Modernization project is a massive undertaking, and it would seem obvious at first glance that the greatest value comes from keeping such a critical system running.

When asked what excites him most about the project, Haselton was quick to point out the greater potential impact that its success would enable:

“Decisions made by CMS have massive influence on how healthcare is provided in America. It swings a massive stick. If we are able to position this system to adopt value-based models...and accelerate that process of how new models are integrated into the system...if we can facilitate that transformation, the system can be responsible for speeding up the rest of the industry for adopting value-based care. [It] can’t be overstated enough how much the industry will respond to what CMS does.”
