How Curity.io's API Identity Server is Designed for Scale, Availability, and Configurability

One reason the telecommunications and cable TV networks work so well at scale is that the devices that compose them, such as routers and switches, are designed to do a few things and do those things really well. Reliability and performance are absolute priorities for telco providers like Verizon and Sprint. The bulletproof nature of their networks is driven by both the simplicity and dependability of these and other devices and, in the event something does go wrong, by the ability to reprogram network routing so that customers never realize that something went amiss.

According to Curity.io CEO and co-founder Travis Spencer, this is exactly how API providers should be thinking about dealing with identity management when it comes to not just the authentication part of accessing APIs, but the access control function as well. For example, just because an application authenticates its user for access to an API doesn't necessarily mean that that user also gets access to all of that API's resources.

Particularly in situations where scale is involved (e.g., a global bank with millions of mobile users worldwide hitting some of the bank's APIs), it can make more sense to go with a solution like Curity's Identity Server that, like the routers in telecommunications networks, is dedicated to one or two functions (i.e., authentication and access control) while also being continuously programmable. Such programmability, often discussed in terms of automated DevOps-style provisioning and configurability, makes it much easier for an organization's API infrastructure to respond to changing network conditions (e.g., autoscaling) while also responding in a timely fashion to changing market standards.

Since Spencer's team wakes up every morning thinking about how to provide the best API authentication and access management, and little else, Curity can be lightning fast in supporting the ever-evolving API standards such as OAuth and OpenID Connect, with all of the nuances that organizations might not get to appreciate with other, less dedicated solutions.

In the interview below (available in both video and audio formats), Spencer goes into further detail about Curity's offerings and many of the design decisions he and his team made such as delivering the product as an appliance instead of as a cloud offering.


Editor's Note: This and other original video content (interviews, demos, etc.) from ProgrammableWeb can also be found on ProgrammableWeb's YouTube Channel.

Audio-Only Version

Editor's note: ProgrammableWeb has started a podcast called ProgrammableWeb's Developers Rock Podcast. To subscribe to the podcast with an iPhone, go to ProgrammableWeb's iTunes channel. To subscribe via Google Play Music, go to our Google Play Music channel. Or point your podcatcher to our SoundCloud RSS feed or tune into our station on SoundCloud.



Transcript: How Curity.io's API Identity Server is Designed for Scale, Availability, and Configurability

David Berlind: Hi, I'm David Berlind, Editor in Chief for ProgrammableWeb, and this is a special sponsored version of the Developers Rock Podcast. Welcome. Today we have Travis Spencer. He is the CEO of Sweden-based Curity.io. Travis, welcome to the show.

Travis: Thanks, David. Great to be with you.

David: Yeah, it's great to have you here. We don't get a lot of startups or technology companies coming to us from Sweden. You guys are in Stockholm, right?

Travis: That's correct.

David: Yeah, I haven't even been to Stockholm, so hopefully one of these days I'll get over there and we can meet in person, but it's great to be able to film this interview remotely the way we're doing it right now. So, thanks for joining me today. I want to start off by asking you: what is it that Curity does?

Travis: We are a software company that provides an OAuth and an OpenID Connect Server to help our customers secure their data.

David: Okay. And there are a lot of companies out there doing that. Either they're doing it as their only purpose in life, or, in many cases, they're doing it as a feature built into an existing solution, for example an existing API management solution. So, why was there room for another entry into the market like Curity? What is it that differentiates Curity from those other solutions?

Travis: Sure, that's a great question. There definitely are a lot of people working in the API management space. We see ourselves as fitting into the overlap, or intersection, of API management and identity management. So, really focused on figuring out who the user is, which requires pretty advanced (and increasingly advanced) authentication, and then issuing a token: a token that represents that user and that login. It could be an interactive user, a human user like you and I, but it could also be a computer system. So there are different ways of authenticating, and that's all identity management.

Travis: But then you need to actually present that to an API to get the data that our customers are trying to secure. And that token will then be checked by some sort of gateway or API management product. So that's why we see ourselves fitting into the overlap of those two. And the thing that really differentiates us is the approach that we've taken to solve this problem. A lot of us come from the IoT background and the identity management background, so we looked at the problem in a pretty different way. We looked at solving this in a way that makes it very, very easy to operationalize and to run such a service, such a capability.

Travis: And so, the Curity product is really like a durable router in that it's all programmable, it's all configurable. It goes in, just like a base station or some other piece of telecom gear, and it just works. It just does its thing. To do that, the system has to be highly, highly robust. You can't go and touch it to reconfigure it. You need to have remote updates, and it needs to be up all the time. So really focusing on that high uptime. Another key differentiator is that it needs to be very technically complete. OAuth and OpenID Connect are very, very broad standards, and so we need to be able to support all of those different standards, not just the introductory or basic parts of them.

David: So the implication, if you don't mind me saying this, is that some of the other solutions on the market are not nearly as robust, or maybe don't understand the standards as well. You've touched on the workflow a little bit. What happens when somebody, let's say, is using a remote mobile application and that mobile application has to authenticate against an API and get access to certain data? So, we have components of OAuth working there, right? OAuth, technically, is the part that relies on the token to make this negotiation. Then there's OpenID, which I think is more connected to the part of, oh, who is this user, and to what data do they have access? Is that right?

David: Yeah. So, you're nodding your head. Yeah. And so, that workflow is fairly well known. But you said this has to be really robust: drop it in like a router, you don't touch it, and, to paraphrase you, it's kind of bulletproof. So, is the implication that some of the other products on the market that do this, or where it's a feature built into something else, are not nearly as robust or not as scalable? What's your feeling about that?

Travis: I don't want to make a judgment call so much on our competition, but I will talk about what it is that we do. It needs to be robust, for sure, but it also needs to have a lot of different features, a lot of different capabilities, because being able to log users in changes over time. The different kinds of login methods are going to vary. You're going to need to run different workflows after login. There are going to be different times when different strengths of login are acceptable. So maybe sometimes you want to log in with a very insecure login, like just an email. But other times you want to log in with a very high level of assurance of who the end user is. And that's going to depend on the actual authentication method that's being used.

David: So a lot of configurability options that may not exist with other solutions.

Travis: A lot of configurability options, but also, picking those different authentication methods can all be done within the OpenID Connect protocol. A lot of other products don't implement those aspects of the protocol. And so, that's what I'm saying: just the basics aren't enough to be able to say, "Okay, now you're trying to delete your account. It's not enough that you logged in with an email address. I need you to log in with two-factor authentication, like an OTP sent to your telephone, so I really know who it is that I'm talking to before I allow this more secure API call to be made."
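
Editor's note: The step-up authentication Spencer describes maps onto the standard `acr_values` parameter defined in OpenID Connect Core, which lets a client request a specific authentication strength. The Python sketch below shows what such a request might look like from the client side. The server endpoint, client ID, redirect URI, and ACR name are all invented for illustration; a real integration would use the values documented by your authorization server.

```python
from urllib.parse import urlencode

# Hypothetical authorization endpoint; real values depend on your deployment.
AUTHORIZE_ENDPOINT = "https://idsrv.example.com/oauth/v2/authorize"

def build_step_up_request(client_id: str, redirect_uri: str, acr: str) -> str:
    """Build an OpenID Connect authorization request URL that asks for a
    specific authentication strength via the standard acr_values parameter."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",
        # acr_values is defined by OpenID Connect Core; the ACR names
        # themselves are deployment-specific (this one is made up).
        "acr_values": acr,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

url = build_step_up_request(
    client_id="mobile-app",
    redirect_uri="https://app.example.com/cb",
    acr="urn:example:auth:otp",  # hypothetical ACR asking for an OTP login
)
print(url)
```

Before a sensitive call such as account deletion, the app would redirect the user to a URL like this one and only proceed once the stronger login completes.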

Travis: So, that's really what we're trying to do: provide robustness and completeness, and help people solve all the different challenges they have today. But also, when people put an OAuth server, an OpenID Connect server in, different use cases are always going to come up. It needs to be able to solve the use cases of not just today but the next five years, the next 10 years, so that they don't have to rip it out and put something else in. It needs to be evolvable.

David: Sure. Now, when I think about the various ends of the spectrum of scale, when it comes to authenticating inbound users or processes and then determining which resources they have access to, on one end you have APIs, let's say, that an enterprise might develop for internal consumption, and they end up getting consumed by one or two developers and maybe five programs, pieces of software that have to go in, get some data, and then mash it up with something else. And on the other end of the spectrum you have something like a bank, which has millions of users with a mobile application who have to authenticate against the backend of the bank through that application.

David: This same workflow works in both of those situations, but is there an element of scale to Curity's solution that's a little bit different from the rest of the market?

Travis: It's true that there are definitely different scenarios where you're going to have a lot more users if you're targeting your end customers than in internal cases like employees. And when getting to that scale, first of all, you're going to have more use cases. So again, having all those features to solve those different use cases is important. The product itself is built in a way that's highly, highly scalable, so it can attain high levels of throughput and concurrency to support the loads that you would expect from end customers rather than the smaller loads of internal use cases.

Travis: Other things in Curity that help with that involve the way that you can operationalize it, right? If you're going to be targeting end customers, you're probably going to have deployments that are very large scale and multi-site. So how do you deploy something here in Stockholm, Sweden and there in San Francisco to target those two different markets, and actually manage the configuration across them and be able to make configuration changes and deploy those in near real-time? That sort of thing is quite easy to do for telecoms with routers, and that's the approach we took: this is identity infrastructure, so it should be able to scale globally and be managed and monitored at that sort of level.

David: Okay. So scale seems to be one of your areas of specialty. Given that, what sort of businesses do you typically target with this solution?

Travis: Well, the product itself isn't unique to any specific industry or any specific vertical, but people who need that level of scale are large-scale gaming companies, banks, multinational corporations like energy companies, retailers, things of that sort. But, really, because OpenID Connect is about, as you said, figuring out who someone is, logging them in, and then integrating that into an application, it could also be a technology company that's providing a mobile app and an API. So, it definitely varies.

David: And who isn't a technology company today, when you really think about it?

Travis: Exactly.

David: That's sort of the mantra of digital transformation: pretty much every company that was in the analog world in a previous life has to start thinking of itself as a technology company, so this would have applications there. So, when you were talking about the sorts of companies that would deploy this, again, I'm thinking big companies, lots of customers, something that's really robust. Can you talk about how it is that you achieve that scale? Is it a clusterable solution? How is it delivered? And how do you scale it up? Some companies might start with a thousand people or a thousand processes or a thousand things coming in to authenticate, and then years later it could be millions. How do you scale up to match that sort of growth?

Travis: Definitely. So it's a software-delivered product. What I mean by that is that we license the software to our customers and they deploy it. And they can deploy it on computers under their desk, or they can deploy it in large-scale cloud infrastructures. So, part of the way of achieving that scale is to have that kind of infrastructure that is elastically scalable and can be run in multiple sites. A lot of that starts with the architecture: being able to cluster it, like you're saying, and not having to have a bunch of state that is synchronized across that cluster.

Travis: So every node within a cluster is independent of all the others, and that helps them achieve a higher load. Also, the nodes themselves are written in a lock-free way. What I mean by that is that every thread handling a request doesn't need any data from any other request. This allows each of the processes within a node to service the entire request without fighting over data from other threads.

Travis: There are also other things common in web applications, like the use of a CDN, so a lot of the static resources can be deployed further out on the edge. And then there's being able to split up a cluster, so that maybe this cluster over here in Stockholm is not only targeting the Stockholm market, but is only servicing endpoints that do authentication using Swedish BankID, whereas the one in San Francisco is using username and password and OTPs that are sent over a telephone. That separate configuration of different login methods also helps with achieving greater scale, while still being able to manage the configuration from a single place. So, there are different things like that.

David: So, again, that speaks to the level of configurability and customizability given whatever the circumstances are. A lot of organizations, though, have existing API management solutions in place, and those API management solutions, as you can imagine, do way more than just handle this piece of the puzzle. So, in those situations, is the expectation that they will deactivate this part of the API management solution and plug in yours? Does your solution work with those other solutions? How does that go?

Travis: Sure. So, first of all, what is API management? If we break that apart, I'd say it's kind of three big things. One is a developer portal, where you can let third parties log into a web application and create an application that is going to call certain APIs. There's a whole process around promoting those to production and whatnot. So there's—

David: And self-service capability. How do you get your—

Travis: Exactly. So sign-up, keys, all that stuff. And then the other part is the API gateway. Once you actually get an application and you get a token, you're going to call the API, and that's where it's checked that you're authorized to call these APIs, both the client and the end user. But then there's the other part, which is issuing the tokens: actually figuring out who the user is, logging them in, all these different things. So the idea is that Curity would become that token service and that login service, and it would do that part, whereas the API management product would still be the developer portal and it would still be the API gateway. And the reason that you might want to deactivate those functions of your API management system is that the OAuth and OpenID Connect implementation in Curity's token service is far more sophisticated than the one in the gateway product, in the API management system.

Travis: And it is more sophisticated because that's all we're doing. That's our entire business. We don't provide a gateway; we work with everyone else's gateway. We don't provide a developer portal; we work with everyone else's developer portal. So, in that way, our token service is very, very sophisticated compared to those who are doing, or trying to do, a lot more. The other thing is, it's very easy to validate a token. A gateway has a pretty simple job, relatively speaking: I get a token in, and I'm going to validate it by either calling the one who issued the token, or performing some digital signature check, maybe doing some decryption. All pretty fast, all pretty straightforward.
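
Editor's note: To illustrate the "pretty simple job" Spencer describes, here is a minimal Python sketch of a gateway checking the signature on a JWT-style token. For brevity it uses a symmetric HMAC-SHA256 key; real deployments typically use asymmetric signatures (e.g., RS256) and a vetted JWT library, and would also check expiry, issuer, and audience claims. The key and claims here are made up.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(header: dict, payload: dict, key: bytes) -> str:
    """The token service's side: mint an HS256-signed, JWT-style token."""
    signing_input = b64url(json.dumps(header).encode()) + b"." + b64url(json.dumps(payload).encode())
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(sig)).decode()

def validate(token: str, key: bytes) -> dict:
    """The gateway's side: recompute the signature, compare in constant
    time, and only then trust the claims inside the token."""
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(key, signing_input.encode(), hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

key = b"shared-secret"  # hypothetical; asymmetric keys avoid sharing secrets
token = sign({"alg": "HS256", "typ": "JWT"}, {"sub": "david"}, key)
claims = validate(token, key)
print(claims["sub"])
```

The asymmetry Spencer points to is visible here: `validate` is a few fast, stateless operations, while the issuing side (everything that happens before `sign` can run, i.e., authenticating the user) is where the complexity lives.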

Travis: Whereas with the process of issuing a token, the prerequisite is that I have to log the user in to figure out who I'm issuing the token to, and both of those are orders of magnitude more complicated. So, do that in a purpose-built product where that's all it does: it can integrate with all these different databases, all these different web services, get tokens from all these different places, log in in all of these different ways, and then connect into an application via an open standard, and every part of that standard, not just the basic parts of it. Then, working with any gateway allows customers to provide their API and get insights into their API using the API management product, which is what they bought it for, but still do these complicated login and token issuance processes in a product that is purpose-built for that.

Travis: So, it's easy to manage, easy to operate, easy to control. Instead of having all of that complexity in a product that's maybe not ideally suited for it, it can be in Curity, which is built just for that.

David: Right. Another thing I was just thinking about as you were talking through that: a lot of big enterprises typically have multiple solutions for the same problem. What I mean by that is, it's not uncommon for some big enterprise to have one API management solution over here and another one over there. And so it sort of dawned on me as you were talking that because you are independent of those solutions, this actually allows the organization to standardize. Instead of having to rely on the functionality within those two separate API management solutions and kind of dividing and conquering between all the users and everything, you have one solution to handle this problem. It plugs into both of those, and, therefore, you're centralizing this component of your API strategy. Technically speaking, that's pretty important, because this is the security component, and if there's anything that has to be rock-solid and standard across the entire enterprise, it's got to be the security. Would you agree with that?

Travis: Definitely, definitely. It's the old Unix philosophy of do one thing and do it really well. So instead of trying to do too much and doing it really badly, all we're trying to do is log users in and issue tokens, and then leave the gateways to do their job, which they do really well: validating the token, which is also very important for security, right? If we do all of this great login, double factor, triple factor, all this stuff, but then the token is not validated, it's all for nothing. So the gateway plays a role in that. But then as you get deeper and deeper into the network, into the microservice mesh, different things start to come into play and different kinds of components: micro gateways, other sidecar solutions.

Travis: And there, what we wanted to be able to do is make sure that, okay, as you get deeper into the network and then that mesh of microservices, we still know at the beginning who you were and not having to believe the caller to say like, "Okay, it was Travis. It was David, trust me, we're friends here." But to allow that token to flow as you go deeper into the end of the mesh, that's very, very important for providing the security. And so each of these different parts of this deployment, whether it's the token server, the login service, the gateway, those micro gateways, they're going to play a role in providing that security. And we're just trying to do our part in that.

David: Earlier, you mentioned how it's delivered. It's software, so you're essentially installing it either on-premises or, if you're installing it off-premises, in your own sort of private cloud, like something on Amazon or in a hosting situation. But you also mentioned that it's highly programmable. And so, I'm assuming you're hinting at a very DevOps-oriented approach here, where a lot of this configuration you're talking about can, for example, be implemented on the fly, which makes it particularly suitable to continuous integration scenarios. So talk a little bit about the CI/CD component. I'm assuming that works with things like Kubernetes and Docker, things that DevOps people really love.

Travis: Yeah, definitely. So, when I say programmable, I mean programmable in the software-defined networking, SDN, sort of sense, which in essence is just configuration: being able to make configuration changes and apply those on the spot. To help with DevOps, first of all, it's standalone, like we talked about; it has no dependencies except the operating system. We deliver it as a tarball, as a Docker container, as a Helm chart. So it's very easy to use in all sorts of different scenarios. And this is important for us because the cloud infrastructure that's being used today might not be the same five or 10 years from now. It's important that we allow people to use the infrastructure that will work for them today and in the future, and that's definitely going to evolve and change.

Travis: And so, we deliver this in a way that will help them operationalize it in whatever environment they're in. And we do that in a few ways. I mean, when you buy a home router, right, it has configuration in it to start with, but when you log into the website, it has—

David: Not that anybody ever configures it, but—

Travis: No. No. But maybe you change some basic things about it. But it has some initial configuration. So every node in a runtime can always be deployed with initial configuration that our customers have created. And this is very easy to create just in the Web UI. You make some configuration changes, download a big piece of XML, and stick it into your Docker container, or overlay it on top of our Docker container, and now you have your initial configuration. But then, when you're in production or pre-production, maybe you need to make some changes. Well, sure, there's the Web UI, which is great, but everything that you can do in our UI, you can do with a REST API. And we actually—

David: So, there's an API to this. Yeah.

Travis: Yeah. And we never had a UI for the first two years, so we were API first. And the important thing about our API is that it isn't one that we cooked up. It isn't one that we invented. Managing configuration isn't a new problem. So we went out there and looked. We have an adage that says, "If you have a problem, find the RFC that says how to solve it." And we found an RFC called RESTCONF, which says, "When doing configuration as a REST API, this is how you should do it." And we just implemented that standard.
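
Editor's note: RESTCONF (RFC 8040) exposes configuration as resources under a `/restconf/data` path, manipulated with ordinary HTTP verbs and a YANG-derived JSON media type. The Python sketch below builds, without sending, a RESTCONF-style PATCH request to show the shape of such a call. The admin host, path prefix, configuration resource, and setting names are all invented for illustration and will differ in any real deployment.

```python
import json
import urllib.request

# Hypothetical admin endpoint; RESTCONF mounts configuration under /restconf/data.
BASE = "https://admin.idsrv.example.com/admin/api/restconf/data"

def patch_config(resource_path: str, body: dict) -> urllib.request.Request:
    """Build (but do not send) a RESTCONF-style PATCH that merges a
    configuration change into the named resource."""
    return urllib.request.Request(
        url=f"{BASE}/{resource_path}",
        data=json.dumps(body).encode(),
        method="PATCH",
        headers={"Content-Type": "application/yang-data+json"},
    )

# Example: a made-up tweak to a hypothetical profile setting.
req = patch_config("profiles/profile=oauth", {"settings": {"token-ttl": 300}})
print(req.get_method(), req.full_url)
```

In practice such a request would be sent by a CI/CD job (for example with `urllib.request.urlopen(req)` plus authentication), which is what makes the "everything the UI does, the API does" property useful for automation.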

Travis: So, all of the API is very easy to use. And everywhere in the UI you can just look and see, like click the button and it shows you the UI builder or the API builder and then you know how to do a PUT or a PATCH or a GET on that configuration. And then you can start to put that into your scripts and whatnot. So very, very easy to make changes if you needed to in production or pre-production using the API. And then another part on the DevOps that's very important is scripting. So being able to script everything. So, our entire command line interface, or CLI, is based on Juniper style command-line interface. So if you've ever used a router, you know exactly how to use this. Got tab-completion history, the whole thing.

Travis: So it's very, very easy to use. Plus it's a command interpreter, so you can write scripts that use our shell within a shebang of a shell script and then we'll just run that and interpret those commands. So, very easy to automate that way. Some other important things on DevOps that are important is being able to get configuration from one environment into the next one. So pre-production into production in a controlled and known manner. So, the entire API just make one GET request and you got all of the configuration. But in that you're going to have then pre-production specific information. So in the configuration, you can define placeholders and macros that would be replaced, the values would be replaced with environment variables on startup, so that initial configuration that I talked about, it could just, okay, load that, replace it with some macros that are set with environment variables that might come from Kubernetes where you've set that, or in your Docker file or whatnot.

Travis: And then those will all be replaced with values then of the production environment. So it makes it very easy to go in a controlled manner from one to another. So between scripting, REST API, those sort of things with the variables in the configuration makes it quite easy to operationalize this. Being able to—

David: Yeah, very programmable, it sounds like. It definitely sounds like you married the old world of operating routers to the idea of DevOps and came up with something that's both robust and highly programmable. The name of the company is Curity.io. Is the name of the solution the same, or is there a name for the solution?

Travis: So we call it the Curity Identity Server. Everyone just calls it Curity.

David: And they can go to Curity.io, the website, to find all the information there is about it?

Travis: Yep, definitely. You can find the docs there, you can sign up for a trial version, download it, or do a Docker pull; it has the instructions there on how to do that. You can find all the information about the features and aspects of the product. It's all there.

David: Terrific. Well, Travis, I want to thank you very much for joining us here today on ProgrammableWeb's Developers Rock Podcast.

Travis: David, great to be with you.

David: Yeah. Okay. Well, we have been speaking with Travis Spencer, the CEO and co-founder of Stockholm-based Curity.io. If you're interested in finding other podcasts from ProgrammableWeb, you can go to our YouTube channel at www.youtube.com/programmableweb. You can also find most of these podcasts published on ProgrammableWeb itself with a full transcript of everything that was said. So thank you very much for joining us, and we'll see you at the next podcast.

Be sure to read the next API Design article: US Court of Appeals Irreparably Damages API Economy