With the recent announcement of the OpenAPI Specification v3, it’s the right moment to pause and think about the benefits of describing your Web API with an API contract. An open, freely accessible, computer-friendly format opens up important perspectives around tooling, compatibility, and team collaboration, and fosters a fruitful ecosystem.
The benefits of using an OpenAPI Specification (OAS) to describe your API
In a nutshell, the API contract is the source of truth. Whether you’re the one implementing the API backend, or you’re the consumer calling the API, there’s this central contract that each party can rely on to know what the API should look like: which endpoints to expect, what payloads will be exchanged, or which status codes are used.
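To make the idea concrete, here is a minimal, purely illustrative contract in Swagger/OpenAPI 2.0 YAML (all names are made up for the example), showing endpoints, payloads, and status codes gathered in one place:

```yaml
swagger: "2.0"
info:
  title: Pet API            # illustrative API, not a real service
  version: "1.0.0"
paths:
  /pets/{petId}:
    get:
      summary: Fetch a single pet
      parameters:
        - name: petId
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: The pet was found
          schema:
            $ref: "#/definitions/Pet"
        "404":
          description: No pet with that id
definitions:
  Pet:
    type: object
    properties:
      id:
        type: integer
      name:
        type: string
```

Both the backend team and the consumers can read this one document and agree on the same endpoints, payload shapes, and status codes.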
With a central contract, team communication and collaboration are easier: I’ve seen customers where a central architecture team defined a contract that was implemented by a third party (an outsourcing consulting company), and the API was consumed by different teams, both internally and externally. The central contract was there to facilitate the work between those teams and to ensure the contract would be fulfilled.
In addition, such a computer-friendly contract is really useful for tooling. From the contract, you can generate various useful artifacts, such as:
- static & live mocks — that consumers can use when the API is not finalized,
- test stubs — for facilitating integration tests,
- server skeletons — to get started implementing the business logic of the API with a ready-made project template,
- client SDKs — offering kits consumers can use, using various languages, to call your API more easily,
- sandbox & live playground — a visual environment for testing and calling the API, for developers to discover how the API actually works,
- an API portal with provisioning — a website offering the API reference documentation and allowing developers to get credentials to get access to the API,
- static documentation — perhaps just the API reference documentation, or a bundle of useful associated user guides, etc.
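To illustrate the static mock idea from the list above, here is a small Python sketch — not a real tool, just the principle: it reads the example payload that an OpenAPI-style description declares for an operation, and hands it back to consumers while the real backend doesn’t exist yet. The `spec` dict is a hand-inlined stand-in for a parsed YAML descriptor.

```python
# Minimal sketch of a static mock: look up the example payload that an
# OpenAPI-style description declares for a path, and return it as-is.
# The spec dict below stands in for a parsed YAML contract file.
spec = {
    "paths": {
        "/pets": {
            "get": {
                "responses": {
                    "200": {
                        "description": "A list of pets",
                        "examples": {
                            "application/json": [{"id": 1, "name": "Rex"}]
                        },
                    }
                }
            }
        }
    }
}

def mock_response(path, method="get", status="200"):
    """Return the (status code, example payload) declared in the contract."""
    operation = spec["paths"][path][method]
    response = operation["responses"][status]
    return int(status), response["examples"]["application/json"]

status, body = mock_response("/pets")
```

A real mock server would wrap this lookup behind an HTTP listener, but the core idea is the same: the contract itself supplies the canned answers.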
However, be careful with artifact generation. As soon as you start customizing what’s been generated by the tools, you run the risk of overwriting those changes the next time you re-generate the artifacts! So pay attention to how customizations can be made and integrated with those generated artifacts.
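One common way to stay on the safe side is to never edit generated files at all, and to keep your customizations in a separate, hand-written layer. A hypothetical Python sketch (`GeneratedPetsClient` stands in for tool output; the names are made up):

```python
# Hypothetical generated class: pretend this lives in a file that is
# overwritten on every re-generation, so we never edit it directly.
class GeneratedPetsClient:
    def list_pets(self):
        return [{"id": 1, "name": "Rex"}]

# Hand-written layer kept in a separate module: customizations live
# here, so re-generating the client above cannot wipe them out.
class PetsClient(GeneratedPetsClient):
    def list_pet_names(self):
        # Custom convenience method built on top of the generated call.
        return [pet["name"] for pet in self.list_pets()]

client = PetsClient()
```

The same separation works for server skeletons: regenerate the skeleton freely, and keep the business logic in classes or modules the generator never touches.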
Presentation on scaling an OpenAPI Spec-based web API
InfoQ recently released a video from the APIDays conference that took place in Paris last year. I talked about scaling an OpenAPI Spec-based web API using Cloud Endpoints, on the Google Cloud Platform.
I’ve spoken about this topic a few times, as web APIs are a topic I enjoy: at Nordic APIs, at APIDays, and at Devoxx. But it’s great to see the video online. So let me share the slide deck along with the video:
You can also have a look at the slide deck embedded below:
In my presentation and demo, I decided to use Cloud Endpoints to manage my API, and to host the business logic of my API implementation on the Google Cloud Platform. GCP (for short) provides various “compute” solutions for your projects:
- Google App Engine (Platform-as-a-Service): you deploy your code, and all the scaling is done transparently for you by the platform,
- Google Container Engine (Container-as-a-Service): it’s a Kubernetes-based container orchestrator where you deploy your apps in the form of containers,
- Google Compute Engine (Infrastructure-as-a-Service): this time, it’s full VMs, with even more control on the environment, that you deploy and scale.
In my case, I went with a containerized Ratpack application for my API, implemented in the Apache Groovy programming language (what else? :-). So I deployed my application on Container Engine.
I described my web API with an OpenAPI descriptor, and managed it with Cloud Endpoints. Cloud Endpoints is actually the underlying infrastructure Google itself uses to host all the APIs developers can use today (think Google Maps API, etc.). This architecture already serves literally hundreds of billions of requests every day… so you can assume it’s quite scalable in itself. Cloud Endpoints can manage APIs described with OpenAPI regardless of how they were implemented (it’s totally agnostic of the underlying implementation), and it handles both HTTP-based JSON web APIs and gRPC-based ones.
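For illustration, pointing a Swagger 2.0 descriptor at Cloud Endpoints mostly boils down to naming the managed service in the `host` field; the project id and API names below are placeholders, not a real deployment:

```yaml
swagger: "2.0"
info:
  title: Pet API
  version: "1.0.0"
# The Endpoints service name; "my-project" is a placeholder project id.
host: "pet-api.endpoints.my-project.cloud.goog"
paths:
  /pets:
    get:
      operationId: listPets
      responses:
        "200":
          description: A list of pets
```

The descriptor is then deployed to the service with the gcloud command-line tool, and Endpoints starts enforcing and monitoring the contract.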
There are three key aspects worth knowing about Cloud Endpoints, whether you’re using the platform for public, private, mobile, or microservice APIs:
- Cloud Endpoints takes care of security, to control access to the API, to authenticate consumers (taking advantage of API keys, Firebase auth, Auth0, JSON Web Tokens)
- Cloud Endpoints offers logging and monitoring capabilities of key API related metrics
- Cloud Endpoints is super snappy and scales nicely as already mentioned (we’ll come back to this in a minute)
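As an example of the security aspect, an API-key requirement can be declared directly in the contract with standard Swagger 2.0 security definitions — a sketch, with the parameter name chosen for the example:

```yaml
securityDefinitions:
  api_key:
    type: apiKey
    name: key        # the query parameter carrying the consumer's key
    in: query
# Apply the API-key requirement to every operation in the contract.
security:
  - api_key: []
```

Because the requirement lives in the descriptor, the proxy can enforce it without any change to the application code.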
Cloud Endpoints actually offers an open source “sidecar” container proxy. Your containerized application goes hand in hand with the Extensible Service Proxy, which actually wraps it: all calls go through that proxy before hitting your own application. Interestingly, there isn’t one single proxy; each instance of your app has its own proxy, reducing the latency between the call to the proxy and the actual code execution in your app (there’s no network hop to a somewhat distant central proxy, as the two containers live together). For the record, this proxy is based on Nginx. And the proxy container can also run elsewhere, even on your own infrastructure.
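As a sketch of that sidecar pattern, a Kubernetes pod could pair the two containers like this (image tags, ports, and flags are illustrative, not a verbatim configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pet-api
spec:
  containers:
    # The Extensible Service Proxy sidecar: all traffic hits it first.
    - name: esp
      image: gcr.io/endpoints-release/endpoints-runtime:1
      args:
        - "--http_port"
        - "8080"
        - "--backend"
        - "127.0.0.1:8081"   # same pod, so no network hop to the app
        - "--service"
        - "pet-api.endpoints.my-project.cloud.goog"
      ports:
        - containerPort: 8080
    # The actual API implementation, only reachable through the proxy.
    - name: api
      image: gcr.io/my-project/pet-api:1.0   # placeholder image
      ports:
        - containerPort: 8081
```

Since the proxy and the app share the pod, they talk over localhost, which is what keeps the per-call overhead low.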
Summary
In summary, Cloud Endpoints takes care of securing, monitoring, and scaling your Web API. Developing, deploying, and managing your API on Google Cloud Platform gives you choices: in terms of protocol, with JSON/HTTP-based APIs or gRPC; and in terms of implementation technology, as you can choose any language or framework supported by the platform’s various compute options, from PaaS to CaaS to IaaS. Last but not least, this solution is open: it’s based on open standards like OpenAPI and gRPC, and its proxy is implemented on top of Nginx.
With an open format like the OpenAPI Specification to describe APIs, all the stakeholders working with APIs can collaborate and reap the following benefits.
First of all, it’s easier for teams to work effectively together, as all the members can rely on the API description as the single source of truth for what the API should look like. That’s particularly true when one team designs the contract while another team, potentially external, implements it. There’s one source of truth describing the API, easing communication between teams.
Secondly, as a computer-friendly and well-specified format, it makes it possible to automate various tasks, like generating mocks, client libraries, server skeletons, and more. You can also use the specification to check that an implementation complies with the contract.
Lastly, with an open and freely accessible format, an ecosystem of vendors, open source projects, and developers can collaborate, preventing lock-in and allowing API developers to take advantage of tools and solutions that are compatible with one another thanks to the OpenAPI Specification.