API and REST standards have enabled easier data exchange and system connectivity. This has propelled the rise of containers and Kubernetes, in which applications are broken down into smaller services. REST APIs are one example of interface development; GraphQL and other protocols can also be used.
The main purpose of building an API platform is to program service-level contracts independent of the underlying protocols or code. This is what firms are really after when constructing an API gateway and API code base.
An API service contract is any means of exchanging data between applications without concern for the underlying layer 4 protocol, backend databases or API programming code. The service interface provides a contract between supplier and consumer: if anything changes within the processes or technologies behind that exchange, the consumer is unaffected.
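The contract idea can be sketched in code. The following is a minimal, hypothetical illustration (the `OrderService` interface and its in-memory provider are invented for this example): the consumer codes only against the contract, so the provider can be swapped for a gRPC- or SOAP-backed implementation without the consumer changing.

```python
from abc import ABC, abstractmethod

# Hypothetical contract: consumers depend only on this interface,
# never on the protocol or datastore behind it.
class OrderService(ABC):
    @abstractmethod
    def get_order(self, order_id: str) -> dict:
        """Return an order as a plain dict with 'id' and 'status' keys."""

# One possible provider. It could be replaced by any other backend
# that honors the same contract, leaving consumers untouched.
class InMemoryOrderService(OrderService):
    def __init__(self):
        self._orders = {"o-1": {"id": "o-1", "status": "shipped"}}

    def get_order(self, order_id: str) -> dict:
        return self._orders[order_id]

def consumer(svc: OrderService, order_id: str) -> str:
    # The consumer only knows the contract, not the implementation.
    return svc.get_order(order_id)["status"]
```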
There are a few options for building ‘microservice’ interfaces, including GraphQL, Kafka, gRPC and even our old friend SOAP-XML. A single GraphQL query can access existing REST APIs and provide all the data your application may need. gRPC offers performance and ease-of-development benefits, thanks to being based on cross-platform, language-agnostic protocol buffers. Kafka supports asynchronous microservice-to-microservice communication by acting as an event collector, allowing developers to replay event series or reproduce the state of the data at a given timestamp. SOAP-XML is still widely used in many systems, which are usually internally facing (even if they send data over HTTPS).
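The Kafka replay idea can be sketched without Kafka itself. This is a hypothetical, stripped-down model (the event log and `state_at` function are invented for illustration): state at any point in time is reconstructed by replaying an append-only event log up to a timestamp.

```python
# Hypothetical append-only event log, standing in for a Kafka topic.
events = [
    {"ts": 1, "key": "order-1", "status": "created"},
    {"ts": 2, "key": "order-1", "status": "paid"},
    {"ts": 3, "key": "order-1", "status": "shipped"},
]

def state_at(log, timestamp):
    """Replay events up to `timestamp` and return the latest state per key."""
    state = {}
    for event in log:
        if event["ts"] <= timestamp:
            state[event["key"]] = event["status"]
    return state
```

Because the log is never mutated, any historical state can be rebuilt on demand, which is the property the replay capability relies on.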
A key concern when building an API platform is ‘how to manage the development life cycle of the APIs?’
It does not really matter whether an API is RESTful, gRPC, SOAP-XML or GraphQL. What ultimately matters is that each service is an API contract between the producer/provider and the consumer. The consumer of a service does not care whether other parts of your organization are adopting gRPC. The consumer cannot tolerate breaking changes, because they lead to large-scale failures of the business-critical applications they have built.
A simple framework for Full-Service Lifecycle Management imitates that of DevOps: build, run and automate.
Take an Inventory first
The cycle begins with Design because spec-driven development is the best place to start. You can pick a few services to begin introducing spec-first development. Obvious candidates are any that are already leveraging OpenAPI (formerly Swagger) as a framework.
An inventory is necessary, and it is a process assessment rather than an organizational or cultural one. You can introduce spec-driven development without moving developers between departments. Changing the organization should be avoided (at least initially).
The next step is taking an inventory of the existing range of interfaces, languages and protocols in your current architecture and medium-term roadmap for connecting the services across your enterprise. Whether you are coming from a monolithic or serverless world, there is a set of services relevant for your immediate department as well as the broader ecosystem within the enterprise and the broader industry. The ideal inventory reflects mission-critical services, as well as what needs to be added and expanded for ideal operation and optimization of the enterprise at scale.
Create an inventory, define a target model, initiate a proof of concept to get there, identify the resources and tools, and educate and train the staff on the POC. Develop a high-level design leading to an end-to-end design, emphasizing an agnostic and iterative approach to building an API gateway and related APIs. Understand the dependencies, budget, time, and complexity. Standardize Agile engineering methods and tooling. Understand that lifecycle management is an iterative affair beginning with Design, but design should not be a siloed or waterfall effort.
The Build phase represents pre-production activities for a service. The first step of this phase is Design.
Design in a full-service lifecycle management context takes a spec-driven (specification-driven) approach. Spec-driven development starts with the spec and dissolves any disparity between the spec and the implementation.
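A minimal sketch of what "spec first" means in practice, assuming an OpenAPI-style document (here expressed as a Python dict, with a hypothetical `/orders/{id}` path invented for illustration): the spec exists before any implementation, and conformance can be checked against it mechanically.

```python
# Hypothetical OpenAPI-style fragment. In spec-driven development,
# this document is written and agreed upon before any code exists.
spec = {
    "paths": {
        "/orders/{id}": {
            "get": {
                "responses": {
                    "200": {"schema": {"required": ["id", "status"]}}
                }
            }
        }
    }
}

def conforms(response: dict, required: list) -> bool:
    """Check that a response body carries every field the spec requires."""
    return all(field in response for field in required)

# The required fields are read from the spec, not hard-coded in the
# implementation -- the spec stays the single source of truth.
required = spec["paths"]["/orders/{id}"]["get"]["responses"]["200"]["schema"]["required"]
```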
The advantage of this approach is a much faster cycle to mock and test a service, as well as a reduced risk of breaking changes in the future. Beyond the obvious operational benefits of this approach, spec-driven development underscores taking a design and development approach that assumes responsibility for the successful operation of services as well, since any operational challenges in production or post-production will need to flow back into the specification before the implementation can be patched.
The second step is Mock. Mocking speeds up development by providing developers with a reliable imitation of how the service would work in real life. They can build other tools and interfaces based on assumptions about the behavior and structure of the service’s responses.
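Mocking against a contract can be sketched with Python's standard `unittest.mock` library. The `shipping_label` client and its `get_order` contract below are hypothetical; the point is that consumer development proceeds against the agreed contract before the real service exists.

```python
from unittest import mock

# Hypothetical consumer code, written against a service that does not
# exist yet -- only its contract (get_order returns a dict with 'address').
def shipping_label(client, order_id):
    order = client.get_order(order_id)
    return f"Ship to: {order['address']}"

# Mock the service according to the contract so work can proceed.
fake = mock.Mock()
fake.get_order.return_value = {"address": "221B Baker St"}
```

When the real service ships, the mock is replaced by a real client with the same interface and the consumer code is unchanged.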
The third step is Test. Testing provides greater confidence and earlier warnings to development teams before going into production. By using isolated, discrete services instead of huge monoliths, it’s easier to test and find problems at a granular level and to surgically fix them one by one.
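A small illustration of testing at this granular level, assuming a hypothetical discount service: because the service is a discrete unit rather than part of a monolith, both its happy path and its failure mode can be pinned down in a few lines.

```python
import unittest

# Hypothetical discrete service function: small enough to test in
# isolation, so failures are found and fixed surgically.
def apply_discount(total_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(1000, 10), 900)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, 150)
```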
The final step of the Build phase is Deploy. After code has passed the relevant tests and reviews, this step enables release velocity by automating the decision to ship the code. Rather than following a manual process with a fixed cadence, this step supports CI/CD, meaning teams can release new application code to production in minutes rather than only on predetermined schedules. Release quality also improves thanks to the review process, tests and traffic validation defined in the specification.
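The automated ship decision can be reduced to a sketch. The gate names below are hypothetical stand-ins for whatever checks a pipeline defines; the idea is simply that the release decision is a function of recorded check results rather than a manual sign-off.

```python
# Hypothetical automated release gate: code ships only when every
# pre-defined check (tests, review, traffic validation) has passed.
def ready_to_ship(checks: dict) -> bool:
    """checks maps a gate name to its pass/fail result."""
    return all(checks.values())

pipeline = {"unit_tests": True, "contract_tests": True, "review": True}
```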
The Run phase represents all production activities for a service.
In the Manage step of this phase, an administrator has visibility over the entire span of services in production and can monitor their health in real time, as well as make changes as needed. With logic built into the endpoints of the system, workflows can be managed and automated between other key systems.
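The visibility described above can be sketched as a fleet-health summary. The service registry and probe functions here are invented placeholders for real health endpoints; the shape of the idea is one view over all services with the unhealthy ones surfaced for action.

```python
# Hypothetical service registry mapping names to health probes.
# Real probes would call each service's health endpoint.
services = {
    "orders": lambda: True,
    "billing": lambda: False,
    "catalog": lambda: True,
}

def fleet_health(registry):
    """Return every service's health plus the list needing attention."""
    status = {name: probe() for name, probe in registry.items()}
    unhealthy = sorted(name for name, ok in status.items() if not ok)
    return status, unhealthy
```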
In the Publish step, the platform is live across all relevant systems, whether on-premise or cloud-based, monolithic or serverless. The system is able to perform even above 10,000 transactions per second and to self-heal dynamically in case of unexpected outages of individual microservices. Further, the system is extensible to meet the customization needs of various business lines with specialized use cases.
The Automate phase refers to all post-production activities. In the post-production world, we shift our focus to ensuring that the live system is optimized and compliant, leveraging real-time data and machine intelligence where possible to make our job easier.
This entails using machine learning to reduce manual tasks related to documentation and alerting teams to critical service information. Pushing documentation directly into an accessible, search-friendly developer portal and automating updates reduces manual effort significantly.
Analyzing traffic patterns and detecting anomalies for business attention enables real-time threat resolution. Artificial intelligence can also be used to visualize information across all services, which is increasingly important as complexity and number of services increase.
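A deliberately simple stand-in for the anomaly detection described above, using a z-score over historical request rates (the traffic numbers and threshold are invented for illustration; production tooling would use far richer models):

```python
import statistics

# Hypothetical traffic-anomaly check: flag a request-rate sample that
# sits more than `threshold` standard deviations from the history.
def is_anomalous(history, sample, threshold=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > threshold

# Requests per second observed over recent windows (illustrative data).
traffic = [100, 102, 98, 101, 99, 100, 103, 97]
```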
The Security step of this phase involves creating policies to help ensure that the system is internally compliant with industry requirements. Role-Based Access Control (RBAC) ensures that the data being accessed aligns with authorization levels at the organization.
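An RBAC policy can be reduced to a small sketch. The roles and permission strings below are hypothetical; the essential mechanic is that every access decision is answered from a central policy mapping roles to permitted scopes.

```python
# Hypothetical RBAC policy: each role maps to the set of data scopes
# it is authorized to use, so access always matches authorization level.
POLICY = {
    "analyst": {"reports:read"},
    "admin": {"reports:read", "users:read", "users:write"},
}

def allowed(role: str, permission: str) -> bool:
    """Check a requested permission against the central policy."""
    return permission in POLICY.get(role, set())
```

Unknown roles fall through to an empty set, so the default is to deny.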
Adopting a Full Lifecycle Mindset
In pre-production, our key challenge is to design and build a service network that meets the performance requirements of the business and customer, offers maintainability and will not become irrelevant as new services and requirements emerge.
The most important link is between the “Build” and “Automate” phases – public specification documentation must be the starting and ending point for creating and updating services relied upon by hundreds or thousands of consumers. The developers creating specs still bear the responsibility to design with minimization of breaking changes in mind, so this step also bears an important link to the Manage step of the cycle.
In the Run phase, the benefit of the abstraction established in the Build phase comes to life. Instead of needing to worry about a giant, tangled system, the fruits of a successful Build phase are an orderly, transparent and well-encapsulated set of services. This abstraction is important both for the later Automate phase and for day-to-day management. To ensure resilience and operational excellence, the Run phase must be front of mind both before and after production.
The Build phase is meant to make the Run phase more manageable – all so that the Automate phase can glean valuable information from it.
Running in production gives a wealth of information about usage, health, behavior, and access. Through Automation, systems can generate insights from that information and act on your behalf. Having taken a full-lifecycle approach means that you are set up to reduce many manual tasks since your production environment can be readily visualized and analyzed with machine intelligence. You also have the flexibility to adjust the policies that apply to your environment in the Automate phase, with these changes going into effect and directly impacting what you have in production.