Trilogix Cloud

API Gateway vs Service Meshing

API Gateways are often used to decouple consumers and producers of information. A service mesh can be used to decouple network management within, and between, systems that rely on API integration to move and use data. The differences and overlaps are outlined in a series of blogs, the first describing what an API platform entails.

An API platform is composed of two 'layers'. The first is the Gateway, which manages access to the underlying API services (figure above).

In essence, an API gateway is used as an abstraction layer that allows us to change the underlying APIs over time without necessarily having to update the clients consuming them. This is especially valuable where the client applications are built by developers outside of the organization, who cannot be forced to update to the latest APIs every time we decide to change them. In this instance, the API gateway preserves backwards compatibility with those client applications as the underlying APIs change over time.
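One way to picture this backwards-compatibility role is a route table that maps legacy client-facing paths onto the current upstream paths. The sketch below is illustrative only; the paths and function names are assumptions, not a real product's API.

```python
# Hypothetical route table: legacy and current client-facing paths both
# resolve to the current upstream path, so old clients keep working
# after the underlying API moves.
ROUTE_TABLE = {
    "/v1/orders": "/orders/list",   # legacy path -> current upstream path
    "/v2/orders": "/orders/list",   # current path, same upstream
}

def resolve_upstream(client_path: str) -> str:
    """Translate a client-facing path into the current upstream path."""
    try:
        return ROUTE_TABLE[client_path]
    except KeyError:
        raise ValueError(f"no route configured for {client_path}")
```

A client still calling `/v1/orders` is silently served by the same upstream as a client on `/v2/orders`; only the gateway's table changes when the backend evolves.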

API Gateways

An API Gateway is a data plane which routes and executes each API request. It is considered a 'centralised' deployment model, since all the related API services are controlled within this single data plane. A service mesh (covered later in this series) is, by contrast, a decentralised and more intrusive model, since it deploys alongside each service.

The API gateway receives a request from a client and applies security, network and identification policies to that request. These policies are centrally stored within the Gateway and can grow to a considerable number. Many firms do not properly catalogue or manage these policies, leading to issues when requests are denied or data is not transferred for some reason.
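The per-request flow can be sketched as an ordered chain of policies, where the first policy that rejects the request short-circuits processing. The policy names and request shape below are hypothetical, chosen only to show the pattern.

```python
# Each policy inspects the request and returns (allowed, reason).
# Real gateways store these centrally and apply many of them per request.

def require_api_key(request: dict):
    return "api_key" in request, "missing API key"

def allow_listed_ip(request: dict):
    return request.get("ip") in {"10.0.0.5"}, "client IP not allowed"

POLICIES = [require_api_key, allow_listed_ip]  # evaluated in order

def apply_policies(request: dict) -> dict:
    """Run the policy chain; stop at the first rejection."""
    for policy in POLICIES:
        ok, reason = policy(request)
        if not ok:
            return {"status": 403, "reason": reason}
    return {"status": 200}
```

With dozens or hundreds of such policies, an unmanaged chain is exactly why denied requests become hard to diagnose: the rejection reason is buried in whichever policy fired first.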

The control plane can be built into the API gateway or separated from it. If the number of nodes being deployed is of a manageable size, bundling both planes (data and control) in the same process is easier to manage, and updates can be propagated with existing CI/CD pipelines.

The API gateway is deployed in its own instance (its own VM, host or pod), separate from the client and from the APIs. Deployment is therefore quite simple, because the gateway is fully separated from the rest of the system and lives in its own architectural layer.

API gateways usually cover three primary API use cases, for both internal and external service connectivity, and for both north-south (traffic entering or leaving the data center) and east-west (traffic inside the data center) traffic.

Use Case 1: APIs as a Product

The first use case is about packaging the API as a product that other developers, partners or teams will consume.

The client applications can initiate requests from outside of the organization, or from outside of the scope of the product that is exposing the API they are consuming.

This use case is very common whenever different products/applications need to talk to each other, especially if they have been built by different teams.

When offering APIs as a product, an API gateway will encapsulate common requirements that govern and manage requests originating from the client to the API services — for example, AuthN/AuthZ, rate limiting, developer onboarding, monetization or client application governance. These are higher-level use cases implemented by L7 user policies that go above and beyond the management of the underlying protocol, since they govern how users will consume the API product.
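Rate limiting is a representative L7 user policy. A common implementation is a token bucket maintained per client application; the sketch below is a toy version of that idea, with illustrative parameters rather than any particular gateway's configuration.

```python
import time

class TokenBucket:
    """Toy per-client token-bucket rate limiter of the kind a gateway
    might enforce. rate_per_sec controls refill; capacity caps bursts."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)       # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one such bucket per API key or client application, rejecting requests with HTTP 429 once the bucket is empty.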

APIs exposed by an API gateway are most likely running over the HTTP protocol (e.g., REST, SOAP, GraphQL or gRPC), and the traffic can be either north-south or east-west depending on whether the client application runs inside or outside the data center.

Use Case 2: Service Connectivity

This use case centers on enforcing networking policies to connect, secure, encrypt, protect and observe the network traffic between the client and the API gateway, as well as between the API gateway and the APIs. These can be called L7 traffic policies, because they operate on the underlying network traffic as opposed to governing the user experience.

Once a request has been accepted by the API gateway, the gateway itself must then make a request to the underlying API in order to get a response (the gateway is, after all, a reverse proxy). We will need to secure that request via mutual TLS, log it, and generally protect and observe the network communication. The gateway also acts as a load balancer and will implement features such as HTTP routing, proxying requests to different versions of our APIs (in this context, it can also enable blue/green and canary deployment use cases), fault injection and so on.
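The canary-deployment capability mentioned above amounts to weighted upstream selection: the gateway sends a small fraction of traffic to the new API version. The sketch below assumes a 90/10 split and made-up upstream names, purely for illustration.

```python
import random

# Hypothetical upstreams with traffic weights summing to 1.0.
UPSTREAMS = [("api-v1", 0.9), ("api-v2-canary", 0.1)]

def pick_upstream(rng=random.random) -> str:
    """Choose an upstream by cumulative weight; rng is injectable for tests."""
    r = rng()
    cumulative = 0.0
    for name, weight in UPSTREAMS:
        cumulative += weight
        if r < cumulative:
            return name
    return UPSTREAMS[-1][0]  # guard against floating-point rounding
```

Shifting the weights gradually from 0.9/0.1 toward 0.0/1.0 promotes the canary without any client-side change, which is exactly why this sits naturally in the gateway layer.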

The underlying APIs that we are exposing through the API gateway can be built in any architecture (monolithic or microservices), since the API gateway makes no assumption as to how they are built as long as they expose a consumable interface. Most likely the APIs expose an interface consumable over HTTP (e.g., REST, SOAP, GraphQL or gRPC).


Use Case 3: Full Lifecycle API Management

Managing the APIs, their users and client applications, and their traffic at runtime are only some of the many steps involved in running a successful API strategy. The APIs will have to be created, documented, tested and mocked. Once running, the APIs will have to be monitored and observed to detect anomalies in their usage. Furthermore, when offering APIs as a product, there will have to be a portal for end users to register their applications, retrieve credentials and start consuming the APIs.

API management platforms, and effectively most APIM solutions, provide a bundled way to implement all of the above concerns in one or more products, which in turn connect to the API gateway to execute policy enforcement.

Next: Service Mesh
