Chief Technical Writer
Feb 15, 2024
A significant measure of a software application’s success is its ability to communicate, both internally and with external parties such as users and other applications. If communication between the application’s components (or microservices, in the cloud native context) is not optimal, the application cannot perform its intended tasks without errors or bottlenecks. Equally, seamless communication between the application and external parties is critical for the application’s integration into a broader technology ecosystem.
This article dives into “all things middleware” in the cloud native ecosystem (and the Fiorano context), beginning with a description of cloud native middleware.
The Forbes article titled “What’s Happening to Middleware in the Cloud Native Era?” describes middleware as follows:
Middleware includes software components, tools, and services that “sit between system and application software, helping developers create abstractions, hide complexity, and build applications rapidly.”
Middleware first became topical in the 1990s, when Service-Oriented Architecture (SOA) became the dominant architecture style. In this context, the middleware was the Enterprise Service Bus (ESB): the communication channels and integration capabilities between disparate software applications or between modules of the same application. In essence, the SOA framework focused on creating a loosely coupled, flexible, and standardized way for individual components to communicate.
Fast forward to today: SOA has evolved into the microservices-based architecture as the need for increased agility and scalability became apparent. A full definition of the microservices architecture is beyond the scope of this article, but a brief look is worthwhile for the sake of completeness.
Techtarget.com describes a microservices architecture as a design pattern prescribing how enterprise applications are constructed from the bottom up, out of small, independent units of logic, each packaged as a microservice.
Note: This architecture forms the foundation for cloud native applications.
These microservices are loosely coupled, so the traditional ESB middleware paradigm is no longer sufficient for communication between them. Cloud native middleware is an evolution of the ESB architecture and typically takes the form of service meshes, RESTful APIs, message brokers, or event streams.
Note: Traditional ESB-based architectures are tightly coupled, while microservices-based architectures are loosely coupled. Therefore, cloud native middleware must be able to support loosely coupled architectures.
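To make this loose coupling concrete, here is a minimal, illustrative Go sketch of the publish/subscribe idea behind message brokers and event streams. The Broker type and topic name are invented for this example and do not represent any particular product’s API; a production broker would also handle persistence, delivery guarantees, and network transport.

```go
package main

import "fmt"

// Broker decouples producers from consumers: a publisher only knows the
// topic name, never the subscribers. This is the loose coupling that
// cloud native middleware provides between microservices.
type Broker struct {
	subscribers map[string][]chan string
}

func NewBroker() *Broker {
	return &Broker{subscribers: make(map[string][]chan string)}
}

// Subscribe registers a new consumer channel for a topic.
func (b *Broker) Subscribe(topic string) <-chan string {
	ch := make(chan string, 8)
	b.subscribers[topic] = append(b.subscribers[topic], ch)
	return ch
}

// Publish fans a message out to every subscriber of the topic.
func (b *Broker) Publish(topic, msg string) {
	for _, ch := range b.subscribers[topic] {
		ch <- msg
	}
}

func main() {
	broker := NewBroker()
	appointments := broker.Subscribe("appointment.scheduled")

	broker.Publish("appointment.scheduled", "patient 42 booked for 09:00")
	fmt.Println(<-appointments)
}
```

The key point is that the publisher never references the subscribers directly, so either side can be replaced or scaled independently.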
Technology itself does not drive software applications. Business requirements, and by extension user requirements, drive the development and deployment of new software applications and the infrastructure architectures required to run them. Technology is merely an enabler: it allows enterprises to remain competitive and relevant, improve their operational efficiencies, realize their missions, and increase their bottom lines over time.
Statistics quoted by McKinsey indicate that the adoption of cloud computing has been increasing rapidly, with cloud-specific spending expected to grow at more than six times the rate of overall IT spending through 2020. This adoption has enabled enterprises to automate and standardize IT processes, leading to significant cost reductions of between 30% and 40% and improved service quality.
However, many organizations struggle with the principles and practice of leveraging these ever-evolving technologies together with their existing IT assets to address today’s business challenges.
Note: This is a balancing act that businesses must perform to stay competitive and address the dynamic demands of today’s market conditions.
Let’s expand on several of these challenges before we dive into a solution:
Note: Not all organizations face every single one of these challenges; some enterprises deal with only a subset. However, at least one of them is typically relevant to any given organization.
First, what is a Global Hybrid Multicloud Application?
At Fiorano, we describe the Global Hybrid Multicloud Application (GHMA) concept as a new type of application with the following characteristics.
Note: For a GHMA to live up to its description and be successful, it must solve the challenges described above.
It is important to note that Kubernetes is the de facto container orchestration platform and is integral to the cloud native application architecture. However, Kubernetes does not handle hybrid, multicloud environments, that is, combinations of on-premises and cloud infrastructure. It excels at orchestrating containerized microservices within a single cloud or on-premises environment but not at orchestrating microservices spread across multiple clouds.
Therefore, to solve this challenge (the need to leverage a global, hybrid server architecture), we have developed a layer that sits on top of a peer-to-peer server architecture, where each peer (or node) has its own Kubernetes-orchestrated set of microservices. This layer abstracts the networking (the middleware, in this article’s context) and integration details between all peers, ensuring that every peer can communicate with every other.
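As a thought experiment, the abstraction such a layer provides could be expressed as a pair of Go interfaces. This is purely a hypothetical sketch for illustration; the Peer and Mesh names and their methods are invented here and do not reflect Fiorano’s actual API.

```go
package ghma

import "context"

// Peer represents one node in the peer-to-peer topology. Each peer runs
// its own Kubernetes-orchestrated set of microservices.
type Peer interface {
	ID() string
	// Send delivers a message to a named service on this peer; the caller
	// never deals with the networking between clusters or clouds.
	Send(ctx context.Context, service string, payload []byte) error
}

// Mesh is the layer that sits on top of all peers and abstracts the
// networking and integration details between them.
type Mesh interface {
	// Route finds the peer currently hosting the given service, whether
	// it runs on-premises or in any of the connected clouds.
	Route(ctx context.Context, service string) (Peer, error)
}
```

The value of such an abstraction is that application code addresses a service by name, while the layer underneath decides which peer, cloud, or data center actually serves the request.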
Let’s focus on one element of the overarching GHMA architecture, the middleware, and how it facilitates communication between individual business functions. Another way of describing middleware’s role in this context is as the glue that binds the various components together so that they can communicate with each other.
For instance, imagine a healthcare application—developed as a GHMA—that provides a platform for healthcare providers, patients, and insurers to interact and share health-related information and includes the following business functions:
The application/platform in this example is developed using a microservices-based architecture where each service is packaged in a container and deployed and orchestrated with Kubernetes. This promotes modularity and agility, as well as the ability for individual services to scale up and down as the application’s workload increases/decreases.
In this scenario, we use a service mesh with a sidecar proxy pattern to connect the microservices, providing a robust service-to-service communication mechanism. Not only does a service mesh facilitate communication between microservices, but it also acts as a security and observability mechanism. It abstracts the complexity of these tasks away from the application’s business logic, making it easier for developers to focus on the application’s functionality instead.
The sidecar pattern is deployed as follows:
Each microservice is deployed with an associated “sidecar” container that handles all communication-related responsibilities such as network security, service discovery, and load balancing. This sidecar runs alongside its associated microservice container.
Note: This pattern is known as a “sidecar proxy” because the proxy rides alongside the microservice it belongs to, in the same way a motorcycle sidecar attaches to the side of a motorcycle, extending its capabilities.
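For illustration, the sketch below uses the Kubernetes Go API types (k8s.io/api) to construct a Pod that pairs an application container with a proxy sidecar. The image names and ports are placeholders; in practice, a mesh such as Istio or Linkerd typically injects its proxy container automatically rather than requiring you to declare it by hand.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A Pod that pairs the business microservice with a proxy sidecar.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "patient-records"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// The microservice itself, unaware of the mesh.
					Name:  "app",
					Image: "example.com/patient-records:1.0",
					Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
				},
				{
					// The sidecar proxy: intercepts all traffic in and
					// out of the pod and handles encryption, service
					// discovery, and load balancing on the app's behalf.
					Name:  "proxy",
					Image: "example.com/sidecar-proxy:1.0",
					Ports: []corev1.ContainerPort{{ContainerPort: 15001}},
				},
			},
		},
	}
	fmt.Printf("pod %q has %d containers\n", pod.Name, len(pod.Spec.Containers))
}
```

Because both containers share the pod’s network namespace, the proxy can intercept the app’s traffic without the app knowing the mesh exists.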
As described above, the sidecar containers in the service mesh handle all communication between microservices by intercepting and routing traffic between services. This traffic is triggered asynchronously by events such as creating a new patient record, scheduling an appointment with a healthcare provider, or submitting an invoice to the patient’s health insurance provider. Moreover, the sidecars add functionality like circuit breakers, retries, and timeouts, ensuring that communication remains resilient and reliable as the number of microservices scales up and down.
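A rough Go sketch of what retries and timeouts look like when written by hand follows; a sidecar proxy performs the equivalent transparently, keeping application code free of this plumbing. The URL, timeout, and retry limits are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// callWithRetry applies a per-attempt timeout plus a bounded number of
// retries with simple backoff, the resilience a sidecar proxy provides
// for free.
func callWithRetry(url string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// Each attempt gets its own deadline so one slow service cannot
		// stall the caller indefinitely.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			cancel()
			return err
		}
		resp, err := http.DefaultClient.Do(req)
		cancel()
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode < 500 {
				return nil // success, or a client error not worth retrying
			}
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		} else {
			lastErr = err
		}
		// Linear backoff before the next attempt.
		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	if err := callWithRetry("http://invoicing.internal/submit", 3); err != nil {
		fmt.Println(err)
	}
}
```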
Equally importantly, the service mesh includes security features that encrypt traffic between microservices, enforce access control policies, and provide observability into communication patterns, ensuring the application complies with data security regulations in the healthcare sector.
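To illustrate the encryption side, here is a minimal Go sketch of the mutual TLS that mesh sidecars typically enforce between services: the server presents its own certificate and rejects any caller without a certificate signed by the mesh’s certificate authority. The file names are placeholders; real meshes issue and rotate these certificates automatically.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// The service's own identity certificate (placeholder paths).
	cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
	if err != nil {
		log.Fatal(err)
	}
	// The mesh's certificate authority, used to verify callers.
	caPEM, err := os.ReadFile("mesh-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	clientCAs.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			// Reject any caller that cannot prove its identity with a
			// certificate signed by the mesh's CA.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  clientCAs,
		},
	}
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```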
The sidecar proxies also implement load balancing, distributing traffic evenly among microservice instances and triggering auto-scaling when the load grows too great for the number of instances deployed at any given moment. The converse is also true: instances are scaled down when the load decreases, and the reduced load is balanced among the remaining containers.
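A simple round-robin balancer in Go illustrates the distribution logic. The instance addresses are made up, and a real sidecar would also watch the service registry so the list grows and shrinks with auto-scaling.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// RoundRobin distributes requests evenly across the currently deployed
// microservice instances.
type RoundRobin struct {
	instances []string
	next      atomic.Uint64
}

// Pick returns the next instance in rotation; the atomic counter keeps
// the rotation correct even under concurrent requests.
func (r *RoundRobin) Pick() string {
	n := r.next.Add(1) - 1
	return r.instances[n%uint64(len(r.instances))]
}

func main() {
	lb := &RoundRobin{instances: []string{
		"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080",
	}}
	for i := 0; i < 5; i++ {
		fmt.Println("routing request to", lb.Pick())
	}
}
```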
Middleware in the cloud native ecosystem plays a fundamental role in how GHMAs communicate. While built on ESB foundations, GHMA middleware is a communication layer that sits on top of multiple nodes, or peers, abstracting away the networking and integration details. Therefore, in practice, when adopting a GHMA for your enterprise software, you don’t need to configure a Kubernetes service mesh yourself.
As described above, the sidecar proxies and the service mesh networking between containerized microservices are all managed by the GHMA architecture: the layer that sits on top of the peer-to-peer framework, where each peer comprises a Kubernetes installation orchestrating its part of the global hybrid multicloud application.
Lastly, utilizing this architecture results in an application that meets the requirements of a GHMA (including those not discussed here): one that is cloud native, highly scalable, highly available, agile, event-driven with asynchronous communication, and business process-centric. In our healthcare example, it produces a resilient and secure application, improving communication between individual business functions while ensuring data privacy and compliance.