The Role of Middleware in the Cloud Native Ecosystem

Leigh van der Veen
Chief Technical Writer
Feb 15, 2024 | 6 mins read

A significant measure of a software application’s success is its ability to communicate, both internally among its own components and externally with parties such as users and other applications. It is logical to assume that if communication between the application’s components—or microservices in the cloud native context—is not optimal, the application cannot perform its intended tasks without errors or bottlenecks. Moreover, seamless communication between the application and external parties is equally critical for the application’s integration into a broader technology ecosystem.

This article dives into “all things middleware” in the cloud native ecosystem (and the Fiorano context), beginning with a description of cloud native middleware.

What is Cloud Native Middleware?

The Forbes article titled “What’s Happening to Middleware in the Cloud Native Era?” describes middleware as follows:

Middleware includes software components, tools, and services that “sit between system and application software, helping developers create abstractions, hide complexity, and build applications rapidly.”

Middleware first became topical in the 1990s, when Service-Oriented Architecture (SOA) became the dominant architecture style. In that context, the middleware was the Enterprise Service Bus (ESB): the communication channels and integration capabilities between disparate software applications, or between modules of the same application. In essence, the SOA framework focused on creating a loosely coupled, flexible, and standardized way for individual components to communicate.

Fast forward to today: as the requirement for increased agility and scalability became apparent, SOA evolved into the microservices-based architecture. Defining microservices in depth is beyond the scope of this article, but let’s take a brief look for the sake of completeness.

TechTarget describes a microservices architecture as a design pattern prescribing how enterprise applications are constructed from the bottom up, out of small, independent units of logic, each packaged as a microservice.

Note: This architecture forms the foundation for cloud native applications.

These microservices are loosely coupled, so the traditional ESB middleware paradigm alone is no longer enough to connect them. Cloud native middleware is an evolution of the ESB architecture and typically takes the form of service meshes, RESTful APIs, message brokers, or event streams (a minimal sketch of the message broker style follows the note below).

Note: Traditional ESB-based architectures are tightly coupled, while microservices-based architectures are loosely coupled. Therefore, cloud native middleware must be able to support loosely coupled architectures.
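
To make the contrast concrete, here is a minimal, illustrative sketch of the publish/subscribe style used by message brokers and event streams. The topic name and services are hypothetical, and a real deployment would use a production broker rather than this in-memory stand-in:

```python
from collections import defaultdict
from typing import Callable

class MessageBroker:
    """A minimal in-memory message broker (illustrative only)."""

    def __init__(self) -> None:
        # topic name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Publishers and subscribers share only a topic name, never a
        # direct reference to each other -- this is the loose coupling
        # that cloud native middleware provides.
        for handler in self._subscribers[topic]:
            handler(event)

broker = MessageBroker()
# Hypothetical services: billing reacts to appointment events without
# the scheduling service knowing billing exists.
broker.subscribe("appointment.created", lambda e: print("billing saw:", e))
broker.publish("appointment.created", {"patient_id": 42, "provider": "Dr. Lee"})
```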

The Challenges Inherent in Modern-Day Enterprise Operations

Technology is not what drives software applications. Business requirements—and, by extension, user requirements—drive the development and deployment of new software applications as well as the infrastructure architectures required to run them. Technology is merely an enabler, giving enterprises the capability to remain competitive and relevant, improve their operational efficiency, realize their missions, and increase their bottom lines over time.

Statistics from McKinsey indicate that the adoption of cloud computing has been increasing rapidly, with cloud-specific spending expected to grow at more than six times the rate of general IT spending through 2020. This adoption has enabled enterprises to automate and standardize IT processes, leading to significant cost reductions of between 30% and 40% and improved service quality.

However, many organizations struggle with the principles and practice of leveraging these ever-evolving technologies together with their existing IT assets to address today’s business challenges.

Note: This balancing act is mandatory for businesses that want to stay competitive and address the dynamic demands of modern market conditions.

Let’s expand on several of these challenges before we dive into a solution:

  • Massive Scale – Most applications must be able to handle an enormous increase in users and data at short notice, which means they must be able to scale up quickly enough to absorb that growth.
  • Dynamic Capabilities – User requirements evolve rapidly, and application functionality is expected to evolve with them. Organizations therefore need to build applications that are flexible and dynamic.
  • Global Scope – The move to the online space during the COVID-19 pandemic set the scene for global customers and employees, and this trend continues today. Applications must therefore be accessible and usable from anywhere in the world.
  • Hybrid Infrastructure – As described above, enterprise organizations often have a combination of legacy on-premises infrastructure and any number of public and private clouds, as well as old and new technologies, which they must leverage to drive business growth in a timely and cost-effective manner.
  • Real-Time Data Processing and Interactivity – Customers and employees expect real-time data and interactivity from the application, with minimal latency and lag.
  • Data Sovereignty and Regulatory Compliance – The mandate for regulatory compliance drives the need for data sovereignty. In other words, organizations are bound by geo-locational rules and regulations regarding data storage and usage.
  • Digital Transformation – Organizations are continually transforming, and digitizing, their business processes, incorporating digital technologies into all facets of their operations, including business processes, culture, and customer experiences, and aligning them with ever-changing business and market requirements.

Note: Not every organization faces all of these challenges; some deal with only a subset. However, almost every organization will find at least one of them relevant.

Cloud Native & The Global Hybrid Multicloud Application (GHMA)

First, what is a Global Hybrid Multicloud Application?

At Fiorano, we describe the GHMA concept as a new type of application with the following characteristics.

Note: For a GHMA to live up to its description and succeed, it must solve the challenges described above.

  • Cloud Native – Cloud native applications typically comprise multiple containerized microservices orchestrated by Kubernetes – today's most widely used container orchestration platform.
  • Geographically Distributed – Modern enterprise applications—GHMAs—must run in many different geo-locations.
  • Hybrid – GHMAs must leverage a combination of environments like on-premises servers as well as private and public clouds. Moreover, GHMAs must also be able to combine different technologies, such as legacy assets owned by an organization.
  • Asynchronous Communication – Because of the GHMA’s fundamental architecture and the requirements described here, GHMAs must communicate asynchronously as events occur (see the sketch after this list).
  • Highly Available – Not only must GHMAs be resilient, but they must also be highly available, with Service Level Agreement (SLA) availability targets of 99.99%.
  • Business Process-Centric – As highlighted above, technology is not a primary business driver; instead, business processes drive technology. Therefore, a successful GHMA must be undergirded by business processes, not technology.
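
As a brief illustration of the asynchronous communication characteristic, the following sketch uses Python’s asyncio; the service names and event payloads are hypothetical. The producer emits an event and moves on without blocking on the consumer:

```python
import asyncio

async def records_service(events: asyncio.Queue) -> None:
    # Emits an event when a patient record is updated, then moves on;
    # it never blocks waiting for downstream consumers.
    await events.put({"type": "record.updated", "patient_id": 7})
    print("records service: event emitted, continuing other work")

async def audit_service(events: asyncio.Queue) -> None:
    # Reacts to events as they occur.
    event = await events.get()
    print("audit service: logged", event)

async def main() -> None:
    events: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(records_service(events), audit_service(events))

asyncio.run(main())
```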

It is important to note that Kubernetes is the de facto container orchestration platform and is integral to the cloud native application architecture. However, Kubernetes does not natively handle hybrid, multicloud environments (a combination of on-premises and cloud servers). It excels at orchestrating containerized microservices within a single cluster, whether in the cloud or on-premises, but it is far less suited to orchestrating microservices spread across multiple clouds.

Therefore, to solve this challenge (the need to leverage a global, hybrid server architecture), we have developed a layer that sits on top of a peer-to-peer server architecture in which each peer (or node) has its own Kubernetes-orchestrated set of microservices. This layer abstracts the networking (the middleware, in this article’s context) and the integration details, ensuring that all peers can communicate with each other.
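
As a purely conceptual sketch—not Fiorano’s actual implementation—the following illustrates the idea behind such an abstraction layer: callers address peers by name, and the layer hides where each peer runs. All class and peer names here are hypothetical:

```python
class Peer:
    """One node in the network; names and locations are hypothetical."""

    def __init__(self, name: str, location: str) -> None:
        self.name = name          # e.g. "eu-west-k8s"
        self.location = location  # on-premises, public cloud, ...
        self.inbox: list = []

class PeerNetwork:
    """The abstraction layer: callers address peers by name and never
    deal with the networking between locations."""

    def __init__(self) -> None:
        self.peers = {}

    def join(self, peer: Peer) -> None:
        self.peers[peer.name] = peer

    def send(self, target: str, message: dict) -> None:
        # In a real system this would route across clouds and data
        # centers; here it simply delivers to the target peer's inbox.
        self.peers[target].inbox.append(message)

network = PeerNetwork()
network.join(Peer("eu-west-k8s", "on-premises"))
network.join(Peer("us-east-k8s", "public cloud"))
network.send("us-east-k8s", {"event": "record.updated"})
print(network.peers["us-east-k8s"].inbox)
```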

Middleware and the GHMA: An Example

Let’s focus on one element of the overarching GHMA architecture—the middleware—and how it facilitates communication between the individual business functions. Another way of describing middleware’s role in this context is as the glue that binds the various components together so that they can communicate with each other.

For instance, imagine a healthcare application—developed as a GHMA—that provides a platform for healthcare providers, patients, and insurers to interact and share health-related information and includes the following business functions:

  • Electronic Health Records Management – healthcare providers can view, update, and manage patient healthcare records, including medical history, diagnoses, treatments, and lab results.
  • Appointment Scheduling – patients can schedule appointments with healthcare providers.
  • Billing and Claims Processing – handles invoices, billing, and insurance claims.
  • Pharmacy Integration – patients can interact with pharmacies and request medication refills.
  • AI-Based Chatbots – provide 24/7 support, guiding patients through post-procedure and post-visit care.
  • Health Monitoring – collects data from wearable devices, such as fitness trackers, to monitor heart rate, blood pressure, and other vitals.

The application/platform in this example is developed using a microservices-based architecture in which each service is packaged in a container and deployed and orchestrated with Kubernetes. This promotes modularity and agility, as well as the ability for individual services to scale up and down as the application’s workload increases or decreases.

In this scenario, we use a service mesh with a sidecar proxy pattern to connect the microservices, providing a robust service-to-service communication mechanism. Not only does a service mesh facilitate communication between microservices, but it also acts as a security and observability mechanism. It abstracts the complexity of these tasks away from the application’s business logic, making it easier for developers to focus on developing the application’s functionality instead.

The sidecar pattern is deployed as follows:

Each microservice is deployed with an associated “sidecar” container that handles all communication-related responsibilities, such as network security, service discovery, and load balancing. This sidecar executes alongside its associated microservice container.

Note: This pattern is known as a “sidecar proxy” because it comes alongside the microservice it belongs to in the same way a motorcycle sidecar attaches to the side of the motorcycle, extending its capacity.
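
For intuition, here is a deliberately simplified, hypothetical sketch of the interception idea in Python. In a real service mesh the sidecar is a full-featured proxy (such as Envoy) injected by the platform rather than hand-written code, and the port numbers below are assumptions:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_PORT = 8080     # hypothetical: the microservice container's port
PROXY_PORT = 15001  # hypothetical: the sidecar listens here instead

class SidecarProxy(BaseHTTPRequestHandler):
    # Intercepts every inbound request, applies a cross-cutting concern
    # (here, just logging), then forwards it to the local app container.
    def do_GET(self) -> None:
        print(f"[sidecar] routing {self.path} to app on :{APP_PORT}")
        with urllib.request.urlopen(f"http://localhost:{APP_PORT}{self.path}") as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # All traffic to the service goes through the proxy port, never
    # directly to the application.
    HTTPServer(("localhost", PROXY_PORT), SidecarProxy).serve_forever()
```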

As described above, the sidecar containers in the service mesh handle all communication between microservices by intercepting and routing traffic between services. This traffic is triggered asynchronously by events such as creating a new patient record, scheduling an appointment with a healthcare provider, or submitting an invoice to the patient’s health insurance provider. Moreover, the sidecars add functionality like circuit breakers, retries, and timeouts, ensuring that communication remains resilient and reliable as the number of microservices scales up and down.
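
To illustrate what the mesh provides, here is a minimal sketch of a circuit breaker combined with a simple retry loop. In a real deployment these policies are configured declaratively in the mesh rather than hand-written, and the names and thresholds below are illustrative assumptions:

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: after `max_failures` consecutive
    errors, the circuit opens and calls fail fast until a cool-down
    period elapses (thresholds here are arbitrary)."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0) -> None:
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result

def call_with_retries(breaker: CircuitBreaker, func, attempts: int = 3, delay: float = 0.5):
    # A simple retry loop layered over the breaker; a timeout would
    # normally bound each individual attempt as well.
    for attempt in range(attempts):
        try:
            return breaker.call(func)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```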

Equally importantly, the service mesh includes security features that encrypt traffic between microservices, enforce access control policies, and provide observability into communication patterns, helping the application comply with data security regulations in the healthcare sector.

The sidecar proxies also implement load balancing, distributing traffic evenly among microservice instances and triggering auto-scaling when the load becomes too great for the number of instances deployed at any given moment. The converse is also true: instances are scaled down when the load drops, and the remaining traffic is balanced among the remaining containers.
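
Below is a minimal sketch of the round-robin rotation at the heart of this behavior; real sidecar proxies also track instance health and coordinate with the platform’s autoscaler. The class name and instance addresses are hypothetical:

```python
import itertools

class RoundRobinBalancer:
    """Rotates traffic evenly across the currently deployed instances."""

    def __init__(self, instances: list) -> None:
        self.instances = instances
        self._cycle = itertools.cycle(instances)

    def next_instance(self) -> str:
        return next(self._cycle)

    def rescale(self, instances: list) -> None:
        # When the autoscaler adds or removes instances, rebuild the
        # rotation so traffic is rebalanced across what remains.
        self.instances = instances
        self._cycle = itertools.cycle(instances)

balancer = RoundRobinBalancer(["billing-1:8080", "billing-2:8080"])
print([balancer.next_instance() for _ in range(4)])  # alternates evenly
balancer.rescale(["billing-1:8080"])  # scale-down: one instance remains
print(balancer.next_instance())
```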

In Conclusion…

Middleware in the cloud native ecosystem plays a fundamental role in how GHMAs communicate. While it builds on ESB foundations, GHMA middleware is a communication layer located on top of multiple nodes, or peers, abstracting away the networking and integration elements. Therefore, in practice, when adopting a GHMA for your enterprise software, you don’t need to configure a Kubernetes service mesh yourself.

As explained above, the sidecar proxies and the service mesh networking between containerized microservices are all managed by the GHMA architecture: the layer that sits on top of the peer-to-peer framework, in which each peer comprises a Kubernetes installation orchestrating part of the global hybrid multicloud application.

Lastly, utilizing this application architecture results in an application that meets the requirements of a GHMA, including the requirements not discussed here: one that is cloud native, highly scalable, highly available, agile, communicates asynchronously as events occur, and is business process-centric. In the healthcare example, it creates a resilient and secure application, improving communication between individual business functions while ensuring data privacy and compliance.


