Microservices are an architectural style in which an application is decomposed into smaller, self-contained, independently deployable components. Each service focuses on a distinct piece of business functionality and communicates with its peers through well-defined APIs, typically over lightweight protocols such as HTTP or messaging systems. Because every service can be developed, deployed, and maintained independently, microservices improve modularity, adaptability, and scalability.
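To make the "communicate via clearly outlined APIs" idea concrete, here is a minimal sketch of one service calling another over HTTP. The order service, its endpoint, and the payload are all hypothetical, and an in-process server stands in for a separately deployed microservice.

```python
# A minimal sketch of two "services" talking over HTTP. The order
# service, route, and payload are illustrative assumptions; a real
# deployment would run them as separate processes.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderHandler(BaseHTTPRequestHandler):
    """Hypothetical 'order service' exposing one API endpoint."""
    def do_GET(self):
        body = json.dumps({"order_id": 42, "status": "shipped"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service would make this call over the network; here we use urllib.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/orders/42") as resp:
    order = json.loads(resp.read())

server.shutdown()
```

The caller knows nothing about the order service's internals, only its API contract, which is exactly what lets the two sides evolve and deploy independently.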
As we go deeper into this post, you will pick up the various lenses through which to view microservices. I will walk you through the key factors to weigh when deciding between a distributed system and a monolithic structure, so I encourage you to stay with me until the end.
Secrets Behind Microservices' Meteoric Popularity
- Agility and Faster Time-to-Market: Microservices afford businesses the agility to rapidly introduce new features and updates. Given that these services can be designed and launched independently, collaborative teams are able to operate concurrently. This accelerates development cycles and consequently diminishes the time taken to penetrate the market.
- Scalability and Handling Complex Systems: Microservices provide scalability at the individual service level. This allows organizations to scale specific services based on demand. This flexibility is particularly beneficial for large, complex systems that require different components to scale independently.
- Support for Diverse Technology Stacks: Microservices architecture allows organizations to use diverse technologies for different services. Teams can select the most appropriate technology for each service's requirements, which fosters innovation and lets them leverage the strengths of different tools and frameworks.
- Enabling Team Autonomy and Parallel Development: Microservices empower cross-functional teams to work autonomously on specific services. This autonomy enhances collaboration, decision-making, and ownership, leading to more efficient development processes and better product outcomes.
- Cloud-Native and Containerization: Technologies like Docker and Kubernetes drive the trends towards microservices. They enable breaking down applications into smaller, containerized services, facilitating easier deployment, scaling, and management in cloud environments.
Key Factors to Weigh When Considering Microservices
Let’s talk about the set of considerations for the decision-making process of whether to employ a monolithic architecture or a distributed system.
Lines of Code
Lines of Code (LoC) is a commonly referenced parameter when breaking a monolithic structure into microservices, or even before starting a new service. The LoC metric in isolation doesn't definitively categorize a service as a microservice, but it deserves a prominent place on the scorecard during the decision-making process.
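If you want a quick read on the metric, it is easy to compute yourself. Below is a rough sketch that counts non-blank, non-comment lines under a directory; the file suffix and the `#`-comment convention are illustrative assumptions for a Python codebase.

```python
# A rough sketch of measuring the LoC metric for a service: count
# non-blank, non-comment lines in a tree of source files. The suffix
# and comment convention are assumptions for a Python codebase.
from pathlib import Path

def count_loc(root: str, suffix: str = ".py") -> int:
    """Count non-blank, non-comment lines under `root`."""
    total = 0
    for path in Path(root).rglob(f"*{suffix}"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for line in text.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                total += 1
    return total
```

Treat the number as one input among many, for the reasons discussed in this section.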
It's crucial to acknowledge that not all services need to become microservices, yet we often proceed with such a transformation anyway. Why? The most common justification is that "this service is likely to scale up in the future." That rationale resonates with me: a service is separated from a monolith, or a new microservice is created, with the potential growth of the service in mind, including growth in Lines of Code (LoC). In an ideal scenario, every application would be self-sufficient. To illustrate, here is a hypothetical graph of the relationship between Lines of Code and pricing.
While this estimation may appear arbitrary, it holds in most standard scenarios. As the graph depicts, even in their most basic implementations, microservices carry a higher cost, because of the baseline resources they require in human capital, oversight, and compute. It is important to underscore that cost is paramount when the aim is to scale a business.
Undoubtedly, a codebase will evolve over time. You may eventually decide to extract it from the monolithic structure and launch it as an independent service, but that step might not be necessary immediately. A well-constructed, thoughtfully architected codebase can always be converted to microservices later. What is crucial is to write loosely coupled, distribution-ready code from the very beginning; that prepares you for whatever comes next.
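One concrete way to stay "loosely coupled from day one" is to have callers depend on a small interface rather than a concrete module, so the in-process implementation can later be swapped for a remote client without touching call sites. The billing service and its methods below are hypothetical names for illustration.

```python
# A sketch of loose coupling inside a monolith: callers depend on a
# small interface, so the local implementation can later be replaced by
# a remote microservice client. All names here are illustrative.
from typing import Protocol

class BillingService(Protocol):
    def charge(self, user_id: int, cents: int) -> bool: ...

class LocalBilling:
    """Runs in-process inside the monolith today."""
    def charge(self, user_id: int, cents: int) -> bool:
        return cents > 0  # stand-in for real billing logic

class RemoteBilling:
    """Tomorrow's microservice client; same interface, HTTP underneath."""
    def __init__(self, base_url: str):
        self.base_url = base_url
    def charge(self, user_id: int, cents: int) -> bool:
        raise NotImplementedError("would POST to the billing service")

def checkout(billing: BillingService, user_id: int) -> str:
    # Caller code is identical whether billing is local or remote.
    return "paid" if billing.charge(user_id, 499) else "failed"
```

Extracting billing into its own service then becomes a deployment decision, not a rewrite.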
Simply by observing the diagram, one can discern when it’s appropriate to disassemble a monolith. However, this decision is not solely reliant on the diagram. It’s also crucial to consider a variety of other factors that significantly influence this decision.
Latency and Resource Distribution
Consider the scenario of employing an individual who is single-handedly responsible for a chunk of tasks including front-end development, back-end operations, DevOps, security measures, infrastructure architecture, as well as product and sales. While it may initially seem efficient, the practicality of such an arrangement is questionable. After all, even the most competent individual, given their human nature, would inevitably introduce delays in these processes due to the sheer volume and diversity of responsibilities.
Similarly, a monolith, because it handles every process itself, can inadvertently add latency to other functions within the same service. This is a fundamental, performance-oriented reason why many teams decompose a monolith into microservices. Let's look at the following example of a monolithic structure.
The APIs, specifically those associated with Bulk Export and Upload, push the overall service latency to 180 seconds. This hinders the performance of other APIs, which inherently possess the capacity to respond at a significantly faster pace.
Now, let’s focus on analyzing the latency distribution associated with breaking down the above monolith into a microservice structure.
As we can observe, our improvements are not merely confined to enhanced service level latency. We’ve also witnessed a reduction in computational demands due to the strategic segregation of slower services from faster ones. This tactic proves to be highly beneficial and forms an optimal factor to consider when decomposing into microservices.
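The effect is easy to simulate. Below is a toy model of the argument, assuming fast and slow endpoints share one small worker pool in the monolith; the sleep durations are illustrative stand-ins for Bulk Export and the quick APIs, not a benchmark.

```python
# A toy simulation of head-of-line blocking in a monolith: fast and
# slow endpoints share one worker pool, so slow work inflates the
# latency of fast requests. Durations are illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_export():      # stands in for Bulk Export / Upload
    time.sleep(0.2)

def fast_api():         # stands in for the quick APIs
    time.sleep(0.01)

def latencies(pool_size, jobs):
    """Run jobs on a shared pool; return each job's completion time."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        start = time.monotonic()
        futures = [pool.submit(fn) for fn in jobs]
        out = []
        for f in futures:
            f.result()
            out.append(time.monotonic() - start)
    return out

# Shared pool: the fast request queues behind two slow exports.
shared = latencies(2, [slow_export, slow_export, fast_api])
# Isolated "service": the fast request has the pool to itself.
isolated = latencies(2, [fast_api])
```

Splitting the slow endpoints into their own service is, in this model, just giving the fast APIs their own pool, which is why the separated layout responds so much faster.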
Scaling individual components
There comes a phase in a product's life when individual components of the application need to scale. This can happen for many reasons. Drawing on the revised microservice diagram above, let's put our "A Small Monolith" under scrutiny and assume it encounters the following situation:
The registration process has become significantly more resource-intensive due to an influx of users registering on the platform, driven by our product team’s monthly campaigns. Despite the sufficient computing power our “Small Monolith” possesses to manage these processes, we are encountering network congestion and a higher failure rate with our “Other APIs.”
To address this, we implemented autoscaling, which initially resolved the issue. However, we are now faced with an overcapacity situation where the autoscaled resources are underutilized, resulting in a substantial increase in infrastructure costs.
The rationale for partitioning "A Small Monolith" further and isolating the registration flow into a distinct component is straightforward: the registration service can autoscale on its own modest computational footprint, and the latency of the "Other APIs" within "A Small Monolith" improves. After this change, our microservice breakdown may appear as follows:
Decomposing a service into microservices to meet scaling requirements can be effective, yet implementing this approach when your service does not necessitate scalability can be excessive.
It is not optimal to fragment a monolithic architecture into microservices solely to address inconsistent scalability demands. Transient demand only incurs temporary costs, whereas a permanent shift to a microservices architecture incurs a lasting financial impact.
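The cost side of that argument fits in a few lines of arithmetic. The prices and replica counts below are made-up assumptions, purely to illustrate the shape of the trade-off: autoscaling a whole monolith replicates every component, while a split lets only the hot path scale.

```python
# Back-of-the-envelope arithmetic for the scaling scenario above.
# All prices and replica counts are hypothetical assumptions.
MONOLITH_COST = 100      # assumed cost per monolith replica per month
REGISTRATION_COST = 20   # assumed cost per registration-service replica
PEAK_REPLICAS = 5        # replicas needed to absorb the campaign spike

# Autoscale the whole monolith: every replica carries all the other APIs too.
monolith_bill = MONOLITH_COST * PEAK_REPLICAS

# Split out registration: only it scales; the rest stays at one replica.
split_bill = MONOLITH_COST + REGISTRATION_COST * PEAK_REPLICAS
```

Under these assumed numbers the split wins, but note the flip side from the paragraph above: if the spike is monthly and short-lived, the permanent operational cost of a second service may outweigh a few days of overprovisioning.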
Let’s proceed to the subsequent section.
Transactions and Inter-Process Communications (IPC)
Decomposing a monolithic system into microservices introduces not only cost and management complexities but also amplifies the challenges related to Transactions and Inter-Process Communications. This infrastructure transformation represents one of the most arduous and strategically significant choices to make.
Transactions that span multiple services are a liability to the overall architecture. Resolving them entails interdependencies across services, which can lead to hard-to-trace deadlocks and race conditions that undermine the robustness of your services. These complexities weigh on the engineers working on the system as well.
The process of managing transactions and inter-process communications in a microservices environment requires careful consideration and proper implementations to mitigate the risks associated with these challenges.
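One common mitigation, which I'll sketch here rather than prescribe, is the saga pattern: instead of one database transaction spanning services, each step gets a compensating action that undoes it if a later step fails. The inventory and payment services below are hypothetical stand-ins.

```python
# A minimal saga-style sketch for cross-service transactions: run steps
# in order, and on failure run the compensations of completed steps in
# reverse. The services and failure mode are hypothetical stand-ins.
def run_saga(steps):
    """steps: list of (action, compensation) pairs. Returns True on success."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for undo in reversed(done):   # compensate in reverse order
                undo()
            return False
    return True

log = []

def reserve_inventory():
    log.append("reserved")

def release_inventory():
    log.append("released")

def charge_payment():
    raise RuntimeError("payment service unavailable")

def refund_payment():
    log.append("refunded")

ok = run_saga([
    (reserve_inventory, release_inventory),
    (charge_payment, refund_payment),
])
```

Note what this buys and what it costs: there is no global lock to deadlock on, but the system is only eventually consistent, and every step now needs a well-thought-out compensation.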
One drawback of microservices is the challenge of tracing communication flows across downstream services and processes. This can become a headache for DevOps teams as it requires additional effort and resources for logging and monitoring purposes. As a result, it can lead to increased workload and expenses in maintaining effective communication tracking within the microservices architecture.
When examining the conventional interprocess communication (IPC) within a monolithic architecture, the communication primarily occurs between modules that reside in memory. As a result, this setup ensures negligible latency within the overall system. However, the introduction of microservices introduces additional latency into the infrastructure as a trade-off for achieving scalability and manageability at scale.
In the microservices approach, in order to mitigate latency, additional layers such as Redis and Memcached may be implemented for faster data retrieval, and technologies like RabbitMQ or Kafka may be employed for efficient inter-service communication. However, it is important to note that these added layers contribute to a more intricate system, increasing costs and resource requirements for managing the infrastructure.
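The caching layer mentioned above usually takes the shape of a read-through cache. Here is a sketch of the pattern using a plain dict as a stand-in for Redis or Memcached, with a hypothetical slow downstream call; the point is that repeated lookups stop paying the inter-service latency.

```python
# A read-through cache sketch: serve from cache when possible, fall
# back to the downstream service on a miss. A plain dict stands in for
# Redis/Memcached, and fetch_user is a hypothetical slow service call.
calls = {"downstream": 0}
cache = {}

def fetch_user(user_id):
    """Hypothetical slow inter-service call."""
    calls["downstream"] += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    if user_id not in cache:
        cache[user_id] = fetch_user(user_id)   # miss: pay the network cost
    return cache[user_id]                      # hit: served from memory

first = get_user(7)    # hits the downstream service
second = get_user(7)   # served from cache
```

A real deployment would also need expiry and invalidation, which is precisely the extra operational surface the paragraph above warns about.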
While the microservices architecture offers advantages in terms of scalability and flexibility, it also entails the trade-off of increased latency and complexity. Organizations must carefully consider the specific needs of their system and weigh the benefits against the potential drawbacks when making architectural decisions.
Certain scenarios may arise where latency or infrastructure complexity is simply unaffordable, as is the case with mission-critical systems like airline traffic control. Therefore, it is crucial to assess whether your specific use case necessitates such considerations.
Ultimately, it is important to make choices today that will simplify your future rather than create a cumbersome and challenging environment. Do read this article, where the Amazon Prime Tech team talks about distributed system overheads.
In a previous discussion, I highlighted the influence of lines of code on microservice costs. However, let us now explore pricing considerations beyond the realm of code. To illustrate this, let’s examine the Prime Video Stream Monitoring service as an example.
In order to ensure a seamless content viewing experience for their customers, Prime Video established a monitoring tool to track every stream accessed by users. This tool enables the automatic detection of perceptual quality issues, such as block corruption or audio/video synchronization problems, and initiates a corrective process. Initially, this tool was implemented as a distributed microservice and serverless system using AWS Step Functions. According to their assessment, the most cost-intensive operations were:
- The orchestration workflow
- Data transfer between distributed components, due to distributed-systems overhead.
It is important to note that this approach may not be universally applicable in all cases. The example serves as a means to provide an alternative viewpoint, prompting thoughtful consideration before making a decision regarding the adoption of microservices.
Summing Up and Key Insights
In the spirit of objectivity: over the past decade, the industry has worked diligently to temper the enthusiasm surrounding microservices, underscoring that their advantages apply in particular scenarios rather than universally.
It is essential to recognize that the IT landscape operates in cycles, where architectural trends can shift rapidly. While microservices have dominated the past decade, the notion of monolithic architectures making a comeback should not be disregarded. It is crucial to assess each situation independently, taking into account the unique needs and circumstances of the organization and application.
Ultimately, the decision to choose between a monolithic architecture or a distributed system should be based on a thorough evaluation of these factors, aiming to achieve scalability, agility, and effective management of the application. The key lies in finding the optimal balance between the advantages and challenges associated with each approach.
As the IT landscape continues to evolve, it is imperative to stay informed, adapt to emerging trends, and make informed decisions that align with the goals and requirements of the organization. By staying open to change and embracing a flexible mindset, businesses can navigate the complexities of architecture choices and embark on a path that fosters growth, innovation, and success in the dynamic world of technology.
Follow my blog for more awesome content like this.