Microservices in Practice: Functional and Non-functional Requirements of ludotheca-share-mesh

GitHub Repo: https://github.com/marco13-moo/ludotheca-share-mesh
As the system follows the microservices architectural style, it has to conform to the following requirements, some of which are also regarded as architectural characteristics of microservices:
Scalability
X-Axis Scalability of the resource-sharing system using Netflix Eureka Service Discovery
X-Axis scaling refers to scaling via instance replication. Through the use of cloud infrastructure, the overall system is able to demonstrate horizontal scalability.
When using Netflix Eureka on Kubernetes, the system is constrained in terms of scalability, as all services must scale together if deployment replication is used.
Within a teaching environment, one is able to demonstrate x-axis scalability using Eureka on a local machine.
This comes at the cost of additional manual configuration overhead, as a unique port number must be specified for each replicated instance.
Furthermore, when leveraging Eureka, the system adheres to x-axis scalability when the API gateway is applied with load balancing.
Within a localhost scope: the API gateway, using the Spring Cloud load-balancing algorithm, is able to route requests to N identical instances of a specified service.
Within a Kubernetes deployment scope: the Google Cloud Load Balancer is able to route requests to N identical resource-sharing application instances [3:10].
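As a rough illustration of the localhost case, the sketch below shows a load-balanced Spring Cloud Gateway route defined in Java; the service ID and path are assumptions rather than the exact values used in ludotheca-share-mesh.

```java
// Minimal sketch of a load-balanced gateway route (assumed service ID and path).
// Requires spring-cloud-starter-gateway and spring-cloud-starter-loadbalancer,
// plus a discovery client such as Eureka, on the classpath.
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator bookRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                // "lb://" tells Spring Cloud LoadBalancer to resolve the service ID
                // against the discovery registry and spread requests across all
                // N registered instances of that service.
                .route("book-service", r -> r.path("/books/**")
                        .uri("lb://book-service"))
                .build();
    }
}
```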
X-Axis Scalability of the resource-sharing system using Kubernetes Service Discovery
Through the use of cloud infrastructure, each service of the overall system is able to scale independently.
When using Kubernetes service discovery, each service can be scaled independently if Kubernetes-based scalability via deployment replication is used. This is due to each service having its own deployment on Kubernetes. The system adheres to x-axis scalability when the API gateway is applied with load balancing.
Within a Kubernetes deployment scope: the Spring Cloud API gateway, coupled with the Google Cloud Load Balancer, is able to route requests to N identical instances of a specified service.
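To sketch what the discovery side of this looks like in code, the snippet below lists the instances a Kubernetes-backed DiscoveryClient would return for a service; it assumes a spring-cloud-starter-kubernetes discovery dependency and an assumed service ID of "book-service".

```java
// Minimal sketch: listing the instances that Kubernetes-based service discovery
// exposes for a service, assuming the pods are backed by a Kubernetes Service.
import java.util.List;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class InstanceLister {

    private final DiscoveryClient discoveryClient;

    public InstanceLister(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public void printInstances() {
        // Each replica created via "kubectl scale deployment" shows up here,
        // so the gateway's load balancer can route across all of them.
        List<ServiceInstance> instances = discoveryClient.getInstances("book-service");
        instances.forEach(i -> System.out.println(i.getHost() + ":" + i.getPort()));
    }
}
```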
Maintainability
The system is loosely-coupled and relies on RESTful API-based communication.
Thus, if business logic is changed but the API contract and return types remain the same, no changes to the other services are needed.
The database per service pattern allows a service's datastore to be altered without affecting the other services.
As some business logic does require inter-service communication, such as the lending service's checkout call, any API changes to those specific inter-service calls will require alterations within the business logic.
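A minimal sketch of such an inter-service call is shown below; the endpoint path, response type and class names are assumptions for illustration, not the exact contract used in the repository. In the Kubernetes variant the "book-service" hostname resolves via the Kubernetes Service DNS, whereas the Eureka variant would need a @LoadBalanced RestTemplate bean instead.

```java
// Sketch of a synchronous inter-service call such as the lending service's
// checkout flow (hypothetical endpoint and response type).
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class CheckoutClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public boolean isItemAvailable(long itemId) {
        // If the book service ever changes this path or its return type,
        // this call site (and only this call site) must be updated.
        ResponseEntity<Boolean> response = restTemplate.getForEntity(
                "http://book-service/books/" + itemId + "/available", Boolean.class);
        return Boolean.TRUE.equals(response.getBody());
    }
}
```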
Although only minimal changes were needed when using Kubernetes service discovery as opposed to Eureka, such changes do entail a new variant of the system. These changes were Spring Boot specific, such as reconfiguring the gateway's Spring YAML configuration file and removing the @EnableDiscoveryClient annotation.
Thus, a migration away from the Spring microservice chassis to a service mesh may prove beneficial in the future.
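For context, the Eureka variant's entry point looks roughly like the sketch below (the class name is assumed); the Kubernetes variant simply drops the @EnableDiscoveryClient annotation and the Eureka client dependency, with no change to business logic.

```java
// Sketch of the Eureka-variant application entry point.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient // removed in the Kubernetes service discovery variant
public class BookServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(BookServiceApplication.class, args);
    }
}
```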
Testability - Inclusion of Integration and Acceptance Tests
Testing dependencies found within the software engineering module, such as JUnit, Jackson and MockMvc, were used.
MockMvc provided illustration points of inter-service dependencies via integration tests; such dependencies become easily visible when tests requiring inter-service communication fail, as with the checkOutLoan test.
As the services communicate via synchronous HTTP-based REST requests/responses, acceptance tests are demonstrated through asserting the expected HTTP status codes.
JUnit tests provided isolated testing within the scope of each microservice's service subdomain.
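A minimal sketch of such a status-code assertion with MockMvc is shown below; the endpoint path and request parameters are assumptions, and the real checkOutLoan test in the repository may differ.

```java
// Minimal sketch of a MockMvc test asserting the expected HTTP status code.
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.web.servlet.MockMvc;

@SpringBootTest
@AutoConfigureMockMvc
class LoanControllerIT {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void checkOutLoanReturnsOk() throws Exception {
        // Fails with a 5xx status when the downstream book service is unreachable,
        // which is exactly how the inter-service dependency becomes visible.
        mockMvc.perform(post("/loans/checkout")
                        .param("memberId", "1")
                        .param("itemId", "42"))
                .andExpect(status().isOk());
    }
}
```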
Future improvements include additional testing patterns such as consumer-driven contract tests using Spring Cloud Contract [3:302].
- Contract tests mitigate the need for transitive dependencies of services during test execution, testing service endpoints in isolation instead.
Although not part of the objectives, the inclusion of Continuous Integration pipelines such as Jenkins is highly beneficial in aiding microservices development and deployment. The automated staging of tests on each new code commit provides rapid feedback to a developer [3:306].
Acceptance tests can also be improved through abstraction. By using a Domain-Specific Language such as Gherkin, one can connect high-level testing scenarios with low-level Java-based testing implementations [3:337].
Deployability
Deployability of the Resource Sharing System using Netflix Eureka Service Discovery on Google Kubernetes Engine
- Although the container per service pattern was successfully implemented, services have to be deployed together to achieve inter-service communication. This voids the microservices characteristic of being independently deployable.
Deployability of the Resource Sharing System Using Netflix Eureka Service Discovery on a Local Machine
The container per service pattern was not applied when deploying to a local machine, as a local container orchestration mechanism such as Minikube is fairly resource intensive. Each service ran as its own servlet, demonstrating the characteristic that microservices run in their own processes [4] [19].
Through push-based externalized configuration, altering the port number within a service's Spring configuration file demonstrates the independent deployability of each service, but this is countered by the need for some form of centralized management of the services' configuration properties.
Deployability of the Resource Sharing System Using Kubernetes Service Discovery on Google Kubernetes Engine
The tooling typically found within microservices architectures is cloud-based and often leverages mechanisms native to the deployment platform. This is clearly illustrated by Fowler's definition of the microservices characteristic that services are "independently deployable by fully automated machinery".
This is achieved through deployment with cloud-native tooling such as Kubernetes.
The use of Kubernetes service discovery allows each service to be independently deployable and scalable using the container per service pattern.
Per-service scalability required only a bare minimum of centralized management, as scaling was achieved simply via Kubernetes deployment replication using the "kubectl scale deployment" command.
Additional Evaluation within the Scope of Deployability
The repository per service pattern allows autonomous student teams to push code independently as well as instantiate their own CI/CD pipelines with GitLab.
Adaptability
The resource-sharing system has achieved an ideal level of granularity with which to adapt to new system demands. For example, the lending service is designed to accept generic "item" and "itemID" attributes.
This allows additional item types, such as DVDs or sports equipment, to be introduced in a future system.
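A minimal sketch of what such a generic loan record might look like is given below; the field and class names follow the "item"/"itemID" idea above but are assumptions rather than the actual entity in the repository.

```java
// Sketch of a generic loan record in the lending service (illustrative names).
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Loan {

    @Id
    @GeneratedValue
    private Long id;

    // Generic reference to whatever is being lent (book, DVD, sports equipment, ...),
    // so new item types require no change to the lending service itself.
    private Long itemId;
    private String item;

    private Long memberId;

    // getters/setters omitted for brevity
}
```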
The member service does not make any inter-process calls to the other two services, allowing it to be used within the context of a system that goes beyond the scope of a resource-sharing application.
Reliability
Communication, client-facing as well as inter-process, is defined as a single API set.
This avoids the unnecessary complexity that would arise if different API sets were used for inter-service and client-facing communication.
With synchronous requests whose main feedback relies on status codes, the user is able to quickly identify whether a service is down via a 500 status code, as opposed to relying on a circuit breaker, which would add complexity.
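As a rough sketch of this trade-off, the snippet below simply propagates a downstream 5xx error instead of providing a circuit-breaker fallback; the endpoint and class names are hypothetical.

```java
// Sketch: RestTemplate raises HttpServerErrorException for 5xx responses, which is
// rethrown so the failure surfaces to the caller as a 500 rather than a fallback.
import org.springframework.stereotype.Service;
import org.springframework.web.client.HttpServerErrorException;
import org.springframework.web.client.RestTemplate;

@Service
public class BookLookupClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public String findTitle(long bookId) {
        try {
            // Hypothetical endpoint used only for illustration.
            return restTemplate.getForObject(
                    "http://book-service/books/" + bookId + "/title", String.class);
        } catch (HttpServerErrorException e) {
            // No static fallback: log and surface the failure directly.
            System.err.println("book-service unavailable: " + e.getStatusCode());
            throw e;
        }
    }
}
```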
Synchronous communication using REST across the entire service scope provides reliable system performance as a unified communication and protocol set is used.
The lack of asynchronous communication further improves reliability, as an additional component such as a message broker or event bus may create a bottleneck or a single point of failure in the system.
Other non-functional requirements include:
Conservative Technology Stack for Reducing Cost of Evolution/Evolutionary Design Focus
The Spring framework family was chosen to guide development of the microservices architecture.
This proved useful as the Spring Cloud microservices chassis could be leveraged to support the distributed architectural style.
Although the language used within the teaching environment is predominantly Java, the Spring microservice chassis's lack of extensive language support did prove insightful, acting as a catalyst for pivoting a future evolution of the system towards a service mesh that can support a far more flexible range of languages.
The use of Kubernetes as a deployment mechanism meant that the system remained cloud-provider independent.
External, cloud-specific tooling was used in order to minimize the number of dependencies found within the architecture and so reduce evolution costs. Tools included Jib as an external containerization tool, Google Container Registry and Google Cloud metrics.
These tools have open-source, platform-independent variants and thus will not compromise the system if another form of infrastructure hosting is chosen.
One significant trade-off was avoiding the circuit breaker pattern with the now-deprecated Hystrix library. As this system is used within a learning environment, the static fallback Hystrix provides when a service call times out is easily replaced with a 500 HTTP status code. A conservative RESTful API set for client-side and inter-process communication further aids this requirement, as explained under Reliability.
API-based communication between services allows the lending microservice to cater for a myriad of lendable resources.
Microservices-based Decomposition of the Chosen System
System decomposition into microservices should adhere to Domain-Driven Design Methodologies.
The system was successfully decomposed using Domain-Driven Design whilst also adhering to Fowler's [4] definition of a microservice being built around business capabilities.
The system was successfully decomposed into loosely-coupled independent services, using RESTful API only communication.
Core patterns such as per-service databases, service discovery and API gateways for request load-balancing were successfully implemented within the architecture.
Automated Deployment of Microservices onto Infrastructure
The system should incorporate a deployment pattern that allows for CI/CD methodologies to be leveraged.
Deployment was executed via scripts that package, containerize and deploy the microservices on Kubernetes.
This was further illustrated by using the blue-green deployment pattern, pushing new container images of updated microservices into production.
Independent development and evolution
Through API-only communication, decomposition by sub-domains organized around business capabilities, and a repository per service pattern, an initial development team may be subdivided into cross-functional teams per service.
Furthermore, with a singular API contract specification and an API gateway pattern, an additional team in charge of API design and external communication may also be instantiated.
With each service being independently scalable and deployable within the context of a Kubernetes-based service discovery mechanism, individual schemas, datastores and CI/CD pipelines may be implemented, speeding up development, increasing system longevity and preventing cascading failures.
The system adheres to having each service incorporate its own domain model.
Each service adheres to the entity, service and repository pattern [3:151], allowing for continuous refactoring, a clean architecture and a high level of code quality. The layered style allows the code, as well as test-driven development, to be modularised, thus achieving greater code longevity for ongoing maintenance and evolution.
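A minimal sketch of that layering for an assumed Member aggregate is shown below; the names are illustrative rather than taken from the repository.

```java
// Sketch of the entity/service/repository layering (illustrative names).
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

@Entity
class Member {
    @Id
    @GeneratedValue
    Long id;
    String name;
}

interface MemberRepository extends JpaRepository<Member, Long> {
}

@Service
class MemberService {
    private final MemberRepository repository;

    MemberService(MemberRepository repository) {
        this.repository = repository;
    }

    // Business logic lives here, isolated from both the web layer and persistence,
    // so each layer can be refactored and tested independently.
    List<Member> allMembers() {
        return repository.findAll();
    }
}
```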
Operational independence
Horizontal scalability per service is achieved via Kubernetes based service discovery using deployment replication.
Each service may also use its own relational database infrastructure via configuration edits in its properties file, adding infrastructure flexibility between service and datastore.
Functional Primary Requirements
System Family Identification to Guide the Development of the Microservices Architecture
Through Domain-Driven Design, the family of systems was scoped to resource-sharing systems and further focused on a book-sharing system.
Through decomposition by sub-domain, the bounded contexts of the "member", "book" and "lending" business capabilities were designated. This provided an acceptable level of granularity for the microservices architecture, whereby service boundaries are defined by purpose [2:248].
Adherence to Languages, Patterns, Frameworks and Communication Styles Found within the Software Architecture and Engineering Modules
Level 2 RESTful HTTP verbs were used for inter-process and client-side communication, as explored in the software engineering module.
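A minimal sketch of such a Level 2 controller, using HTTP verbs against a resource URI, is shown below; the paths and types are assumed for illustration.

```java
// Sketch of a Level 2 REST controller: HTTP verbs acting on a resource URI.
import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/books")
class BookController {

    @GetMapping
    public List<String> listBooks() {                                  // GET    /books
        return List.of("The Hobbit");
    }

    @PostMapping
    public ResponseEntity<Void> addBook(@RequestBody String title) {   // POST   /books
        return ResponseEntity.status(201).build();
    }

    @DeleteMapping("/{id}")
    public ResponseEntity<Void> removeBook(@PathVariable long id) {    // DELETE /books/{id}
        return ResponseEntity.noContent().build();
    }
}
```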
Familiar frameworks, libraries and languages were used, namely Spring, JUnit/MockMvc and Java.
Additional libraries and tooling, such as the Spring Cloud family, Kubernetes and the GCP toolsets, are specific to the microservices architecture. Fortunately, such tooling does not require a steep learning curve and can be incorporated into both teaching modules with ease.
The system can run both on a local machine and in the cloud.
It is retrievable from GitLab, allowing students to leverage GitLab's toolset to push changes to the services.
Future work may include GitLab CI/CD and GKE cluster integration for each service when practicing software engineering.
Functional Secondary Requirements
Continuous Deployment Practices Incorporation
Continuous deployment was incorporated via Kubernetes.
Changes to services include scaling service deployments, rolling out a new service container image into production, as well as rolling back a container image change in production.
As CI/CD was not part of the objectives, primitive forms of automated deployment were achieved through shell scripts on Google Kubernetes Engine. The shell scripts build the Spring-based services with Maven, wrap the packages into Java Docker containers with Jib and push them to the Google Container Registry.
Additional scripts automate the creation of the Kubernetes services and deployments for the three microservices and the API gateway.
Further script variants deploy either a Eureka-based or a Kubernetes-based service discovery system.
Database connections for the services were automated with a push-based externalized configuration pattern, with the database configuration specified in each service's Spring YAML configuration file.
Externalized IP mapping to a cloud-based SQL server via the "mysqlservice" Kubernetes service exercised automated datastore deployment in production.
Monitoring Solutions Exploration
Kubernetes and Docker container monitoring solutions were leveraged within scope of the system running in production on Google Cloud Platform.
Although this does entail cloud-provider-specific restrictions, such a trade-off was made to mitigate the additional complexity of incorporating monitoring dependencies into the services.
This reduces evolution costs if a future evolution to a service mesh is implemented. It further allows future services written in different languages to easily leverage GCP monitoring solutions with no changes to legacy services still running in production.