Evolutionary Approaches to Software Architecture


One thing I've come to realize in my short career as a software engineer (or maybe just as an engineer) is that there are no silver bullets. If there were one, everyone would be using it. Engineering is about managing tradeoffs. Every solution to a problem brings with it other problems to solve. Thus, the question you most often have to ask yourself as an engineer is: how many problems are you willing to take on in order to solve the problem at hand?

If you've read my previous articles, you know I'm a fan of the monolith-before-microservices approach. It helps you identify your domains and boundaries without the technical complexities of managing microservices. You get to focus on the critical challenge of introducing modularity into your system before taking on the new challenges of deployment, data consistency, networking, logging, observability, and fault-tolerance strategies.

I like to think of this approach as an evolutionary (maybe even an agile?) one and I'm starting to see benefits when it is applied to other areas as well:

  • Frontend architecture

  • Dynamically -> Statically Typed Languages

  • Interpreted -> Compiled Languages

  • SQL -> NoSQL

  • PaaS -> FaaS

Frontend Architecture

This will get its own article in the future. For now, suffice it to say that when you start with a monolith, you can use an MVC framework like Ruby on Rails, Phoenix, or Django and render your frontend on the backend. This way you don't have to deal with an additional build stage for the frontend and can focus on shipping your application as soon as possible. You can use libraries like HTMX, Hotwire, and LiveView to achieve interactivity without building a SPA. React and the like can be reserved for the parts of your frontend that require more interactivity than the non-SPA libraries allow for. Plus, once you understand your system and see which pages/components are the most popular or change most often, you can consider separating them out from your monolith's frontend.
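
To make this concrete, here is a minimal sketch of server-rendered interactivity in Django, assuming a hypothetical Product model and hypothetical template names: when HTMX requests an update, the view returns just an HTML fragment for HTMX to swap into the page, with no separate frontend build stage.

```python
# A minimal sketch of server-rendered interactivity in Django.
# The Product model and template names are hypothetical.
from django.shortcuts import render
from myapp.models import Product  # hypothetical app/model

def search(request):
    # Render everything on the backend: query the DB, return HTML.
    query = request.GET.get("q", "")
    results = Product.objects.filter(name__icontains=query) if query else []
    # HTMX sends an "HX-Request" header; for those requests we return only
    # the fragment it will swap into the page, not a full document.
    is_htmx = request.headers.get("HX-Request")
    template = "partials/results.html" if is_htmx else "search.html"
    return render(request, template, {"results": results, "query": query})
```

The same view serves both the full page and the fragment, which is what lets you defer a SPA until a page genuinely needs one.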

Dynamically -> Statically Typed Languages

In the beginning, you could use a dynamic language to quickly build a prototype or a proof of concept. This way you get to demo your idea quickly without worrying about types. Once you start building for a customer and writing tests, you may want to consider adding static typing to eliminate tests that merely check for types. Consider also adding static typing to maintain code quality as your codebase grows in size and you bring on more contributors.

This approach roughly mirrors the trend in software engineering over the past 20-25 years. People moved from languages like Java to languages like JavaScript, Python, and Ruby because the dynamic languages sped up development. They sacrificed type-checking for speed. However, once such codebases grew large and brought on more contributors, there was a need to introduce type definitions to maintain correctness and/or code quality. Thus, we see TypeScript, MyPy/type hints in Python, and Sorbet/type signatures in Ruby.
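
As a small illustration in Python (the function is hypothetical, checked with a tool like MyPy): the annotations encode the contract, so the type checker catches the class of bugs you would otherwise write tests for.

```python
# Prototype phase: fast and untyped.
def apply_discount(price, discount):
    return price - price * discount

# Hardening phase: the same function with type hints. A checker like MyPy
# now replaces "does this blow up on a string?"-style tests.
def apply_discount_typed(price: float, discount: float) -> float:
    return price - price * discount

print(apply_discount_typed(19.99, 0.10))
# apply_discount_typed("19.99", 0.10)
#   ^ MyPy flags this: Argument 1 has incompatible type "str"; expected "float"
```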

Interpreted -> Compiled Languages

Interpreted languages speed up development because compiling an application can take a long time depending on its size. However, compiled languages tend to be more performant, since the compiler gets an ahead-of-time optimization pass, helped along by static type information. In other words, while interpreted languages enable rapid development, they sacrifice performance.

Start writing your application in an interpreted language. Once you understand the inputs and outputs of certain parts of the application, consider converting those parts to a compiled language. Take, for example, a single function. In the beginning, writing it in an interpreted language gives you the ability to iterate quickly. Then, once you understand its inputs and outputs and realize the function does not change very often, you can rewrite it in a compiled language. This not only gives you a performance boost but also helps solidify the function, so that future modifications require more deliberation.
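
Here's a rough sketch of what that migration seam can look like from the Python side. The compiled fastchecksum.so library is hypothetical (it could be built from Rust or C); until it exists, the interpreted implementation keeps serving.

```python
import ctypes

def checksum(data: bytes) -> int:
    """Stable, well-understood hot path -- a candidate for compilation."""
    total = 0
    for byte in data:
        total = (total + byte) % 65521  # simple Adler-style running sum
    return total

try:
    # Hypothetical compiled build; ctypes loads it if it has been produced.
    _lib = ctypes.CDLL("./fastchecksum.so")
    _lib.checksum.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    _lib.checksum.restype = ctypes.c_uint32

    def checksum(data: bytes) -> int:  # shadow the interpreted version
        return _lib.checksum(data, len(data))
except OSError:
    pass  # no compiled build yet; the interpreted version stays in place

print(checksum(b"hello, world"))
```

Keeping the interpreted version as the fallback means the rewrite can happen function by function, exactly in the evolutionary spirit described above.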

Theo-t3's opinion on migrating from TypeScript -> Rust is a good example of this approach.

SQL -> NoSQL

NoSQL is an umbrella term for various non-relational databases, such as key-value stores and graph databases. To me, NoSQL databases look like specializations for particular data use cases: key-value stores are great for storage and retrieval by key, while graph databases are great for modelling relationships.

This is why it might be a good idea to start with a SQL database like PostgreSQL. It will provide the most flexibility for your data-handling needs, even if it doesn't match the performance of any one of the specializations NoSQL databases offer. For example, PostgreSQL's full-text search might get the job done but may not be as good as Elasticsearch, while its key-based storage and retrieval may not be as fast as something like Amazon DynamoDB.

NoSQL databases have their own disadvantages, however. Key-value databases, for example, struggle with complex queries. This is why Amazon DynamoDB shines when you already know your data access patterns and can build indexes for them; without an index, your only option is to scan the whole table, which gets slow as the data grows.

SQL databases offer the most flexibility to query data (it's in the name), which means you can iterate quickly and learn about your application's use cases. Once you understand the data access patterns for a particular part of the application, you could reach for a key-value database like Amazon DynamoDB to speed that part up.
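
As a small sketch of that flexibility using Python's built-in sqlite3 (the orders table is hypothetical): while you're still learning the access patterns you can ask ad-hoc questions with no index planning up front, and once a single lookup dominates, that is exactly the shape a key-value store serves best.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "ada", 30.0), (2, "ada", 12.5), (3, "grace", 99.0)],
)

# Ad-hoc aggregation while you're still exploring -- no index needed:
for row in conn.execute(
        "SELECT customer, SUM(total) FROM orders GROUP BY customer"):
    print(row)

# Once you learn the hot pattern is "fetch order by id", that single
# primary-key lookup is what a store like DynamoDB is built to serve.
print(conn.execute("SELECT * FROM orders WHERE id = ?", (2,)).fetchone())
```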

PaaS -> FaaS

FaaS (Functions-as-a-Service) is a model where you deploy stateless functions that respond to events. You can start writing your application this way, but it can be difficult to reason about. The result is a highly distributed system, even more so than microservices. Although a microservice is also supposed to be stateless (and there are challenges to achieving that), each service is a group of functions that can share state, which makes it easier to reason about.

Thus, the safest approach for me is to iteratively go from monolith -> microservices -> functions.

Thus, I would choose to deploy the monolith, and any of the microservices that spin off from it, to a Platform-as-a-Service (PaaS) first, unless I have a specific need to deploy with Docker (Containers-as-a-Service) and Kubernetes. Then I'd steadily break the microservices into individual functions that can be deployed separately (FaaS).
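
For a sense of the end state, here is a minimal sketch of one such stateless function, using the AWS Lambda-style handler(event, context) signature (the event shape below is hypothetical). Every invocation stands alone; any state has to live outside the function, which is exactly what makes a whole system of these harder to reason about than a monolith.

```python
import json

def handler(event, context):
    # Respond to a single event; no in-process state survives between calls.
    body = json.loads(event.get("body", "{}"))
    total = sum(item["price"] for item in body.get("items", []))
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# Local smoke test with a fake event:
if __name__ == "__main__":
    fake_event = {"body": json.dumps({"items": [{"price": 5}, {"price": 7}]})}
    print(handler(fake_event, None))
```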

Conclusion

Evolutionary approaches make the most sense to me. They allow me to reason about the core of the application before thinking about technical optimizations. I'm not saying technical optimizations aren't necessary, but to me it is better to focus first on bringing value to the customer. The customer doesn't really care whether we use monoliths or microservices; what matters is the product. Focus on the value proposition. Then, once that is solidified, you can consider optimizing it with different architectures and technologies, which will no doubt help the customer in the end as well.

Some would say this is Agile development.

To borrow from frontend parlance, you could also say this is progressive enhancement :)
