In my previous blog post, I briefly discussed the vendor lock-in PTSD I developed at a former employer. In this post, I want to dive deep into Meetings.bio’s tech stack and walk you through the thought process that led us to it.
Our Tech Stack
Let’s start by listing our tech stack for the impatient:
- Frontend: Next.js written in TypeScript
- Backend: Python, mainly based around Django, Django Rest Framework, and Celery
- Database: PostgreSQL and Redis for caching/Celery queue
- Infrastructure: containers running in Kubernetes cluster based on the GitOps concept
The stack is nothing magical or unusual, and we’ve (intentionally) not invented anything new. So far, it works exceptionally well for us, and in the following paragraphs, I will expand on (some of) my reasoning behind choosing each component.
No Vendor Lock-In
One of the most important decision factors was the ability to change vendors if needed. We’ve therefore mostly opted for open-source options, or options where the same interface is supported by many vendors, so we could make a change without a code rewrite (e.g., SQL). That doesn’t mean we host everything ourselves and use no external vendors. We need to focus on the core of our business and value creation, and spending time self-hosting and maintaining git repositories would not be the best use of our resources. We therefore “rent” quite a lot of our core infrastructure: a managed Kubernetes cluster, git repositories, S3 storage, error tracking, etc. We just made sure that we would be able to switch providers, or even self-host, if needed.
Separation of UI (Frontend) and Logic (Backend)
In my first full-time job as a developer, I saw firsthand how hard it is to decouple UI and logic if an API isn’t present from the design phase onward. While the rapid pace of development is present across tech, it is especially brutal in the way UI is done. I therefore believe it is crucial to be able to switch the UI implementation somewhat painlessly (meaning no rewrite of the entire backend) if and when needed. Our application thus uses a REST API for communication between frontend and backend.
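As a minimal sketch of this boundary (the resource and field names here are made up for illustration, not our actual API), the backend exposes plain JSON, so any frontend that speaks the same contract can be swapped in:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical resource; the real API shape differs.
@dataclass
class Meeting:
    id: int
    title: str
    host_email: str

def meeting_to_json(meeting: Meeting) -> str:
    """Serialize for a REST response. The frontend depends only on this
    JSON shape, not on backend internals, so the UI can be replaced
    without touching the backend."""
    return json.dumps(asdict(meeting))

payload = meeting_to_json(Meeting(1, "Kickoff", "host@example.com"))
# Any client (Next.js, mobile app, CLI) parses the same contract:
data = json.loads(payload)
```

The key design choice is that the JSON payload, not any framework object, is the interface between the two halves.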
Types to the Rescue
Speed of Development
I’m well aware of the irony of going on about types and having Python as the main language. The primary reason for this choice was speed of development, one of our key requirements. As a startup, we’re still discovering what our ideal software looks like, and the ability to rapidly develop new functionality is vital to us.
In my previous jobs, I’ve developed with most of the prominent ORM/web application frameworks, including Spring Framework (Java), .NET Core (C#), and Express (JS). For me, Django is the undisputed king of productivity and scalability. Yes, it has a learning curve, and the initial setup takes quite some time. In my experience, though, these onboarding costs are quickly outweighed by the productivity I’m able to achieve as a developer.
I’m well aware that Python is slow, resource-intensive, and doesn’t have a strong type system. For our use, it’s fast and efficient enough. I also add type annotations to almost all my code, which gives me most of the static type-system goodness in an otherwise dynamically typed language.
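To illustrate what this looks like in practice (the `Meeting` type here is a toy example, not our actual model), annotated code documents its own contracts, and a checker such as mypy catches type mismatches before runtime:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Meeting:
    title: str
    duration_min: int
    notes: Optional[str] = None

def total_duration(meetings: list[Meeting]) -> int:
    """Sum meeting durations. A type checker flags any caller that
    passes something other than a list of Meeting objects."""
    return sum(m.duration_min for m in meetings)

meetings = [Meeting("Intro call", 30), Meeting("Demo", 45, notes="bring slides")]
total = total_duration(meetings)  # 75
# A call like total_duration(["oops"]) would only fail at runtime,
# but mypy rejects it statically: "incompatible type".
```

The annotations cost little to write and act as machine-checked documentation, which is most of what a static type system buys you day to day.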
GitOps: Infrastructure Transparency
One of the most significant renaissances in tech in recent years occurred in infrastructure. With the rise of containers and orchestration frameworks, infrastructure deployment and maintenance changed, and for the better. What was previously done by sysadmins SSHing into a (virtual) server and running commands can now be written in code. With the old approach, it was challenging to be 100% certain about the state of production infrastructure, and deploying new infrastructure was slow and, in my experience, often poorly documented.
For us, GitOps solves all of the problems described above. All of our production deployment state is stored and described in manifests in a git repository. We use ArgoCD to synchronize the repository with the Kubernetes state and, with a cluster bootstrap capability, it works like magic. We’re currently running almost 30 different production services, and the amount of time I don’t spend managing our infrastructure still amazes me. After the initial setup, most things run with almost no need for manual intervention.
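For a flavor of what lives in that repository, here is a sketch of an ArgoCD `Application` manifest (the repo URL, paths, and names are invented for illustration; our actual manifests differ):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-backend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/our-org/deploy-manifests.git  # hypothetical repo
    targetRevision: main
    path: services/api-backend
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true     # delete cluster resources removed from git
      selfHeal: true  # revert manual drift back to the git state
```

With `automated` sync enabled, ArgoCD continuously reconciles the cluster against the repository, which is what makes git the single source of truth for production state.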
TL;DR: Don’t Reinvent the Wheel
As I already mentioned, our stack is nothing groundbreaking. It’s perhaps so common that it can justifiably be called boring. It works well for us, and as a CTO, I actually like the boring part: it means we can focus on the areas where we create the most added value and don’t have to spend too much time and resources dealing with technical issues.