My Website Infrastructure Stack



A Little Background

In this post I'll walk through my technology stack and the reasons I chose each part of it. I won't include any how-tos here, to keep the article short enough to hold your attention, but I will be writing independent articles on each part of the stack.
My website has two goals. First, it should run as a distributed cluster, so I can lose any one part of it and the site keeps working. Second, I should be able to recreate it from a few simple files, in a few minutes, on any provider.

The Provider
The main reason I chose AWS was the credits I was offered as a student. If you think about it, that's a pretty strong marketing move: get students hooked on the ecosystem so that when they enter the workforce they convince their employers to switch or migrate to AWS.

But the real question should be why I stuck with AWS. Although I've primarily worked with AWS, I've also used other cloud providers like Azure and GCP, and AWS wins out in several categories. First, the breadth of services AWS provides: if you can think of a service you need, chances are they have it, and even if they don't, they're adding new services every week. Second, it's one of the oldest and biggest cloud providers, which gives it key advantages; the one I enjoy most is compatibility with both frameworks and open source tools.


The Underlying Infrastructure
My goal is to make the infrastructure not reliant on any one service provider. My current underlying infrastructure is pretty simple: three EC2 instances spun up with kOps, running the Flatcar Linux distribution, which in turn runs the tech stack in Docker containers. There are multiple reasons someone might choose these technologies, but below are the specific reasons I chose them.

Flatcar
There are two main reasons I chose to go with Flatcar, the first being compatibility. Flatcar was built to be a drop-in replacement for CoreOS Container Linux. After Red Hat acquired CoreOS, it was announced that the distro would reach end of life in May 2020 and be succeeded by Fedora CoreOS. I didn't find the new Fedora CoreOS to be a drop-in replacement, so I went with Flatcar, which promises support and updates for what appears to be an indefinite amount of time.

The second reason I chose Flatcar was its automatic updating. I've had the experience of manually updating 20 servers several times a week, and it takes a lot of time that could be better spent elsewhere. With Flatcar being as minimal as it is, updating automatically is safe and even preferred, so I no longer have to worry about rushing out a patch when a critical update to the kernel or another crucial component is released. And for those who need more control, you can configure when a node is allowed to go offline and reboot after critical updates.
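
To give a flavor of that control, here's a minimal sketch of a Container Linux Config snippet (the YAML format Flatcar nodes can be provisioned from); the strategy and window values are just examples rather than my exact settings, so check the Flatcar docs before copying them:

    # Sketch: auto-update, but coordinate reboots and keep them in a weekend window.
    locksmith:
      reboot_strategy: "etcd-lock"   # only one node reboots at a time
      window_start: "Sat 03:00"      # hypothetical maintenance window
      window_length: "2h"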

kOps
If you're not aware, kOps is built to make deploying Kubernetes clusters on independent instances easy, and that's really the biggest reason I chose it: ease of K8s installation. One command in the terminal and you have a fully operating Kubernetes cluster on a supported provider in a minute or two, and to top it off, it writes the connection details to your .kube/config file. kOps also lets you run Kubernetes on your own instances instead of a provider's managed service, which definitely helps avoid vendor lock-in down the road.
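
kOps can also be driven declaratively with YAML specs of its own. As a rough sketch (not my actual config), an instance group for three Flatcar worker nodes might look roughly like this, where the cluster name, machine type, AMI, and subnet are placeholders you'd swap for your own:

    # Sketch of a kOps InstanceGroup: three nodes running a Flatcar image.
    apiVersion: kops.k8s.io/v1alpha2
    kind: InstanceGroup
    metadata:
      name: nodes
      labels:
        kops.k8s.io/cluster: example.mydomain.com   # hypothetical cluster name
    spec:
      role: Node
      machineType: t3.medium        # placeholder instance size
      minSize: 3
      maxSize: 3
      image: ami-0123456789abcdef0  # placeholder Flatcar AMI for your region
      subnets:
      - us-east-1a                  # placeholder zone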

Containers/Docker

In some ways this decision seems self-explanatory, but I did originally consider going with VPSes configured by SaltStack; that option lost out to the several major advantages of containers. Containers are immutable by design, which takes away worries about configuration drift. We've also all had the experience of packages with conflicting dependencies, and containers keep each service's dependencies isolated. On top of this, I can use the same container anywhere, from my local workstation to staging to production, and it stays the same, without weird local changes breaking things as it moves between environments. Containers also make scaling easier: done properly, with one service per container, you can scale just the service that needs it instead of spinning up another VPS with every service installed.
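
To make the one-service-per-container point concrete, here's a hedged sketch using Docker Compose (not my production setup): each service lives in its own container, so you can scale the api on its own without touching anything else.

    # Sketch: two independent services; scale the api alone with
    #   docker compose up --scale api=3
    services:
      web:
        image: nginx:1.25           # placeholder front-end image
        ports:
          - "8080:80"
      api:
        image: my-api:latest        # hypothetical application image
        environment:
          - LOG_LEVEL=info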

The Orchestration

Kubernetes (K8s)
Container orchestration brings high availability to the masses, and K8s is no exception. Say your container crashes for whatever reason: K8s will automatically restart it. It also helps with rollouts, whether the change is to infrastructure or code, only proceeding when it knows the minimum number of pods (e.g. 50%) is healthy. So if you make a typo that prevents Apache from starting, the rollout pauses and the existing working pods keep serving traffic while you fix the problem.
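
Here's a minimal sketch of how that looks in a Deployment (the names and image are placeholders, not my real config): the strategy block tells Kubernetes how many pods it's allowed to take down at once during a rollout.

    # Sketch: keep at least half the pods available during any rollout.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                      # hypothetical name
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 50%        # never take down more than half the pods
          maxSurge: 1
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: httpd
            image: httpd:2.4         # placeholder Apache image
            ports:
            - containerPort: 80
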
Kubernetes uses YAML files, which bring several advantages. Like any infrastructure-as-code approach, they can be tracked in version control, giving you the full history of changes. And if done properly, you can hand the YAML configs to another person and they should be able to spin up matching infrastructure with a few commands.
Since Kubernetes is open source and the biggest container orchestration platform, it usually gets new features and tool support first. The tool I've been eyeing is Istio, which among many other features gives you granular control over routing, enabling things like A/B testing.
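
For example, here's a hedged sketch of an Istio VirtualService (not something I'm running yet) that splits traffic between two versions of a service by weight, which is the building block for A/B tests and canary releases; the host and subset names are hypothetical, and the subsets would be defined in a separate DestinationRule.

    # Sketch: send 90% of traffic to v1 and 10% to v2 of a service.
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: web                      # hypothetical name
    spec:
      hosts:
      - web                          # in-mesh service name (placeholder)
      http:
      - route:
        - destination:
            host: web
            subset: v1
          weight: 90
        - destination:
            host: web
            subset: v2
          weight: 10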

Conclusion
While my current tech stack is a good starting place, there's much more to do. At the moment it handles traffic quite well, but a sudden influx would probably slow it to a crawl, or even take it down. In the future I want it to auto-scale to meet demand at any time. I also want to rework my processes to allow integrated testing of the infrastructure, such as destroying the containers and underlying EC2 instances and redeploying every night, and perhaps using Chaos Monkey or something similar to make sure it can operate without interruption.
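
A likely first step toward that auto-scaling goal, sketched here under the assumption of a simple CPU-based policy since I haven't built it yet, is a HorizontalPodAutoscaler that grows and shrinks a Deployment with load; the target name and thresholds are placeholders.

    # Sketch: scale the "web" Deployment between 2 and 10 pods on CPU load.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web                      # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # placeholder threshold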

Now that you know what my infrastructure stack looks like, have a look at my blog post about the services I use and why I chose them.
