My website services stack


Introduction

In this post I will talk about the services that run my sites, along with the reasoning behind each. As with my website infrastructure post, I won't be doing any how-tos; instead this is a bird's-eye view of how my stack is implemented.

Most of my dynamic code is written in PHP, from my registration/social site to my contact form. It's a well-tested language, which made it a very good language to learn and program with. In addition, many frameworks and libraries are written in PHP, and let's not forget that some of the biggest platforms in the history of the web were built with it, like WordPress, Wikipedia, and Facebook (via HHVM).
Although PHP is a great and powerful language, I plan on moving to Node.js for a variety of reasons. One of the beauties of Node is that it's much easier to build applications with a clean separation of services, which in turn makes them much easier to scale. I also like having one language to run everything, on both the frontend and the backend.

I use Nginx as the front-end web server. Although I've managed hundreds of Apache servers, I find Nginx much easier to configure and more lightweight in general. There's also no doubt it's gaining market share, which means you'll find it in an increasing number of businesses, big and small, and because of that footprint the community is helpful. To actually process PHP I use PHP-FPM. The two run together in a pod, load balanced across servers to reduce single points of failure.
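
To give a rough idea of how the two fit together, here's a minimal sketch of an Nginx server block handing PHP requests off to PHP-FPM; the hostname, paths, and port are illustrative, not my exact config.

    # Sketch only: Nginx serves static files and hands .php requests
    # to PHP-FPM over FastCGI. Host, root, and port are placeholders.
    server {
        listen 80;
        server_name example.com;
        root /var/www/html;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        location ~ \.php$ {
            include fastcgi_params;
            # PHP-FPM container listening in the same pod
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }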

There are two types of data I need to manage in my stack: session data to track logins, and application data like forum posts, timelines, and messages.

When I made my site load balance across multiple pods, I ran into the problem of session persistence. When you keep data like login state in server-side sessions, it isn't shared across the different servers. That leaves two options: a stateless solution like JWTs, or a scalable shared store like Redis. The comparison is a whole other article in itself, but after tons of research I chose Redis for a variety of reasons, and set up a Redis cluster operator running a distributed cluster.
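
For illustration, here's roughly what pointing PHP sessions at Redis looks like, assuming the phpredis extension is installed; the hostname and port are placeholders.

    <?php
    // Sketch only: store PHP sessions in Redis instead of local files,
    // so every pod behind the load balancer sees the same session.
    // Requires the phpredis extension; host and port are placeholders.
    ini_set('session.save_handler', 'redis');
    ini_set('session.save_path', 'tcp://redis.internal.example:6379');

    session_start();
    $_SESSION['user_id'] = 42; // now readable from any pod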

For my SQL stack I chose RDS. Anyone who has set up and maintained database servers knows it's a lot of work; there's even a whole job dedicated to it, the DBA. Fortunately, with RDS you largely don't need one: AWS manages the instance from setup through patching and tuning. In addition, you can replicate across availability zones and even to other SQL servers in another cloud provider or on premises.
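
As a rough sketch, standing up a Multi-AZ MySQL instance from the CLI looks something like this; every identifier and size below is a placeholder.

    # Sketch only: create a Multi-AZ MySQL instance so AWS keeps a
    # synchronous standby in another availability zone. All names and
    # sizes are placeholders; the password would come from Parameter
    # Store in practice (more on that below).
    aws rds create-db-instance \
        --db-instance-identifier mysite-db \
        --engine mysql \
        --db-instance-class db.t3.micro \
        --allocated-storage 20 \
        --master-username admin \
        --master-user-password "$DB_PASSWORD" \
        --multi-az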

Finally, I use AWS Parameter Store and KMS to store encrypted credentials that get pulled at runtime. The genius part is that you no longer have to hunt through your code to change every instance of a credential: you change it once in Parameter Store, rebuild your pods, and all of your code has the latest credentials.
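
As a sketch, fetching a decrypted credential at runtime with the AWS SDK for PHP looks something like this; the parameter name and region are placeholders.

    <?php
    // Sketch only: pull a KMS-encrypted credential from Parameter
    // Store at runtime (AWS SDK for PHP v3). The parameter name and
    // region are placeholders.
    require 'vendor/autoload.php';

    use Aws\Ssm\SsmClient;

    $ssm = new SsmClient(['version' => 'latest', 'region' => 'us-east-1']);

    $result = $ssm->getParameter([
        'Name'           => '/myapp/db/password',
        'WithDecryption' => true, // KMS decryption happens server-side
    ]);

    $dbPassword = $result['Parameter']['Value'];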

For load balancing I chose Traefik running behind an ELB instead of an ALB, because I didn't want to deal with vendor lock-in. Traefik was built for containers and microservices, so it works perfectly with my setup. It has also benchmarked faster than Nginx in some load-balancing scenarios, partly because it only does load balancing and routing and not much else. On top of that, the Nginx container didn't work on my ARM cluster when I tested it, which made Traefik the clear choice.
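
For a rough idea of how sites get routed through it, a standard Kubernetes Ingress only needs to name Traefik as its ingress class; the host, service name, and port below are placeholders.

    # Sketch only: route a site through Traefik with a plain Ingress.
    # Host, service name, and port are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mysite
    spec:
      ingressClassName: traefik
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: mysite-nginx
                    port:
                      number: 80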

The only downside with Traefik is that the open-source version can't coordinate Let's Encrypt certificate generation across load-balanced replicas. Using cert-manager, I'm able to not only automate the generation of Let's Encrypt certificates but also have them work across the whole load-balanced setup.
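
A cert-manager issuer for Let's Encrypt looks roughly like the sketch below, with Traefik answering the HTTP-01 challenge; the email address and resource names are placeholders.

    # Sketch only: a ClusterIssuer that obtains Let's Encrypt
    # certificates via the HTTP-01 challenge, solved through Traefik.
    # Email and names are placeholders.
    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com
        privateKeySecretRef:
          name: letsencrypt-account-key
        solvers:
          - http01:
              ingress:
                class: traefik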


Conclusion

You can see this all in action on my sites, and both my code and configuration files are on my GitHub. While I think my stack is impressive, I know it will evolve as I learn more efficient ways of running my sites and as new technology gets developed.
