Hosting WordSesh 2014 – Challenge Accepted!

A few months ago, the WordSesh organizers contacted us asking if we would host their online conference. Needless to say, we were quite excited to help this great WordPress event happen. The project was very interesting from a technical point of view too, as we needed to ensure that thousands of visitors would be able to follow the free live stream for 24 hours without any downtime or other technical issues.

What is WordSesh and why is it hard to host such an event?

WordSesh is a free, 24-hour online conference that gathers some of the best WordPress speakers from all over the world in a non-stop session marathon. This year was the third edition of the conference, and it was expected to be the biggest one so far.

This meant that during the 24 hours of the event, the website would experience an enormous and not easily predictable traffic surge. It was crucial that any traffic increase during this time frame be handled seamlessly, because even the shortest downtime could mean a serious failure for the event. Ten minutes of downtime on a normal website with steady traffic throughout the year might go unnoticed, but you can imagine how detrimental those same ten minutes could be if they occurred during the most visited session of the event.

We chose Linux containers for infinite scalability

Since we didn’t want to risk any downtime for WordSesh, we opted for a hosting technology that we love for its almost unlimited ability to scale on the fly without downtime: Linux containers. Using containers gave us the peace of mind that, no matter what happened, we would be able to add more resources both vertically and horizontally to each part of the infrastructure we built for WordSesh. At any moment, we had people on duty ready to add more resources or deal with any other potential issues.

We built a redundant infrastructure from the start

Basically, we had two load-balancing containers, two PHP-FPM containers, and two MySQL containers to handle all the traffic during the event.

The two load-balancing containers, running NGINX, distributed the incoming connections to the containers behind them. Since this layer was configured to scale horizontally, we were able to add new load balancers in seconds if high load was detected. The load balancers had caching enabled for static resources only and were configured to work with the Cloudflare CDN service.
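
To give an idea of what this looks like, here is a simplified NGINX sketch of a load balancer distributing requests between two backends and caching static files only. The upstream addresses, cache paths, zone name and domain are placeholders for illustration, not the actual WordSesh configuration:

```nginx
# Defined in the http block: a small on-disk cache used for static assets only.
proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=1h;

# The two PHP containers sitting behind this load balancer.
upstream wordsesh_backend {
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

server {
    listen 80;
    server_name wordsesh.example;

    # Static resources are cached on the load balancer and given long
    # expiry headers so Cloudflare and browsers can cache them as well.
    location ~* \.(css|js|png|jpg|jpeg|gif|svg|woff)$ {
        proxy_pass http://wordsesh_backend;
        proxy_cache static_cache;
        proxy_cache_valid 200 1h;
        expires 1h;
    }

    # Everything else is passed straight to the PHP containers.
    location / {
        proxy_pass http://wordsesh_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Adding another load balancer or another backend is then just a matter of starting one more container and adding one more line to the relevant block.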

Behind the load balancers, we connected two PHP-FPM containers with opcode caching enabled and a Memcached service running. Memcached was particularly helpful for the Gravatars of the hundreds of visitors shown on the main page of the website. Displaying all these Gravatars resulted in a high number of database queries, and Memcached let us serve most of them from memory instead.
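
As an illustration of how Memcached helps here, the sketch below uses the standard WordPress object cache API (wp_cache_get / wp_cache_set), which stores its data in Memcached when a Memcached object-cache drop-in is installed. The function name, cache group and expiry time are made up for the example; the actual WordSesh code may have looked different:

```php
<?php
// Hypothetical helper: build the attendee avatar list once, then serve it
// from Memcached instead of querying the database on every page load.
function wordsesh_get_attendee_avatars() {
    $avatars = wp_cache_get( 'attendee_avatars', 'wordsesh' );

    if ( false === $avatars ) {
        $avatars = array();

        // The expensive part: fetch every registered user and build the
        // Gravatar markup for each of them.
        $users = get_users( array( 'fields' => array( 'ID', 'user_email' ) ) );
        foreach ( $users as $user ) {
            $avatars[ $user->ID ] = get_avatar( $user->user_email, 48 );
        }

        // Keep the result in the object cache (Memcached) for 10 minutes.
        wp_cache_set( 'attendee_avatars', $avatars, 'wordsesh', 600 );
    }

    return $avatars;
}
```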

Behind the PHP-FPM containers, there were two MySQL containers configured with master/slave replication. As you can see, everything was set up with plenty of redundant resources from the start to handle traffic spikes. I would also like to mention that the different containers were located on different host nodes as an extra precaution.
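
For reference, a classic master/slave setup needs only a few lines of MySQL configuration; the fragments below use illustrative values, not the settings we ran for WordSesh:

```ini
# my.cnf on the master container: write binary logs so changes can be replicated.
[mysqld]
server-id = 1
log_bin   = mysql-bin

# my.cnf on the slave container: a unique server-id, a relay log and read-only mode.
[mysqld]
server-id = 2
relay_log = mysql-relay-bin
read_only = 1
```

The slave is then pointed at the master with a CHANGE MASTER TO statement and started with START SLAVE, after which it keeps a continuously updated copy of the data that can take over if the master container runs into trouble.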

We added state-of-the-art dynamic caching

Finally, we wanted to enable dynamic caching so that we could handle the maximum number of simultaneous connections to the site before needing to add new hardware. As you probably know, the main challenge when implementing a dynamic caching system is purging the cache whenever a change occurs on the site. As part of our SuperCacher service we have developed a very effective WordPress plugin that handles this challenge, but it was designed for regular sites that reside on a single server and could not purge the cache efficiently in such a complex infrastructure with multiple MySQL and PHP containers.

This is why we used a technique called stale cache to regenerate the cached content every 10 seconds with a single connection to the PHP containers. This means that we were serving cached content to the WordSesh visitors all the time, but that cache was at most 10 seconds old. This worked great, and the containers easily handled all the traffic coming their way.
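
The stale-cache behaviour can be expressed with a few NGINX directives. The sketch below assumes the full-page cache lives on the load balancers and reuses the hypothetical wordsesh_backend upstream from the earlier example; the paths and zone names are again placeholders:

```nginx
# Defined in the http block: the cache zone used for whole pages.
proxy_cache_path /var/cache/nginx/pages keys_zone=page_cache:50m max_size=2g;

server {
    listen 80;

    location / {
        proxy_pass http://wordsesh_backend;
        proxy_cache page_cache;

        # A cached page counts as fresh for 10 seconds.
        proxy_cache_valid 200 10s;

        # When a page expires, keep serving the stale copy while it is
        # being refreshed, and let only one request per URL hit PHP.
        proxy_cache_use_stale updating;
        proxy_cache_lock on;

        # Useful while testing: shows HIT, STALE, UPDATING or MISS.
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

With this in place, visitors are effectively always served from the cache, and the PHP containers see roughly one refresh request per page every 10 seconds instead of one request per visitor.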

In conclusion, I am happy to say that this solution handled the traffic perfectly and there weren’t any problems during the WordSesh event. It was a great experience for our team and once again proved that we can host huge sites with massive traffic spikes!
