
Scaling your platform and performance tweaks

Under high workloads or increasing traffic to your website/application you may need to improve your basic setup. We offer several ways to do this:

Scale horizontally

We recommend redundant systems. That is why our default setup includes two application servers: not only is reliability improved (if one appserver goes down, the other continues to serve requests), but load is also balanced between the two application servers. If your application servers run out of resources, the easiest way to improve performance is to start an additional application server instance and add it to the load balancing.

In most cases this is better than just scaling your servers vertically, because it also increases redundancy: if one of your vertically scaled application servers goes down, the remaining one may not be able to bear the workload, and you would need huge VMs to compensate for that. Your setup should be designed to withstand the outage of one of your application servers. Scaling horizontally also allows you to increase performance for a short time without changing the setup on your existing machines (temporarily boot additional instances).

Scale vertically

This is cost-efficient and in some cases more desirable than horizontal scaling. For example, if your application uses a large amount of memory, you may want to increase the available RAM. Another example is VMs working on complex calculations, which may benefit from additional CPU cores. If you want your hosting to withstand more traffic or to run additional services, you should think about scaling horizontally instead: adding more memory to run more services on a few machines only increases the risk of downtime or an outage.

HTTP caching on our Load Balancers

By using this technique, we are able to handle a six-digit number of concurrent visitors to a Ruby-backed website. You have to set your Cache-Control header correctly, and we have to enable HTTP caching on our load balancers. Custom solutions for controlling which content is cacheable are also possible. You can find out more about the caching setup here.
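
As a rough illustration of the application side, a Rails controller could set such a header with the standard `expires_in` helper (the controller and model names below are made up):

```ruby
class PagesController < ApplicationController
  def show
    # Allow shared caches (such as the load balancers) to store this
    # response for five minutes. Without "public: true" the response
    # would only be cacheable by the visitor's own browser.
    expires_in 5.minutes, public: true

    @page = Page.find(params[:id])
  end
end
```

The response then carries `Cache-Control: max-age=300, public`, which tells intermediate caches how long they may serve the stored copy without hitting your application again.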

DNS Round Robin

At seriously high traffic levels, the SSL overhead or the bandwidth usage could slow down the load balancers. If you expect more than 10,000 requests per second, please contact our operations team for consultancy.
How it works: configuring more than one A record for a hostname returns those addresses in a different order on every DNS request. We can configure these IP addresses on our load balancers for your vhost. Each IP address is assigned to a different load balancer, balancing the load between them. This does not affect reliability, as our load balancers can fail over on the IP level.
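
You can observe round robin yourself by querying the A records for a hostname. A minimal Ruby sketch using the standard library resolver (the hostname is a placeholder for your vhost):

```ruby
require 'resolv'

# Fetch all A records for a hostname. With DNS round robin, several
# addresses come back and their order varies between lookups, so clients
# spread their requests across the different load balancers.
hostname = 'www.example.com' # placeholder

records = Resolv::DNS.open do |dns|
  dns.getresources(hostname, Resolv::DNS::Resource::IN::A)
end

records.each { |record| puts record.address }
```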

Dedicated Database Instances

By default you will get databases on our shared database clusters. If your database uses a lot of memory, you have a huge workload in general, or you have special requirements for your database (e.g. the most recent PostgreSQL version or a specific vacuum configuration), we can set up a dedicated cluster for you. In this case please contact our operations team.

Memcached, Redis, …

Caching or key-value storage can improve the performance of your application if you implement them correctly. We can set up instances for these services on your application servers or on dedicated servers.
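
"Implemented correctly" usually means something like a cache-aside pattern: read from the cache first, compute on a miss, and write the result back with an expiry. A minimal sketch using the redis gem (host, key name and the example query are made up):

```ruby
require 'redis'
require 'json'

# Host and port are placeholders for the Redis instance set up for you.
redis = Redis.new(host: 'localhost', port: 6379)

def expensive_report
  # Stand-in for a slow database query or calculation.
  { 'generated_at' => Time.now.to_i, 'total' => 42 }
end

# Cache-aside: try the cache, fall back to the expensive computation,
# then store the result with a TTL so it expires automatically.
cached = redis.get('report:daily')
report =
  if cached
    JSON.parse(cached)
  else
    result = expensive_report
    redis.set('report:daily', result.to_json, ex: 300) # expire after 5 minutes
    result
  end

puts report
```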
