Posted over 4 years ago.

Scaling your platform and performance tweaks

Under high workloads or increasing traffic to your website or application, you may need to make infrastructural changes to improve your setup.

Scale horizontally

We strongly recommend redundant systems, which is why our default setup includes two application servers. This not only improves reliability (if one server is down, the other continues to serve requests) but also shares the load between the two servers. If your application servers run out of resources, the easiest way to improve performance is to start an additional application server instance and add it to the load balancing.
In most cases this is better than scaling your servers vertically, as it also further increases redundancy. If one of two vertically scaled application servers is down, the remaining one might not be able to handle the shared workload alone. You would need VMs with many resources to compensate, but those resources might then sit idle during day-to-day workloads, which is not very cost-efficient.
Your setup should be designed to withstand the outage of one of your application servers. Scaling horizontally also allows you to increase performance for a short time without changing the setup of your existing machines. For example, it is possible to temporarily create additional instances for an expected event.

Scale vertically

This is cost-efficient and in certain cases more desirable than horizontal scaling. For example, a memory-intensive application may benefit from more RAM, and complex calculation tasks may benefit from additional CPU cores. If you want your hosting to withstand more traffic or to run additional services, however, you should think about scaling horizontally instead.
Adding more memory in order to run more services on few machines increases the risk of downtime or an outage, because one server may not be able to handle the workload of two.

HTTP caching on our Load Balancers

By using this feature, we are able to handle a six-digit number of concurrent visitors to a Ruby-backed website. You have to set your Cache-Control header correctly, and we need to enable HTTP caching on our load balancers. Custom solutions for controlling the cacheable content are also possible. You can find out more about the caching setup here.
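To make responses cacheable by an upstream HTTP cache such as the one on the load balancer, the application has to send an appropriate Cache-Control header. A minimal Rack-style sketch, assuming a Rack-compatible app; the class name and the 5-minute max-age are illustrative values, not recommendations:

```ruby
# Hypothetical Rack endpoint that marks its response as publicly cacheable,
# so a shared HTTP cache (e.g. on the load balancer) may store and reuse it.
class CacheableApp
  def call(env)
    headers = {
      'Content-Type'  => 'text/html',
      # public: shared caches may store the response
      # max-age: how many seconds the cached copy counts as fresh
      'Cache-Control' => 'public, max-age=300',
    }
    [200, headers, ['<h1>Hello</h1>']]
  end
end
```

With a header like this, repeated requests for the same URL can be answered from the cache without hitting the application servers at all, which is what makes the six-digit visitor numbers possible.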

DNS Round Robin

At very high traffic volumes, the SSL overhead or the bandwidth usage could slow down even our load balancers. If you expect more than 10,000 requests per second, please contact our operations team.

How this works: configuring more than one A record for a hostname returns those addresses in a different order on every DNS request. We can configure those IP addresses on our load balancers for your vhost. Each IP address is assigned to a different load balancer, which spreads the traffic and SSL load between them. This does not affect reliability, as our load balancers can fail over at the IP level.
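The rotation behaviour described above can be sketched as follows. This is a simplified model of what a DNS server does with multiple A records, not our actual resolver code; the class name and the `192.0.2.x` documentation addresses are made up for illustration:

```ruby
# Model of DNS round robin: the same set of A records is returned on every
# query, rotated by one position each time. Clients that simply pick the
# first address end up spread across the different load balancers.
class RoundRobinResolver
  def initialize(addresses)
    @addresses = addresses
    @counter = 0
  end

  def resolve
    rotated = @addresses.rotate(@counter)
    @counter += 1
    rotated
  end
end

resolver = RoundRobinResolver.new(%w[192.0.2.10 192.0.2.11 192.0.2.12])
resolver.resolve.first # => "192.0.2.10"
resolver.resolve.first # => "192.0.2.11"
```

Since every query still returns the full address set, a client can fall back to the next address if one is unreachable, which is why the scheme does not hurt reliability.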

Dedicated Database Instances

By default you will get databases on our shared database clusters. If your database uses a lot of memory, you have a huge workload in general, or you have special requirements for your database (e.g. the most recent PostgreSQL version or a specific vacuum configuration), we can set up a dedicated cluster for you. In this case, please contact our operations team.

Memcached, Redis, ...

Caching or key-value stores can improve the performance of your application if you use them correctly. We can set up instances for these services on your application servers or on dedicated servers.
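The usual pattern for such services is cache-aside: look in the cache first, and only do the expensive work (and store the result) on a miss. A minimal sketch, using a plain Hash as a stand-in for a Memcached or Redis client; a real client (e.g. via the `dalli` or `redis` gem) would add expiry, eviction, and serialization:

```ruby
# Cache-aside sketch: check the cache before computing, store the result after.
# The Hash here is only a stand-in for a real Memcached/Redis connection.
class Cache
  def initialize
    @store = {}
  end

  # Returns the cached value for `key`, or runs the block, caches its
  # result, and returns it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = Cache.new
calls = 0
expensive = -> { calls += 1; "result" }

cache.fetch("report") { expensive.call } # miss: computes, calls == 1
cache.fetch("report") { expensive.call } # hit: calls is still 1
```

Implemented incorrectly (e.g. without expiry on data that changes), the same pattern serves stale results, which is why the implementation matters as much as the service itself.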

Owner of this card:

Claus-Theodor Riegg
Last edit:
about 1 month ago
by Marius Schuller
Posted by Claus-Theodor Riegg to opscomplete