By default, our load balancers pass all incoming requests to your application without throttling them in any way.
If you want to avoid situations where too many requests are sent to your application (or a part of it) in a short amount of time, we provide so-called "request rate limiting". Please be aware that HTTP request rate limiting alone is never full DDoS protection, as an attacker may exploit a variety of other vectors to slow down or overload your application.
If you want to establish an HTTP request rate limit for your website, just let our operations team know so we can enable it. We will also happily assist you in finding the right parameters for your requirements.
The simplest configuration is a single limit zone for your whole website/application. If you have more sophisticated rate limiting requirements, we can also set up multiple zones for your application and apply different rate limits to certain locations within your application vhost.
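In nginx terms, such a setup uses the `limit_req` module. A rough sketch of a vhost with one general zone and a stricter zone for a single location might look like this (zone names, paths, and rates are illustrative, not our actual defaults):

```nginx
# Define two zones keyed by client IP: one for the whole vhost,
# a stricter one for an expensive path such as a login form.
limit_req_zone $binary_remote_addr zone=app_general:10m rate=30r/s;
limit_req_zone $binary_remote_addr zone=app_login:10m rate=2r/s;

server {
    # Apply the general limit to all requests of this vhost.
    limit_req zone=app_general burst=20;

    location /login {
        # Apply the stricter limit only to this location.
        limit_req zone=app_login burst=5;
    }
}
```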
For each zone, the following parameters can be set: the zone key, the maximum request rate, the burst size, and the HTTP response code to return when the limit is exceeded.
If we configure a rate limit, each incoming request is matched using the "zone key", which is the user agent's IP address by default. The zone key is used to group and count requests that belong together, and eventually to reject requests that exceed the defined rate per zone.
While the user's IP is often a good fit for basic rate limiting, you could also use an API user, a session token, or any other available HTTP header (or most other nginx variables) as the zone key.
Examples of zone keys for different limit targets:
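A few common choices, expressed as nginx variables (the header and cookie names are examples; which key fits depends on what you want to limit):

```nginx
# Per client IP (the default case; $binary_remote_addr is the
# client IP in a compact binary form):
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

# Per API user, e.g. via an API key header:
limit_req_zone $http_x_api_key zone=per_api_key:10m rate=5r/s;

# Per session, e.g. via a session cookie:
limit_req_zone $cookie_session_id zone=per_session:10m rate=10r/s;
```

Note that with a header- or cookie-based key, requests that lack the header or cookie produce an empty key value and are therefore not rate limited.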
For each request limit zone you can define a single zone key (which can also consist of multiple variables), the maximum number of requests per second, a burst amount, and the desired HTTP response code to return once the rate limit is hit. You can use multiple zones per vhost or location. Requests with an empty key value do not count toward the rate limit.
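Putting these parameters together, a zone with a combined key, a burst allowance, and a custom response code could look like this (all values are illustrative):

```nginx
# Key combines client IP and User-Agent, so different browsers
# behind the same IP are counted separately; 10 requests/second per key.
limit_req_zone $binary_remote_addr$http_user_agent zone=combined:10m rate=10r/s;

server {
    # Allow short bursts of up to 20 extra requests without delay,
    # and answer with 429 Too Many Requests instead of the default 503.
    limit_req zone=combined burst=20 nodelay;
    limit_req_status 429;
}
```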