Rate limiting with iptables
can you suggest some good rules for rate limiting with iptables?
I'm running LoadRunner against my server, and over a simple DSL connection
my server stopped responding to other requests until I stopped the LoadRunner load.
I configured Apache with a maximum of 20 workers, so my server serves 20 requests at a time,
but with a tool like LoadRunner or JMeter you can saturate my server over a simple DSL line.
I want to limit too many connections from the same IP.
Is it possible? I know it is, but how do I do it?
Thanks.
12 Replies
# if this source IP already has 5 hits in the "http" list within the last 60 seconds,
# refresh its timestamp and reject the new connection attempt
iptables -A INPUT -p TCP --dport 80 --syn -m recent --name http --update --seconds 60 --hitcount 5 -j REJECT
# otherwise, record this new SYN in the "http" list
iptables -A INPUT -p TCP --dport 80 --syn -m recent --name http --set
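For what it's worth, the xt_recent module that backs these rules exposes its state under /proc, which makes it easy to check whether they are actually matching; the address below is just a placeholder:

# watch the tracked source IPs and their hit counters for the "http" list
cat /proc/net/xt_recent/http
# manually release an IP that got blocked (192.0.2.10 is a placeholder address)
echo -192.0.2.10 > /proc/net/xt_recent/http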
What would be good rate limiting settings?
Limiting a user who makes 5 connections in 60 seconds is no good; a user normally makes far more than 5 connections in a minute while surfing a web page.
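If you want a much higher ceiling, the hashlimit match is a better fit than recent, whose hit counter is capped at 20 by default; a rough sketch, with purely illustrative numbers:

# allow each source IP a burst of 100 new HTTP connections, then drop anything
# above 20 new connections per second from that IP (thresholds are illustrative)
iptables -A INPUT -p tcp --dport 80 --syn -m hashlimit --hashlimit-mode srcip --hashlimit-name http-flood --hashlimit-above 20/second --hashlimit-burst 100 -j DROP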
@sednet:

> Rate limiting incoming web connections is a rather odd thing to do. Have you considered using a better performing web server like nginx or putting varnish or maybe cloudflare in front of your site?
>
> If you do want to rate limit, at least set the max connections to something pretty high, as web browsers do all sorts of parallel connection stuff to load sites faster. Remember that a source IP is not a user. If NAT is involved at the client end it could be any number of users.
Give me your site IP and I will knock out your server with a mobile connection.
If you don't rate limit the server, you can use nginx and whatever else you want: the workers will be saturated as soon as someone attacks you.
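sednet's "set it pretty high" advice could be sketched with the connlimit match; the ceiling of 100 simultaneous connections per source IP is an assumption, and with NAT in play you may want it higher still:

# reject a source IP only once it holds more than 100 simultaneous port-80 connections;
# rejecting with a TCP reset fails fast instead of leaving clients hanging
iptables -A INPUT -p tcp --dport 80 --syn -m connlimit --connlimit-above 100 -j REJECT --reject-with tcp-reset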
@sblantipodi:

> Give me your site IP and I will knock out your server with a mobile connection using Apache JMeter.
Okay. My IP is 127.0.0.1
I'm not interested in anyone's IP; I'm interested in the techniques used to survive a max connections attack.
@akerl:

> Limiting HTTP connections at the iptables level isn't really a viable solution. HTTP-level attacks use tricks besides raw connection count to affect your server, and the fix there is to adjust your web server and site to avoid being vulnerable to those tricks. Primarily, a client shouldn't be able to cause massive amounts of work per request. The rate limiting you'd need to impose at the iptables level to stop those kinds of attacks would effectively mean dropping all HTTP traffic, which pretty much amounts to denying access to your own service.
>
> - Les
Thanks for the answer.
What is the best way to survive a simple Apache JMeter attack?
Using Apache JMeter over a mobile connection I can knock out most "simple sites", mine included.
How can I survive this kind of software?
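One partial, Apache-side mitigation, covering slow-request tricks rather than raw request floods, is mod_reqtimeout; this sketch assumes a Debian-style Apache layout, the config file name is arbitrary, and the values are the documented examples rather than tuned numbers:

# enable mod_reqtimeout and kill connections that feed headers or body too slowly
a2enmod reqtimeout
echo 'RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500' > /etc/apache2/conf-available/reqtimeout-local.conf
a2enconf reqtimeout-local
apachectl graceful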
@akerl:

> using a better performing web server like nginx or putting varnish or maybe cloudflare in front of your site

nginx performs better, but it dies just like Apache as the requests increase.
Improving performance is not a good solution for me.

> the fix there is to adjust your web server and site to avoid being vulnerable to those tricks. Primarily, a client shouldn't be able to cause massive amounts of work per request.

This is not really an answer.
How can I fix my web server? How can I stop a client from doing massive amounts of work per request?
The real problem is not a massive amount of work per request but a massive number of requests, each with a really small amount of work.
Many, many requests with a small amount of work each can saturate your workers and make your site unresponsive to others.
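If the worry is exactly this, many cheap requests eating all the workers, one blunt option is a global ceiling on new connections in front of any per-IP rules; note that during an attack this throttles legitimate users too, and the numbers are illustrative:

# accept at most 50 new port-80 connections per second overall (burst of 200),
# then drop the excess; this caps total worker load but hurts real users under attack
iptables -A INPUT -p tcp --dport 80 --syn -m limit --limit 50/second --limit-burst 200 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --syn -j DROP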
@sblantipodi:

> The real problem is not a massive amount of work per request but a massive number of requests, each with a really small amount of work.
> Many, many requests with a small amount of work each can saturate your workers and make your site unresponsive to others.

This remains unanswered.
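Whatever limits you settle on, it's worth replaying the attack yourself before trusting them; ApacheBench can stand in for JMeter here (the URL and the numbers are placeholders):

# hammer the server with 100 concurrent clients for 10000 requests and watch
# how many get rejected or time out once the rate limiting rules are loaded
ab -n 10000 -c 100 http://www.example.com/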