token bucket filter - slowing down network too much
Running this command
tc qdisc replace dev eth0 root tbf rate 500kbps latency 50ms burst 1540
should slow the interface down to 500kbytes/sec, but it actually slows down to about 20kbytes/sec.
Any ideas?
5 Replies
@rsk:
kbps = kilo BITS per second. One byte is eight bits, so you should use "4000kbps" to get 500 kbytes/s… or check tc's documentation to see whether it lets you specify values in kbytes.
If you have nothing to contribute go away.
tc man page:
"All parameters accept a floating point number, possibly followed by a unit.
Bandwidths or rates can be specified in:
kbps                   Kilobytes per second
mbps                   Megabytes per second
kbit                   Kilobits per second
mbit                   Megabits per second
bps or a bare number   Bytes per second"
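Given those units, 500 kbytes/s can be written with either suffix; the conversion is just a factor of eight. A quick sketch (the two `tc` rates in the comments are assumed interchangeable only if the man page excerpt above holds):

```shell
#!/bin/sh
# Convert 500 kilobytes/s into kilobits/s: one byte is eight bits.
KBYTES_PER_SEC=500
KBIT_PER_SEC=$((KBYTES_PER_SEC * 8))
echo "${KBYTES_PER_SEC}kbps == ${KBIT_PER_SEC}kbit"
# So these two rate specifications should be equivalent in tc (not run here):
#   ... tbf rate 500kbps ...
#   ... tbf rate 4000kbit ...
```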
@gmt:
> If you have nothing to contribute go away.
My personal preference is for units of bits per yoctosecond, or possibly even bits per zeptosecond.
James
burst: Also known as buffer or maxburst. Size of the bucket, in bytes. This is the maximum amount of bytes that tokens can be available for instantaneously. In general, larger shaping rates require a larger buffer. For 10mbit/s on Intel, you need at least a 10kbyte buffer if you want to reach your configured rate!

If your buffer is too small, packets may be dropped because more tokens arrive per timer tick than fit in your bucket. The minimum buffer size can be calculated by dividing the rate by HZ.
Also, as the last sentence above shows, the tbf man page implies a heavy dependence on the kernel's HZ. I think the Linode kernels are tickless (at least mine is). I'm not sure what effect that has, but it may be affecting your rates.
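Plugging the original command's numbers into that rule of thumb shows why `burst 1540` is far too small. A back-of-envelope sketch, assuming HZ=100 (a common timer frequency) and tc's 500kbps meaning 500 kilobytes/s with k = 1024:

```shell
#!/bin/sh
# Minimum tbf burst per the man page rule: rate in bytes/s divided by HZ.
RATE=512000            # 500 kbytes/s, treating k as 1024 (assumption)
HZ=100                 # assumed kernel timer frequency
MIN_BURST=$((RATE / HZ))
echo "minimum burst: ${MIN_BURST} bytes (the command used 1540)"
```

With a burst of at least that value (say, `burst 6000`), the shaper should be able to sustain the configured rate instead of throttling far below it.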
Xen VMs don't have very precise clocks, so that might be one reason why the normally reliable tbf is not performing well.
> I also set the burst sizes manually and the speed again becomes exceptionally low.
This is exactly what I'm experiencing: even specifying huge bandwidths results in slow speeds.
Looks like a Xen bug.