Time to first byte more than doubled on a new Linode
I created a new Linode in the UK and landed on the new hardware with an E5-2670.
I had an IPB board with Debian Squeeze at another provider and set everything up just like it was before; the only difference is worker_processes in nginx, which I changed from 4 to 8.
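For reference, the relevant bit of my nginx.conf looks roughly like this (only worker_processes was changed; the other values are just for illustration):

    # /etc/nginx/nginx.conf (excerpt)
    worker_processes 8;    # was 4 at the old provider
    events {
        worker_connections 1024;    # illustrative, not tuned
    }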
Now my time to first byte is almost 2x to 3x what it was before.
Why is this happening? What should I do, or where should I start?
Thanks
21 Replies
Before I was seeing 0.3 to 0.4 seconds, and now I have 0.7 to 0.9.
When I change worker_processes back to 4 I get the same values.
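For reference, I'm measuring it with curl against the homepage (with my real domain in place of example.com):

    curl -o /dev/null -s -w 'TTFB: %{time_starttransfer}s\n' http://example.com/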
Check MySQL. MySQLTuner is a wonderful tool.
Check your DNS speed with one of the many online testers. A good website tester should also tell you about DNS delays.
Check your system isn't doing something really stupid with top or longview.
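Roughly the kind of quick checks I mean (swap in your own domain and paths):

    # MySQL: run mysqltuner and follow up on whatever it flags
    perl mysqltuner.pl
    # DNS: see how long a lookup of your own domain takes
    dig example.com | grep 'Query time'
    # General sanity check: load, steal (st) and the top consumers
    top -b -n 1 | head -20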
Today TTFB is 0.2, and this is the expected value for the power we have at Linode.
Could this be related to a bad neighborhood on my node?
@nfn:
Hi,
Today TTFB is 0.2, and this is the expected value for the power we have at Linode.
:) Could this be related to a bad neighborhood on my node?
If you changed nothing and your time to first byte went down to a quarter of what it was yesterday, then it has to be due to host load. It's outside your control; you can only talk to support.
@sednet:
@nfn: Hi,
Today TTFB is 0.2, and this is the expected value for the power we have at Linode.
:) Could this be related to a bad neighborhood on my node?
If you changed nothing and your time to first byte went down to a quarter of what it was yesterday, then it has to be due to host load. It's outside your control; you can only talk to support.
That, and/or caches have "warmed up."
For what it is worth, though, I get decent TTFB times out of WordPress without static file caching or PHP-FPM. APC is essential to that, since otherwise the startup overhead for each request is about 50% of the cost of the request processing time.
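For anyone setting that up, it's only a couple of lines of php.ini (the Debian path and cache size here are just a starting point):

    ; /etc/php5/conf.d/apc.ini  (path varies by distro/packaging)
    extension=apc.so
    apc.enabled=1
    apc.shm_size=64    ; in MB on older APC builds; newer ones also accept a suffix like 64M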
Static file caching really helps with WordPress.
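If you're doing that with W3TC's disk-enhanced page cache under nginx, the gist is a try_files that checks the cache directory before ever hitting PHP. Very roughly (the exact cache path depends on your W3TC version and settings):

    # inside the server { } block
    set $w3tc_cache /wp-content/cache/page_enhanced/$host$uri/_index.html;
    location / {
        # serve the pre-generated HTML if it exists, otherwise fall back to WordPress
        try_files $w3tc_cache $uri $uri/ /index.php?$args;
    }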
I wish I hadn't migrated a bunch of my nodes… they were performing better before because I was getting less CPU steal %!
Side-note: the ones I didn't migrate now have even less CPU contention on them.
Just like with almost any other "new, cool service", the initial rush of users ends up overloading things.
+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM, and now the CPU and disk I/O are horrid.
I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware; after migrating it's at 30 minutes and counting. Watching htop, I'd say I'm getting at least 90% CPU steal.
:(
@MichaelMcNamara:
I was getting about 1500ms or greater TTFB values until I migrated from Apache to Nginx utilizing W3TC on WordPress. With W3TC there's no need to fire up PHP for every page visit; Nginx just serves up the static HTML disk-cache files. It brought my TTFB values down to around 150ms (that's a huge performance increase!).
http://blog.michaelfmcnamara.com/2012/11/apache2-mod_php-vs-nginx-php-fpm/
A better option is just to ditch WordPress and use Pelican. It can import your WordPress blog automatically and is ridiculously fast (static HTML only). Plus, you can now do your blogging using Vim and Git.
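The import itself is only a couple of commands against a WordPress XML export, roughly like this (flags may differ a bit between Pelican versions):

    pip install pelican markdown
    # convert a WordPress export (Tools -> Export in wp-admin) into Pelican content
    pelican-import --wpfile -m markdown -o content wordpress-export.xml
    # build the static site
    pelican content -o output -s pelicanconf.py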
@Stever:
+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM, and now the CPU and disk I/O are horrid.
I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware; after migrating it's at 30 minutes and counting. Watching htop, I'd say I'm getting at least 90% CPU steal.
:(
Sounds to me like the new nodes are getting hammered with all the migrations. I'd expect it to calm down somewhat once this huge mass of migrations has finished. I'd imagine it's putting quite a load on their internal network, and on the host machines' I/O in particular.
@Stever:
+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM, and now the CPU and disk I/O are horrid.
I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware; after migrating it's at 30 minutes and counting. Watching htop, I'd say I'm getting at least 90% CPU steal.
:(
I had exactly the same issue post-upgrade. I opened a support ticket and had a new migration to a new machine within 10 minutes, and I haven't had a problem since. I think I'm back on the old hardware (but with the increased RAM); I wasn't CPU bound anyway, and it's better than being completely unusable like it was post-migration.
It doesn't help that Linodes now have 8 virtual cores; I'm not sure what the reasoning behind that decision was. Even though the core count is doubled on the new hosts, keeping Linodes at 4 virtual cores would probably have reduced contention.
This reminds me of something from when I used VMware (I know it works a little differently than Xen, but here goes):
We constantly had "discussions" between the network group (in charge of the VMware infrastructure) and the database admins. The DBAs wanted more CPU cores for their systems (and more RAM), while the network group wanted fewer cores (the RAM was understandable in this situation). To settle the issue, a test environment was set up with both configurations and the DBAs were asked to test the machines. It turned out that the guests with fewer CPU cores performed better than the ones with more (2 cores vs. 6 cores on a host with quad 6-core Xeon processors). The issue in VMware was that all "requested" cores had to have an available cycle before the host would give the guest its CPU time, so with 2 or 4 cores a guest got the cycles it needed sooner than with 6 cores (with multiple guest machines on the system; the only difference was the DB machines). So, at least with VMware, "more is not always better".
Edit: Xen may or may not have other interesting performance issues, but it 100% does not have that one.
Now, that's no worse than when we had 40x4 threads on 4x2 real cores, but there was an opportunity to reduce the contention there (by doubling the real core count and keeping the virtual core count the same at 4 per Linode).
I'm saying I'm not sure what the point of doubling the virtual core count was.
@Guspaz:
I'm saying I'm not sure what the point of doubling the virtual core count was.
It was marketing, and the fact that this change could be made without buying extra hardware.
I doubt there were many people who were CPU bound before the upgrade.