Bad CPU performance after a Linode resize, because of unallocated disk storage?

I recently upgraded a Linode 2GB to a Linode 8GB.
The Linode 2GB had initially come with 30 GB of storage.

On the Linode's dashboard there was an option to upgrade the storage capacity of the Linode 2GB from 30 GB to 50 GB.
I initiated and successfully completed the storage upgrade, but did not allocate the additional storage to the Linode's disk and swap partitions.

I then resized the Linode from the 2GB plan to the 8GB plan.
The process completed successfully, but again I did not allocate the additional storage space that came with the resize.

So my space allocation is currently sitting at:
Disk: 30 GB, Swap: 256 MB, Available: 160 GB
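
For anyone checking the same thing: what the VM actually has can be confirmed from inside the Linode. The unallocated space lives on the host side, so it won't show up here at all (this assumes the usual single-disk /dev/sda layout):

```
# Sizes of the block devices as presented to the VM; anything the plan
# includes beyond these totals is sitting unallocated on the host.
lsblk
# Size and usage of the mounted root filesystem.
df -h /
```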

The Linode has been running a MySQL server, and performance had been fine, as monitored through Longview and via the ps, top, and iotop commands.

But as it sits now, without my having made any changes to the MySQL configuration, performance has gotten much worse:

The total CPU usage now varies from 100% to 150%, with CPU time spent in I/O wait making up the majority of it.
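
For reference, the wait component can be watched directly; a quick way (nothing Linode-specific here):

```
# The "wa" column is the percentage of CPU time spent waiting on I/O;
# sample once per second, five times.
vmstat 1 5
# top reports the same figure as "wa" in the %Cpu(s) line.
top
```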

Running "free -m" I see I should have plenty of RAM available.

In running "iotop", I see command "[jbd2/sda-8]" taking up at least 50% I/O at any one time.

I've read some articles mentioning that heavy "jbd2" I/O can be an indication of a bad RAID configuration, or more generally a bad configuration of the disk resources.

I've been using the troubleshooting guide to aid me thus far:
https://www.linode.com/docs/troubleshooting/troubleshooting/#did-you-add-additional-storage

All that said, this leads me to think that because I never allocated any of the additional storage, the Linode is now running in an inefficient state.

Am I correct in thinking this? Will allocating the rest of the available resources fix the issue?

2 Replies

"I built a new garage; is that why my water heater isn't working well?"

These are completely unrelated things. The jbd2 process is responsible for maintaining the ext3/ext4 journal; if it's doing a lot of I/O, that means you're doing a lot of disk operations that require journaling (by default, that is everything except actually writing file data to disk: creating files, deleting files, etc.). If you're seeing a lot of iowait, that could be an indication of noisy neighbors doing a lot of I/O themselves.
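
If you want to see how much journaling your filesystem is actually configured to do, something like this works (assuming the root filesystem is ext4 directly on /dev/sda, as on a typical Linode):

```
# The data= mount option controls how much gets journaled: "ordered"
# (the default, which may not be listed explicitly) journals metadata
# only; "journal" also journals file data and produces far more jbd2 I/O.
grep ' / ' /proc/mounts
# Confirm the journal feature on the device itself.
sudo tune2fs -l /dev/sda | grep -i journal
```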

Thanks for the reply, @dwfreed.

It turned out that the increased wait load from the jbd2 process was due to some of the MySQL configuration/log settings I had in place from before.
By relaxing them as much as possible, I was able to bring the CPU time spent in wait down significantly (a sketch of the kind of settings involved is below).
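
I didn't keep the exact list, but for anyone in the same spot, the usual suspects are the durability-related options, since those drive fsync (and therefore journal) traffic. A minimal sketch, with illustrative values rather than a recommendation, as each one trades some crash safety for fewer disk flushes:

```
# Flush the InnoDB log to disk ~once per second instead of per commit.
mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
# Let the OS decide when to sync the binary log rather than every commit.
mysql -e "SET GLOBAL sync_binlog = 0;"
# Persist either change in my.cnf too, since SET GLOBAL is lost on restart.
```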

I'm not convinced, however, that the migration of the server during the upgrade was unrelated to the performance degradation.
Though the reallocation of disk space is probably unrelated, I suspect Linode has made some hardware changes related to their current server offerings

~ e.g. https://blog.linode.com/2018/05/17/updated-linode-plans-new-larger-linodes/

And assuming the upgrade migrated the server to a new VM, potentially on new hardware, the configuration I had in place was no longer optimal.
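
E.g. (hypothetical numbers): an InnoDB buffer pool sized for the old 2 GB plan would leave most of the new 8 GB of RAM unused and keep MySQL going to disk for reads it could serve from memory:

```
# Check the current buffer pool size (reported in bytes).
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
# On MySQL 5.7+ this can be resized online; 4 GiB might suit an 8 GB
# plan (illustrative figure; persist the change in my.cnf as well).
mysql -e "SET GLOBAL innodb_buffer_pool_size = 4294967296;"
```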
