Is there a performance benefit to several small Linodes vs. one larger Linode?
After 15+ years working with bare metal, I've made the jump to full virtualization. I run a fairly large website using MariaDB, PHP-FPM, Redis, Elasticsearch, and object storage.
For years, I've always run 2-3 servers, separating the DB from the web tier. This was done mainly to get RAM and RAID configurations best suited to DB loads and file storage. But after a month on Linode, I'm learning just how different working in virtualization is. I initially separated out services, putting Redis on its own high-memory Linode and running standard Linodes of different sizes for the web and DB servers. As I've watched and tuned performance, I've come to the conclusion I can run everything on a standard 96GB Linode with room to spare.
I'm running an InnoDB buffer pool large enough to hold the entire DB in RAM, which puts MariaDB's maximum memory use around 55GB. Redis uses around 15GB, Apache/PHP-FPM uses about 5GB, and Elasticsearch runs another 8GB heap. All in, that's a memory requirement of around 83GB.
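For anyone sizing a similar stack, the per-service caps above mostly map to a handful of config knobs. A rough sketch using my numbers (file paths and exact values are illustrative, not a recommendation — measure your own working set first):

```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf (path varies by distro)
[mysqld]
innodb_buffer_pool_size = 48G        ; big enough to hold the full dataset;
                                     ; total mysqld footprint lands near 55GB
innodb_buffer_pool_instances = 8     ; split the pool to reduce contention

# /etc/redis/redis.conf
# maxmemory 15gb                     ; hard cap so Redis can't crowd out the DB

# Elasticsearch: /etc/elasticsearch/jvm.options
# -Xms8g
# -Xmx8g                             ; fixed 8GB heap, min == max
```

The point of capping each service explicitly is that the 83GB total becomes a guarantee rather than an estimate, which is what makes the "will it fit on one 96GB Linode" math trustworthy.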
Currently, MariaDB and Elasticsearch are running on a 16-core, 64GB Linode and are only using, on average, about 1 core's worth of processing. (The DB is very well optimized, and with the entire DB loaded into the InnoDB buffer pool, there's just not a lot of demand on the processors unless I'm dumping or restoring the DB or manually running the odd-ball big query.)
PHP-FPM is tuned to run well on 4 cores, and Redis uses almost nothing.
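The PHP-FPM side of that tuning usually comes down to matching worker count to the RAM budget. A sketch of the relevant pool settings — the worker count and the ~60MB-per-process figure are assumptions for illustration, not my actual config; check your own average with `ps -C php-fpm -o rss`:

```ini
; /etc/php/8.2/fpm/pool.d/www.conf (path varies by PHP version/distro)
pm = dynamic
pm.max_children = 80       ; ~80 workers x ~60MB each ≈ the ~5GB budget above
pm.start_servers = 16
pm.min_spare_servers = 8
pm.max_spare_servers = 24
```

Capping `pm.max_children` is what keeps a traffic spike from turning into a swap storm when everything shares one box.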
I've been looking at all the possible configurations, including block storage. But in the end, the most cost-effective solution seems to be to move EVERYTHING onto a single 96GB standard Linode. That would leave me with 4x the disk space I need and about 12GB of unused RAM, and with 20 cores, I'd have far more processing power than I would ever need. In reality, I could probably run it all on a 64GB Linode, but that would leave me little headroom for traffic spikes, backup operations, file moves, etc.
If I keep all the services split up, the cost is around 20% more, simply because I end up with more wasted resources spread across several smaller Linodes.
In the "bare metal world" I'm accustomed to, I would never put everything on one server because of the "noise" of sharing resources across services. But with full virtualization, if I'm only using a 1-core / 24GB Linode for Redis, the reality is that other people are using the remaining 15 cores on that processor, and the RAM is shared with other users as well. So it occurs to me: if all of the hardware is already shared among the various users of Linode's service, is there really any downside to moving EVERYTHING into one large Linode as I described?
For me, it would mean only one server to keep patched and updated, NFS shares I no longer have to manage, and one big Linode to restore from backup in a crisis. Not to mention, when I do need to run the occasional big maintenance operation, I've got a ton of cores and RAM sitting there to bzip2 in half the time it takes now. (I'm estimating that on a 20-core Linode, the processors would average only about 15% use, because everything is running in RAM.)
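One caveat on the bzip2 point: stock bzip2 is single-threaded, so extra cores only speed it up if you use a parallel implementation such as lbzip2 or pbzip2 (which one is available is distro-dependent — that's an assumption here, not something from my setup). A minimal sketch:

```shell
# Make a small test file, then compress it. Plain bzip2 pegs one core;
# the commented lbzip2 line is the parallel drop-in, if installed.
dd if=/dev/urandom of=/tmp/sample.dat bs=1024 count=1024 2>/dev/null
bzip2 -kf /tmp/sample.dat               # single-threaded baseline
# lbzip2 -kf -n 20 /tmp/sample.dat      # spreads the work across 20 cores
ls -lh /tmp/sample.dat.bz2
```

For a DB dump you'd pipe instead, e.g. `mysqldump ... | lbzip2 > dump.sql.bz2`, and that's where the big core count actually pays off.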
Am I flawed in my thinking here? Is there some performance downside, or other negative I'm not considering, to having one large Linode? Or do I just save myself the 20% in costs, lessen the server management I need to do, and combine it all into one big Linode that will still have plenty of spare resources when all is said and done?
2 Replies
The performance differences are small in my experience, so they likely wouldn't be the deciding factor between having one or more Linodes. Instead, I'd recommend considering how critical uptime is for your use case. For situations where you need to have as much uptime as possible, I'd recommend a high availability setup across two or more Linodes.
In a usual high availability setup, each back-end Linode shares a portion of the load for the site and database, so you'll likely be able to use two or more smaller Linodes to handle the same amount of traffic as one larger Linode. It depends on your particular site and configuration, however.
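To make the load-sharing idea concrete, here's a minimal sketch of the front-end piece of such a setup: a reverse proxy (nginx here, though a NodeBalancer plays the same role) spreading requests across two back-end Linodes. The IPs are placeholders:

```ini
# nginx.conf fragment -- illustrative only
upstream backends {
    server 192.0.2.10;    # back-end Linode 1
    server 192.0.2.11;    # back-end Linode 2
}
server {
    listen 80;
    location / {
        proxy_pass http://backends;   # requests alternate between back-ends
    }
}
```

With this shape, losing one back-end degrades capacity rather than taking the site down, which is the trade-off against a single large Linode.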
High availability isn't as critical to me as overall performance. For example, I've used database replication in the past. However, that was years ago, before we used InnoDB and Elasticsearch, because we needed DB read/write operations split for performance, not durability.
These days, 95% of our file assets live in versioned object storage, and the DB can be restored in a matter of hours in a disaster event. So for my use case the most important thing is speed, which generally means having enough RAM to keep the entire DB loaded in buffer pools and enough RAM to keep Redis and Elasticsearch working at their peak.
I just wasn't sure if I was really gaining anything by using separate high-memory Linodes for the DB and Redis when I could actually have more total RAM and cores by combining them into a single large Linode, while saving money in the process.