UML performance question
I can see lots of problems with the idea: we all use different flavors of Linux, for one. And I doubt the token system could easily be adapted not to charge us for RAM disk accesses.
But putting those problems aside, would such a setup help alleviate the disk drive bottleneck that VPS systems in general (i.e., not just Linode) have?
I could see reasons why it wouldn't: maybe the COW filesystem works in a way that would require frequent magnetic disk accesses even if we used a RAM disk for the base image. And in the real world, when you're doing something disk-intensive, you're probably doing it to your own data, not to common OS files. So I don't know whether it would help much.
I'm not putting this out there as a practical suggestion, because I know it really isn't practical. I'm just curious if it would help performance much.
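To make the idea concrete, something like this is what I had in mind. The paths, image names, and sizes below are made up, and it assumes UML's usual ubd0=cow_file,backing_file syntax:

    # Put the pristine base image on a tmpfs mount so reads of unmodified
    # blocks come from RAM instead of the magnetic disk.
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=1100m tmpfs /mnt/ramdisk
    cp /images/base-distro.img /mnt/ramdisk/base-distro.img

    # Boot each UML instance with its own COW file layered over the shared
    # base; writes land in the COW file, while reads of unmodified blocks
    # fall through to the backing image on the RAM disk.
    ./linux ubd0=/uml/node42.cow,/mnt/ramdisk/base-distro.img mem=96M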
UML's COW implementation requires that a COW image and the base image it's derived from be the same size.
Multiple distros imply multiple COW images.
Disk performance bottlenecks are most commonly caused by nodes swapping pages in and out to disk.
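A quick way to check whether a given node is contributing to that swapping, run from inside the Linode (vmstat's si/so columns show pages moving to and from swap):

    # Sample memory and swap activity five times, five seconds apart.
    vmstat 5 5
    # Show how much swap is actually in use, in megabytes.
    free -m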
Also, file-backed disk images have a double-caching effect: both the UML and the host cache the data, which is inefficient. To avoid the double-caching problem that's inherent in going through the host's VFS layer, we've been deploying hosts with an LVM backend, and each Linode is directly accessing their partition(s) through LVM dev nodes…
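Roughly, the LVM-backed setup looks like this (volume group, Linode name, and sizes here are illustrative only, not an actual layout):

    # On the host: carve out a dedicated logical volume for one Linode
    # (assumes an existing volume group, vg0 here).
    lvcreate -L 4G -n linode42 vg0

    # Point the UML instance's ubd device straight at the LV's device node
    # instead of at a file-backed image, avoiding the trip through the
    # host's filesystem layer that causes the double caching described above.
    ./linux ubd0=/dev/vg0/linode42 mem=96M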
Our newer, hardware RAID hosts seem to have a much higher tolerance for heavy disk contention. We're planning on deploying a good number of those hosts in anticipation of a huge resource increase (hint hint). So that means more RAM for Linodes, less swap, and faster host servers that are more tolerant of thrashers.
-Chris
@caker:
We're planning on deploying a good number of those hosts in anticipation of a huge resource increase (hint hint).
Mmmmm - sounds good!