UML vs Xen

My Linode is on UML and is quite happy there, but I'm wondering if/when I should put in a ticket to switch to Xen. From what I've seen, the Xen hosts are a bit less stable, presumably because UML is the more mature platform. Also, UML has the token bucket feature to keep one linode from squishing its neighbors. What reasons are there for switching over? Is Xen that much faster?

I'm currently following the rule of "if it ain't broke, don't fix it" but at the same time I don't want to be missing out. =)

Thanks!

16 Replies

I've been on an early Xen host for almost a year, and in that time there's been one host crash, caused by a known bug. The host kernel was patched and rebooted, and there have been no problems since.

That process took about 20 minutes, so I'm still better than four nines for the year.

The other problem is that the 2.6.28* kernels seem to have issues keeping time, so I have to run ntpd on my linode, which is of course pretty minor.
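In case anyone else hits the same drift, the fix is nothing fancy; on a Debian-flavoured linode it amounts to roughly this (package names will differ on other distros):

% apt-get install ntp    # installs ntpd with a sane default config and starts it
% ntpq -p                # after a few minutes, a '*' next to a peer means the clock is being disciplined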

It's possible that performance is better on the Xen hosts (and you get multiple virtual CPUs inside a Xen instance), but I've definitely had less reliability under Xen than under UML.

Linode staff have been their usual helpful selves, but shrug I'd stick with UML if I were you, unless you need some of the features Xen provides (multiple CPUs, pv-grub)

Question: how do I check which kind of host I'm on? Sorry, but I really have no idea…

@blacktulip:

Question: how do I check which kind of host I'm on? Sorry, but I really have no idea…

% cat /proc/io_status

If that works then you're on UML

% grep processor /proc/cpuinfo

If that returns 4 CPUs then you're on Xen

I don't remember, but I think /proc/cpuinfo would also say "User Mode Linux" in the vendor field…
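Putting that together, a quick check from a shell might look something like this (the vendor-field line is from memory, so treat it as a guess):

% cat /proc/io_status                # this file only exists under UML; an error suggests Xen
% grep -c ^processor /proc/cpuinfo   # 4 on a Xen linode, 1 on a (single-CPU) UML one
% grep vendor_id /proc/cpuinfo       # may report "User Mode Linux" under UML (unverified)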

@sweh:

@blacktulip:

Question: how do I check which kind of host I'm on? Sorry, but I really have no idea…

% cat /proc/io_status

If that works then you're on UML

% grep processor /proc/cpuinfo

If that returns 4 CPUs then you're on Xen

I don't remember, but I think /proc/cpuinfo would also say "User Mode Linux" in the vendor field…

Thank you very much. It seems I am on Xen, then.

@sweh:

shrug I'd stick with UML if I were you

Thanks to everyone for your replies! The quote above, including the shrug, was pretty much my thinking both before and after this thread. In the end it's probably not a huge deal either way, but there's no great reason for me to switch.

I'll probably necro this thread in another year or three and see if anything's changed. =)

@kirbysdl:

I'll probably necro this thread in another year or three and see if anything's changed. =)

In another three years, Linode will probably have converted all of the UML users over to Xen. They already did that in the Atlanta data center.

Personally, I'm still on UML. I'm not sure what I think of Xen. I used to be pretty anti-Xen thanks to the growing pains Linode has had with it, but most/all of them have been worked out by now. Recently, I had been starting to get pro-Xen, but then I noticed all the problems with the pv-ops kernels, so now I'm neutral. A non-pv-ops kernel would work fine, of course, but it would also be older, and where's the fun in that?

Xen does have benefits (mainly SMP and custom kernels), so I am looking forward to it, but I'm not going to put in a ticket to switch. UML hasn't let me down yet, and it's good enough for my needs. But when the time comes, it will be nice to be able to take advantage of Xen.

(Actually, my biggest reason for not switching is that rebooting would ruin my uptime. 317 days!)

Edit: typo

Edit: Rewrote the "non-pv-ops" sentence

I just rebooted my home (not Linode) server to get to Debian 5. 550 days! =P

http://www.curby.net/stats2/uptime.html

We have a policy at work to reboot machines every 90 days…

  6:05pm  up 500 day(s), 10:08,  2 users,  load average: 0.31, 0.11, 0.06

Oops!

@sweh:

We have a policy at work to reboot machines every 90 days…

That's a little disturbing on many levels…

The general policy in Operations organizations I've worked in or otherwise been involved with is to avoid turning servers off unless absolutely necessary. Hell, as long as their primary function was basically OK, we'd let them sit in half-dead states for months, even Windows boxes (thankfully I don't deal with those much anymore).

"Rebooting" is just another word for "taunting Murphy", particularly when it's Real Hardware and not a VM.

@nknight:

@sweh:

We have a policy at work to reboot machines every 90 days…

That's a little disturbing on many levels…

The general policy in Operations organizations I've worked in or otherwise been involved with is to avoid turning servers off unless absolutely necessary. Hell, as long as their primary function was basically OK, we'd let them sit in half-dead states for months, even Windows boxes (thankfully I don't deal with those much anymore).

"Rebooting" is just another word for "taunting Murphy", particularly when it's Real Hardware and not a VM.

I know that in a lot of Microsoft environments scheduled reboots are actually SOP; hell, back in the day I remember TechNet had a "reboot" script posted for Windows admins.

Anyway, I worked at a company a few years back that had physically lost an HP 9000. Nobody knew where it was, other than that it was still up and running on the network. When I left, its uptime was well over 3 years :wink:

@nknight:

The general policy in Operations organizations I've worked in or otherwise been involved with is to avoid turning servers off unless absolutely necessary.

A reboot does not mean turning it off.

@nknight:

"Rebooting" is just another word for "taunting Murphy", particularly when it's Real Hardware and not a VM.

You don't have to power it down, but a nice reboot can make sure that it'll come back up in a usable state should it get powered down. Servers have a nasty tendency to accumulate broken init scripts…
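A cheap way to spot that rot before it bites, assuming a sysvinit-style Debian box, is to look at what's actually wired to start at boot and hunt for anything dangling:

% ls /etc/rc2.d/S*           # everything that will be started in the default runlevel
% find /etc/rc?.d -xtype l   # broken symlinks: init scripts that got removed but are still referenced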

@hoopycat:

@nknight:

"Rebooting" is just another word for "taunting Murphy", particularly when it's Real Hardware and not a VM.

You don't have to power it down, but a nice reboot can make sure that it'll come back up in a usable state should it get powered down. Servers have a nasty tendency to accumulate broken init scripts…

Yeah, this is the main reason: business continuity and resilience. If we can be sure the server comes up cleanly when we reboot, then there's a good chance it'll come back after an unplanned outage (which would be the worst time for a broken configuration or init script to cause problems).

@sweh:

Yeah, this is the main reason: business continuity and resilience. If we can be sure the server comes up cleanly when we reboot, then there's a good chance it'll come back after an unplanned outage (which would be the worst time for a broken configuration or init script to cause problems).

Or a planned outage. Happened to me at work once. Have a failover cluster. Drop one server, add memory, bring it up, fail over, drop other server, add memory, won't come back up. 24 hours of troubleshooting later, bad config. That was not a fun weekend.

It happened to me yesterday. I've been uninstalling packages from my linode trying to see what I can live without. Apparently Debian's /etc/init.d/networking can't bring up an interface without the ifupdown package installed. :shock:

Knowing is half the battle, and I only know because of a test reboot to see if things still worked.

Logging in via lish and reinstalling ifupdown is the other half.
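For the record, the recovery from lish is only a handful of commands; something along these lines, with placeholder addresses you'd swap for your linode's real IP and gateway:

% ip addr add 203.0.113.10/24 dev eth0   # placeholder address: use your linode's real IP
% ip link set eth0 up
% ip route add default via 203.0.113.1   # placeholder gateway
% apt-get install ifupdown               # put the package back so /etc/init.d/networking works again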

Wow, what a derail, huh? :wink:
