[Off Topic] Home server - need advice
The networking stuff and backup are pretty simple to do. I'll put them in separate virtual machines and can get that sorted out nice and quickly. The problem is I also want a virtual machine dedicated to running my development environment. I want to be able to use something like VNC to connect to it and use it from my main desktop computer (currently running Windows 8 Pro).

I have a couple of questions about that, though. If I set up Linux in a virtual machine, will I be able to view the desktop of the Linux VM at my Windows machine's native resolution (2560 x 1440) using a VNC viewer? I basically want to be able to use the machine as if I were sitting at it. According to the specs of the integrated graphics of the CPU I'm planning on buying, it does support that resolution, but only when outputting via DisplayPort. Since this will be VNC over ethernet, will that cause any issues?
I'm pretty new with dealing with local servers and VNC in particular so sorry for the stupid questions.
10 Replies
@Guspaz:
You can use whatever resolution you want over VNC if you set it up right, but you should consider using NX instead of VNC. It will perform much better than VNC, particularly at such high resolutions.
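For illustration, here's a hypothetical sketch of what "setting it up right" might look like with TigerVNC's `vncserver` on the Linux VM (the display number `:1` and the script itself are assumptions, not anything from the thread). The key point is that the VNC desktop is a virtual framebuffer, sized with `-geometry`, so it doesn't depend on the host's physical video outputs:

```shell
#!/bin/sh
# Hypothetical sketch: run a VNC desktop on the Linux VM at the Windows
# client's native resolution. Assumes TigerVNC's `vncserver` is installed.
GEOM="2560x1440"            # match the client desktop's native resolution
WIDTH="${GEOM%x*}"          # everything before the 'x' -> 2560
HEIGHT="${GEOM#*x}"         # everything after the 'x'  -> 1440
echo "Requesting a ${WIDTH}x${HEIGHT} VNC desktop on display :1"

# Only attempt to start the server if vncserver is actually present.
if command -v vncserver >/dev/null 2>&1; then
    vncserver :1 -geometry "$GEOM" -depth 24
fi
```

Because the geometry is a property of the virtual framebuffer rather than a physical monitor, the DisplayPort-only limit on the integrated graphics shouldn't matter for a remote session.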
Thanks. I'll look into that.
@vonskippy:
If we're doing "shouldas", he should use a real bare-metal hypervisor like ESXi or Xen and skip the VM-on-top-of-an-OS clusterf**k that KVM is.
The only reason I wanted to use KVM was because I heard it ran FreeBSD better than other options (I want the networking VM and backup VM to be running FreeBSD). I'm open to other options though.
FreeBSD 9 (and 8) is officially supported by VMware (using ESXi 5).
And the interweb is chock full of articles on running FreeBSD on top of ESXi, example
KVM was my preference as I know lots of people use it for FreeBSD virtualisation. Frankly, though, for my uses it should be fine. I haven't really heard too many negatives about KVM; why are you so down on it? Is there anything I should know before deploying it?
The main reason is security: it's much harder to hack a specialty OS (which is what the core bare-metal hypervisor is) than to hack a general OS (which is what KVM runs on top of).
Then there's less management/maintenance. The bare-metal hypervisor is basically install-and-forget; the general OS is a never-ending patch cycle and security-tweaking project.
Although there's much debate (and several different benchmarks you can fish through for the results that prove your point), for the setups we run, a bare-metal hypervisor is just (way) more efficient. The box's resources go to the guest OSes instead of the host OS.
Finally, I just like ESXi and its management tools. Easy to set up, easy to manage, easy to monitor. KVM just seems way less mature. It's clunky, and for me anyway, way harder to set up, especially getting the third-party management tools to work (like Proxmox VE or Archipel).
Speed-wise, KVM has made some serious improvements in efficiency, so it's no longer a slam dunk to just pick Xen or ESXi because they're better performers; the VM playing field is pretty level performance-wise across the entire selection (i.e. ESXi, Xen, KVM, Parallels, and Hyper-V Server).
Of course YMMV, so best to setup both ESXi and KVM and see what YOU think is better.