Nginx from source, repository, or PPA?

I believe I am getting ready to put some websites on my LEMP.

So far I have used Ubuntu Server 11.04 and Nginx (nginx-full, installed from a PPA). There are still some things I need to try out, so I will probably wait until 11.10 comes out before going live.

What are the advantages and disadvantages of installing from source versus from the repository versus from a PPA?

  • repository: easy, reliable, old version

  • PPA: easy, recent version

  • source: difficult, recent version, DIY

When installing from the repository or a PPA and choosing nginx-full or nginx-extras, you pretty much have everything installed that you could possibly need. That is how I see it; later, once I know what I need and what I don't, I can always switch to installing from source.

I have been learning with Ubuntu Server. Are there any advantages to switching to Debian?

PS: What is the importance of the hostname? I forgot to change it and now it is li666-666(.members.linode.com) (changed the numbers). Everything works. If you will be hosting multiple domains (mydomain1.com to mydomain9.com) on that server, how do you choose the hostname for the server? Do you just pick one, say mydomain5.com, or do you use a subdomain like vps.mydomain5.com?

12 Replies

@pannix:

What are the advantages and disadvantages of installing from source versus from the repository versus from a PPA?

  • repository: easy, reliable, old version

  • PPA: easy, recent version

  • source: difficult, recent version, DIY

When installing from the repository or a PPA and choosing nginx-full or nginx-extras, you pretty much have everything installed that you could possibly need. That is how I see it; later, once I know what I need and what I don't, I can always switch to installing from source.

On production systems based on a well-maintained, binary-based Linux distribution such as Debian or Ubuntu, it is generally a bad idea to compile your own programs when the same program is already in the repository. Self-compiled programs are difficult to keep up to date, because you need to watch each official website for security updates. This is particularly important for Internet-facing programs such as nginx, and for web programming languages such as PHP.

Packages in the repository often look old, and they actually are old when it comes to features. But they usually incorporate many of the bug fixes and all of the security fixes found in later versions. (Even PPAs often fall behind on security updates. Choose your PPAs carefully, and check that the maintainer has a history of keeping the PPA up to date.)
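To make that concrete, here's what staying on packaged versions looks like on Ubuntu. This is only a sketch; ppa:nginx/stable is shown as an example of a commonly used nginx PPA, so substitute whichever PPA you actually trust (on 11.04, add-apt-repository comes from the python-software-properties package):

# Install from the official Ubuntu repository:
sudo apt-get update
sudo apt-get install nginx-full

# Or add a PPA first to get a newer version:
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx-full

# Either way, security fixes arrive with the rest of the system:
sudo apt-get update && sudo apt-get upgrade

The point is that last line: one command updates nginx along with everything else, which is exactly what you give up when you compile by hand.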

Also, self-compiled programs may be more difficult to troubleshoot, because you might experience crashes that nobody else is experiencing. If a program from the repository crashes, on the other hand, lots of people will end up having the same experience and it will be much easier to look up the issue on Google. Moreover, self-compiled programs might break when other programs and libraries from the repository are updated, especially when upgrading your OS to the next version.

Benefits of compiling include access to more recent versions, and perhaps a bit more performance from being optimized for your machine's architecture. But you need to weigh the pros and cons. Sometimes the programs you need might not be in the repositories at all: I used to compile APC and PHP-Memcached before they were officially included in Debian and Ubuntu. I also used to compile Redis, but I don't need to do that anymore, either. I still have to compile PHP-Redis.
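For reference, a source build of nginx usually looks like the following sketch. The version number is just an example of what was current around this time, and the module flags are examples of real nginx configure options; pick the ones you actually need:

# Build dependencies on Debian/Ubuntu (PCRE for rewrites, zlib for gzip, OpenSSL for SSL):
sudo apt-get install build-essential libpcre3-dev zlib1g-dev libssl-dev

# Download, unpack, configure, build, install:
wget http://nginx.org/download/nginx-1.0.5.tar.gz
tar xzf nginx-1.0.5.tar.gz
cd nginx-1.0.5
./configure --prefix=/opt/nginx --with-http_ssl_module --with-http_gzip_static_module
make
sudo make install

After this you are responsible for your own init script and for tracking nginx security announcements yourself, which is the trade-off discussed above.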

@pannix:

I have been learning with Ubuntu Server. Are there any advantages to switching to Debian?

Debian is usually considered more stable, but in reality there is very little difference between Debian and Ubuntu on the server. Most of the differences that people talk about have to do with the graphical interface, which is largely irrelevant on a server. So it comes down to preference.

For example, I prefer Debian because that's what I've been using for the last few years. The last time I set foot on an Ubuntu machine, I found it annoying that it tried to steer me away from calling init scripts directly. But the init scripts still work fine, and someone who is used to recent Ubuntu versions might find it more intuitive to call "start nginx" than "/etc/init.d/nginx start".
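For anyone following along, these are the equivalent ways to control a service on that era's systems; note that "start nginx" only works if the package ships an Upstart job rather than just a SysV init script:

# Upstart syntax (Ubuntu):
sudo start nginx
# Traditional SysV init script (Debian and Ubuntu):
sudo /etc/init.d/nginx start
# Distribution-neutral wrapper that picks the right mechanism:
sudo service nginx start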

@pannix:

PS: What is the importance of the hostname? I forgot to change it and now it is li666-666(.members.linode.com) (changed the numbers). Everything works. If you will be hosting multiple domains (mydomain1.com to mydomain9.com) on that server, how do you choose the hostname for the server? Do you just pick one, say mydomain5.com, or do you use a subdomain like vps.mydomain5.com?

If it's just a web server, it doesn't matter. All modern web servers use virtual hosting, so you only need to specify your domain(s) in your web server configuration files. Apache tends to complain if it can't find the full hostname, but it doesn't affect performance in any way. As for nginx, it couldn't care less.
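In nginx, virtual hosting is just one server block per domain; the machine's hostname never enters into it. A minimal sketch, using the hypothetical domains from your question (the sites-available path is the Debian/Ubuntu packaging convention; a source build would keep everything under its own prefix):

# /etc/nginx/sites-available/mydomain1.com
server {
    listen 80;
    server_name mydomain1.com www.mydomain1.com;
    root /var/www/mydomain1.com;
}

Repeat with a different server_name for mydomain2.com through mydomain9.com; nginx picks the right block by matching the Host header of each request against server_name.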

The hostname becomes much more important if you want to send mail. Your hostname should match your reverse DNS (which it does by default), or other mail servers will treat you as a spammer. But even in that case, you can tell your MTA (such as Postfix) to use its own hostname, different from the server's hostname. DNS servers often work the same way.
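With Postfix, for example, that override is a single setting; myhostname is a real main.cf parameter, and the domain below is just the hypothetical one from the question (whatever you pick should match the reverse DNS you set in the Linode Manager):

# /etc/postfix/main.cf
myhostname = vps.mydomain5.com

# Verify the setting and reload:
postconf myhostname
sudo /etc/init.d/postfix reload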

I'm not very familiar with Ubuntu, but that's only because it didn't quite work for me for desktop use. It was rather unstable.

Ubuntu worked OK for my home server, but I don't quite agree with its use of sudo, especially if you use a password to ssh into your server. If your server's ssh daemon allows passwords and someone figures out your password, they can easily use sudo to run root-only commands and screw up your system. If you turn off passwords in ssh and use an RSA key instead, you don't have to worry so much about that. RSA keys are considered more secure, too: you get two keys, a public one and a private one, and only you should ever have the private key. Since RSA encryption is very hard to break, it's extremely unlikely that someone will break into your server if you use this method.
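Setting that up only takes a few commands. A minimal sketch, with placeholder user and host names:

# On your local machine, generate a key pair and copy the public key to the server:
ssh-keygen -t rsa -b 4096
ssh-copy-id user@your.server.example

# On the server, disable password logins in /etc/ssh/sshd_config:
#   PasswordAuthentication no
# Then restart the ssh daemon:
sudo /etc/init.d/ssh restart

Keep one terminal logged in while you test key-based login in another, so a typo in sshd_config can't lock you out.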

If it has to be either Debian or Ubuntu, I'd recommend Debian. For me, it's been slightly more responsive than Ubuntu, and you can set up sudo if you wish by logging in as root and running the following command:

export EDITOR=your.favorite.text.editor && visudo

Replace your.favorite.text.editor with your favorite command-line text editor (assuming it's installed). The basic format for visudo entries is:

user1       ALL=(ALL) /path/to/command1, /path/to/command2
user2       ALL=(ALL) ALL
user3       ALL=(ALL) NOPASSWD: ALL
%group      ALL=(ALL) ALL

After the ALL=(ALL) bit come the commands you want to allow. You must use the full path to each command, e.g. /usr/bin/nano, with a comma between entries. If you specify a group, put a percent sign (%) in front of its name to tell sudo that it's a group and not a user. NOPASSWD lets the user run sudo without a password (NOT RECOMMENDED).
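As a concrete (hypothetical) example, this entry would let a user named deploy restart nginx via sudo and nothing else; sudoers matches the arguments too, so other init script actions stay off-limits:

deploy      ALL=(ALL) /etc/init.d/nginx restart

That user would then run "sudo /etc/init.d/nginx restart", and any other sudo command would be refused.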

DISCLAIMER: Unless you use RSA keys to log in and turn off passwords in your server's ssh daemon, this makes it easy for someone who guesses your password to mess with your system.

I don't really like Debian or Ubuntu, or most of the distros offered by Linode. The others I haven't tried, so I can't offer an opinion on them. I still need to get my own distro uploaded to my Linode, though, and if it works out well, I may have to suggest it to Linode for their list of distros :-)

Well, you've got to start somewhere, and Ubuntu seemed, and still seems, like a good place. After that I can give CentOS or Fedora a try.

You made me curious, what distro are you talking about?

I use Ark Linux. It's the first distro to have well and truly worked for me: fast, stable, and great for new and experienced users alike. I've been using it since February 2007. I was somewhat scared of Linux back then and knew hardly anything about it. Ark fixed that. I had tried it in 2005 along with a bunch of others, but it was such a "digital culture shock" that I went back to Windows. Ark was the only one that stuck when I tried again in 2007, and it's the only one that sticks with me to this day.

I remember trying Ubuntu back in 2005 and then again in 2007. Having come from Windows and never having used a command line, I expected a GUI, and both times it gave me a command line even though I had seen a GUI in the screenshots. A lot of others did that. Only Linux Mint and Ark gave me the GUI, and Mint lagged horribly. It wasn't until I replaced the computer that the others started giving me the X server, and even then they were all either very laggy or severely bloated compared to Ark Linux, and they didn't provide the same level of comfort and ease of use that I had come to expect from Ark.

It's been a while since Ark's last release (2008) due to some technical issues with the development release, but we're getting close. It's definitely stable enough for server use: our head developer uses it full time on his web server, and his old job runs it on one of their servers. What's holding back the release now is the need to replace a couple of graphical apps and to repackage a desktop environment (in other words, stuff pertaining to the GUI), and none of that matters for my server since it isn't going to run an X server :) It's just a matter of installing Ark, installing the packages my server needs, and removing the X server (and possibly compiling nginx and php-fpm for my website; I'm not sure if Ark has packages for those). Once I do that, I can upload Ark :D

(edited to fix a spelling error and a grammar error)

@Piki: Interesting story. I wonder what prevented Ubuntu from giving you a GUI back then. Maybe your video card was not compatible with their drivers? But you're right, the desktop edition of Ubuntu really is bloated. You know it's bad when a Linux distribution takes longer to boot than Windows XP on the same machine.

Thankfully, most of that GUI bloat is irrelevant in servers. When you're comparing distros on the command line where almost everyone uses Linode's kernels and the same set of GNU tools, most of the differences boil down to preference: apt vs. yum vs. pacman vs. portage vs. do-it-yourself compiling, locations of various files, the amount of hand-holding, frequency of stable releases, etc.

For example, Ubuntu has a nifty feature that tells you which package you need to install if you try to use a command that hasn't been installed yet. Newbies love that. But it also nags you to use commands exactly the way Mr. Mark Shuttleworth wants you to use them, e.g. "start nginx" vs. "/etc/init.d/nginx start", or having to edit a file before running "dpkg-reconfigure locales". For people who are used to it, this is not a problem. For people who aren't, the little things quickly get annoying. But every distribution has perks like this. For instance, enabling additional language packs in the Debian GUI is a pain in the ass, and I have yet to see a distribution that does font anti-aliasing better than Ubuntu and Linux Mint. But that, again, is a matter of preference.

@hybinet:

@Piki: Interesting story. I wonder what prevented Ubuntu from giving you a GUI back then. Maybe your video card was not compatible with their drivers? But you're right, the desktop edition of Ubuntu really is bloated. You know it's bad when a Linux distribution takes longer to boot than Windows XP on the same machine.

The card in that particular computer was supported by the F/OSS ATI driver. The Ubuntu IRC staff said it was trying to assign the proprietary driver, which supports newer cards than mine, but they couldn't figure out why. When I tried Ark, the desktop came up immediately, though due to a misplaced file in the driver package I had to symlink the file to the correct location. The package was fixed in the following release.

Thankfully, most of that GUI bloat is irrelevant in servers. When you're comparing distros on the command line where almost everyone uses Linode's kernels and the same set of GNU tools, most of the differences boil down to preference: apt vs. yum vs. pacman vs. portage vs. do-it-yourself compiling, locations of various files, the amount of hand-holding, frequency of stable releases, etc.

Aside from the Linode kernel, the packages are still built by the distribution maintainers, so they will have the modified code found in the official distribution repositories and will be built against the same libraries as everything else there. This could be good or bad: partly a matter of stability and security, and partly personal preference. Also, even at the command line there can be a lot of bloat. I have seen some command-line-only distros that include ncurses games. I even saw one a while back that was supposed to be for security and used some GUI tools for administration, and it included a slots game and several patience card games; I don't remember what the distro was called, though. Go figure!

> For example, Ubuntu has a nifty feature that tells you which package you need to install if you try to use a command that hasn't been installed yet. Newbies love that. But it also nags you to use commands exactly the way Mr. Mark Shuttleworth wants you to use them, e.g. "start nginx" vs. "/etc/init.d/nginx start", or having to edit a file before running "dpkg-reconfigure locales". For people who are used to it, this is not a problem. For people who aren't, the little things quickly get annoying. But every distribution has perks like this. For instance, enabling additional language packs in the Debian GUI is a pain in the ass, and I have yet to see a distribution that does font anti-aliasing better than Ubuntu and Linux Mint. But that, again, is a matter of preference.

Most things in Linux are a matter of preference. Why in the world would 50 different programs exist just for editing text? :D Personally, though, I think that no matter what a distro is geared toward, it shouldn't over-complicate things with a huge number of programs for the same task. If it's geared toward desktop use, for example, it should preferably include maybe up to three text editors and two web browsers, and keep alternatives in the repos in case the user doesn't like the defaults or wants to try something else. A lot of the distros that worked on my computer had this problem, and it annoyed the crap out of me. On a few, I had to scroll through a single menu before I found Firefox amid an entire army of web browsers just to log into my Gmail because I don't like using email clients, though there certainly wasn't a shortage of those either. Nowadays, though, just thinking of that makes me laugh :lol:

(edit to replace an incorrect word with a similar looking correct word)

@Piki:

on a few, I had to scroll through a single menu before I found Firefox amid an entire army of web browsers just to log into my Gmail

Reminds me of the Unity fiasco that Ubuntu is currently going through. Unity's "dash" menu almost eliminates the distinction between locally installed apps and apps that are available through Canonical's Software Center. As a result, the user always gets to see a bunch of useless apps at the top of the menu even if the computer itself only has a minimal setup. Ridiculous.

But we're getting far off-topic here… 8)

@hybinet:

Reminds me of the Unity fiasco that Ubuntu is currently going through. Unity's "dash" menu almost eliminates the distinction between locally installed apps and apps that are available through Canonical's Software Center. As a result, the user always gets to see a bunch of useless apps at the top of the menu even if the computer itself only has a minimal setup. Ridiculous.

But we're getting far off-topic here… 8)

Debian will hear many of the same complaints when they switch to GNOME 3 ;)

@Guspaz:

Debian will hear many of the same complaints when they switch to GNOME 3 ;)

My prediction is that most of the complaints will be directed toward the GNOME team, not Debian. After all, Debian is "only repackaging" what upstream throws at them, and only after a very long wait. Ubuntu, on the other hand, is supposed to be "solely responsible" for the Unity fiasco. People are so unfair :P

I think that, in either case, it's the responsibility of the distro to make those decisions. I think Ubuntu made a mistake with Unity, and Debian is making a mistake with GNOME 3's shell, which superficially (I've not tried it) looks pretty much identical to Unity (which I have tried and hate).

Debian has various options. They can continue to use GNOME 2's shell with GNOME 3 (forking it if they must; if Ubuntu can maintain their own shell, so can Debian), or they can switch to a different DE like LXDE, Xfce, KDE, etc.

What worries me is that every OS is heading toward this sort of nonsense. Win8 looks like a tablet OS, and the new Metro interface can't be disabled (it replaces your Start menu, so you can't get rid of it); Ubuntu is going to Unity; most other distros are going to GNOME Shell; OS X is bringing in more and more iOS-isms…

What's left? KDE/LXDE/Xfce? Are these the only remaining traditional desktops among all the major platforms? What happens when KDE decides to take this approach too?

There's TDE. It's an unofficial continuation of KDE 3, and its goal is to maintain the same look, feel, usability, and feature set as KDE 3 while providing bug fixes and security patches, adding missing features, and replacing old or broken dependencies with newer working ones.

But this is getting seriously way off topic… perhaps split this discussion into a new thread if we wish to continue it? :)
