NFS on private IP halting the boot process

Hi all,

I am having a problem with NFS between my two Linodes in the Newark data center. Once the system is booted, I can mount the share using the private IPs, but when I add the line below to /etc/fstab, the boot process freezes while trying to connect. It works if I use the public IPs, but that's obviously not a real solution to the problem.

/etc/fstab

jaxified:/var/www       /var/www        nfs     rw,rsize=4096,wsize=4096,hard,intr,async,nodev,nosuid   0 0

Right now, the /etc/hosts file has the public IP for the host jaxified; the problem only appears when I switch it to the private IP. I have the IPs configured correctly on both hosts. The hosts entry and /etc/exports are below.
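The /etc/hosts line I'm toggling looks roughly like this (the private address shown is made up; substitute the one Linode assigned):

/etc/hosts

207.192.74.210  jaxified    # public: boots fine
# 192.168.134.5 jaxified    # private: hangs the boot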

/etc/exports

/var/www        207.192.74.210(rw,async,insecure,no_subtree_check)

Any help would be appreciated. Thanks in advance,

Terry

5 Replies

I may be confused about how private IPs work, but I'd guess that when accessing the private IP on Linode 2 (the NFS server), Linode 1 (the NFS client) is using its own private IP as the source address. Thus, the exports line is wrong. Hmmm. Except it works when both systems are up. Routing? Maybe something affects the routing during the boot process, so that the private IP isn't reachable when the disks are automounted.
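(If the source-address guess were right, the fix would just be exporting to the client's private address as well, something like this, with the client's private IP invented for the example:

/etc/exports

/var/www        207.192.74.210(rw,async,insecure,no_subtree_check) 192.168.134.6(rw,async,insecure,no_subtree_check)

But since the mount works once both systems are up, that guess is probably wrong.)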

Sorry, just rambling.

No, ramble on. I've considered the routing issue and it's possible. The config above is working right now on the public IPs, but when I change both files to the private IPs the problem occurs, so the IPs in the sections above are the correct ones for the working public setup.

My other thought is that since eth1 is obviously virtual, maybe the virtualization layer (Xen, I think) is only bringing up one interface at boot time and the second after boot? Unfortunately, I don't have a lot of experience with Xen.

Thanks for your response.

Terry

P.S. Linode has assured me they are not blocking any ports on the internal network in Newark.

For what it's worth, I've set up NFS across a couple of Linodes using the back-end network before, and it works a treat.

Are you sure the network configuration (of both devices) is occurring before additional mounts are attempted?

-Chris

I was wondering about that myself. It's Debian 4.0, and the mountnfs script is in /etc/network/if-up.d, but I'm not sure how to check that it's configuring the second interface before it runs that script.
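The only idea I've had so far is dropping a little logging script into the same directory to record the order things come up in. Something like this (untested; the filename and log path are made up, and it needs chmod +x):

/etc/network/if-up.d/00-log-order

#!/bin/sh
# ifupdown sets $IFACE to the interface that just came up;
# record it with a timestamp so the boot-time order is visible
echo "$(date '+%H:%M:%S') up: $IFACE" >> /var/log/ifup-order.log

Since scripts in if-up.d run once per interface, after a reboot the log should show whether eth1 comes up before or after mountnfs fires.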

If you have any suggestions, I'm absolutely all ears :)

Terry

Don't run the mount script from if-up.d. Instead, put the reference in your /etc/fstab. This will be processed after all your interfaces are configured (assuming you have all the relevant interfaces configured as "auto", of course). This is what I do with my Debian systems, and it seems to work fine. if-up.d scripts are nice for laptops and such, but (IMO) more trouble than they're worth on servers, where the interfaces are more or less fixed.
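For what it's worth, the /etc/network/interfaces I'd expect looks roughly like this; the addresses and netmasks are invented, the point is the "auto" line on both interfaces:

/etc/network/interfaces

# public interface
auto eth0
iface eth0 inet static
        address 207.192.74.210
        netmask 255.255.255.0
        gateway 207.192.74.1

# private interface (address made up; use the one Linode assigned)
auto eth1
iface eth1 inet static
        address 192.168.134.5
        netmask 255.255.128.0

With both marked "auto", the boot scripts bring them up before the NFS mounts in fstab are attempted.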
