openvpn

It's looking like openvpn isn't working.

log:

> Mar 28 02:48:21 h2 ovpn-client_h2[1916]: OPTIONS IMPORT: route options modified
> Mar 28 02:48:21 h2 ovpn-client_h2[1916]: Note: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
> Mar 28 02:48:21 h2 ovpn-client_h2[1916]: Note: Attempting fallback to kernel 2.2 TUN/TAP interface
> Mar 28 02:48:21 h2 ovpn-client_h2[1916]: Cannot allocate TUN/TAP dev dynamically
> Mar 28 02:48:21 h2 ovpn-client_h2[1916]: Exiting

> # ls -al /dev/net/tun
> crw-rw-rw- 1 root root 10, 200 Feb 22 2004 /dev/net/tun

something missing?
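
For what it's worth, the usual sanity checks on the tun driver would be something like this (just a sketch, assuming a modular kernel; the device node itself clearly exists with the right major/minor of 10, 200):

> # lsmod | grep tun      (is the tun module loaded?)
> # modprobe tun          (try to load it; a "module tun not found" error means this kernel build has no tun support)
> # dmesg | tail          (check for kernel messages from the attempt)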

7 Replies

OK, tun/tap support is back in build #4. Can you confirm, please?
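
A quick way to check, even before restarting openvpn (just a convenience trick, nothing Xen-specific): reading the device by hand gives a different error once the driver is actually there:

> # cat /dev/net/tun      (driver present: "File descriptor in bad state"; driver missing: "No such device")

If that looks right, restarting the openvpn client should get past the TUN/TAP allocation step.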

Thanks,

-Chris

That did it, thanks.

Otherwise it's going completely smoothly, except for an initial problem mounting swap that was resolved by rebooting, as described in another thread.

I've got the following (and more) running on a Linode 80:

java 1.5/tomcat 5.5 embedded

nagios

postfix

bind9

apache2

postgresql 8

openvpn

nfs

Seems very fast. We don't have the IO limits, right? And is this host a quad Opteron?

@kiomava:

> Seems very fast. We don't have the IO limits, right? And is this host a quad Opteron?
There aren't IO limits per se, just different priorities on disk access that I can modify for each Linode (actually, for each virtual device).

The host is a dual Opteron. You're seeing four virtual CPUs inside the node.

-Chris

So with Xenodes we won't have the io_tokens patch, and you'll use the Xen IO priority knob to enforce equal IO? I wonder how that will work on a more heavily loaded host; I guess we'll have to wait for more beta users. It seems excellent so far at the current levels. Are there any /proc interfaces to let us see Xen-specific IO stats and prioritization?

Also, just out of curiosity, since we're on Opterons, is there any 64-bit kernel support?

It's actually a CFQ disk scheduler knob, not a Xen one…
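
For the curious, the knob is the per-process CFQ I/O priority, which is normally driven from the host with ionice. The PIDs and levels below are purely illustrative, not how the hosts are actually managed:

> # ionice -c2 -n0 -p 1234      (best-effort class, highest priority, for one node's backend process)
> # ionice -c2 -n7 -p 5678      (best-effort class, lowest priority, for another)
> # ionice -p 1234              (show a process's current class and level)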

There's nothing in /proc other than a few simple values in /proc/xen/. That cuts both ways: with UML I could view files in a node's /proc, like /proc/swaps -- one of the more useful methods of detecting badly behaving nodes.

As for 64-bit kernels, probably not for a long time. Xen only allows one type of xenU kernel per host, so they'd all have to be 64-bit. It would also create a pool of boxes that only other 64-bit users could migrate to/from.

The Xen developers have stated that in the future it will be possible to run a 64-bit Xen hypervisor that supports both PAE and 64-bit guests, so it is a possibility.

-Chris

Forgive my ignorance, but I thought it was possible to run a 32-bit userland even with a 64-bit kernel. Couldn't our current system images run on 64-bit Xen without much tweaking?

@Xan:

> Forgive my ignorance, but I thought it was possible to run a 32-bit userland even with a 64-bit kernel. Couldn't our current system images run on 64-bit Xen without much tweaking?

This is definitely possible when running a 64-bit Linux kernel on an AMD64 chip. Whether it's possible in a situation like Xen I'm not sure, as I believe this facility is actually provided by the chip rather than the kernel itself.
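
A concrete way to see that split on a machine already set up that way (nothing specific to the hosts here):

> # uname -m         (kernel architecture, e.g. x86_64)
> # file /bin/ls     (userland binary, e.g. "ELF 32-bit LSB executable")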

Andrew
