Host5 net speed
I don't know, host5 seems to either be under quite a heavy load recently, or be stretched quite a bit.
When I signed up I was getting a meg+ transfer/second; now it seems I mostly get 50K/second, occasionally a few hundred K/second, but never up to the 1M+ I used to get.
-Ashen
Don't know if it was the box load or the link though.
@Ashen:
I don't know, host5 seems to either be under quite a heavy load recently, or be stretched quite a bit.
When I signed up I was getting a meg+ transfer/second; now it seems I mostly get 50K/second, occasionally a few hundred K/second, but never up to the 1M+ I used to get.
-Ashen
It's hard for me to tell what kind of transfer rate I am getting from my Linode on host5, but it doesn't seem appreciably slower now than it did a month or two ago.
One thing I have noticed is that when I ssh to my Linode, it takes a long time for the ssh process to log me in, even though my Linode is almost completely unloaded. But then after I am logged in, response is quite snappy. It's almost like it's taking host5 time to swap my Linode, or parts of my Linode, in, even though the memory used by the Linode is supposed to never swap.
Perhaps it's just that my Linode has to load pages of the ssh binary, or other files, off of disk; not-recently-used parts of my Linode's virtual disk won't be in host5's filesystem memory cache, so host5 has to go to the physical disk for them, and maybe that disk is really getting thrashed with all of the Linodes on it.
I'd suspect that the biggest bottleneck on Linodes is disk performance; it must be hell on a machine to service 32 different threads accessing completely unrelated parts of the disk at the same time.
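If you want to separate login latency from interactive response, one rough check (just a sketch; replace "mylinode" with your Linode's hostname) is to time a no-op login from your own machine:

# "true" exits immediately, so this measures connect + auth time only
time ssh mylinode true

Running it twice in a row should show whether the first login is paying a one-time cost (pages being faulted back in) while the second is fast.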
So I went one by one down the Linode hostname list, from li-1.members.linode.com to li-254.members.linode.com, and the same range for li2.
I found that about 50% of these addresses had no web servers on them, and of the ones that did have web servers, at least 75% of them just had "congratulations, you just installed Apache" type unconfigured web pages on them.
Perhaps things are slowing down because everyone is finally getting their web sites going …
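For what it's worth, a loop along these lines reproduces the survey (a rough sketch; the "apache" grep is only a crude guess at which pages are the unconfigured default):

#!/bin/sh
# Probe each Linode hostname for a web server.
for i in $(seq 1 254); do
    host="li-$i.members.linode.com"
    page=$(wget -q -O - -T 5 "http://$host/" 2>/dev/null)
    if [ -z "$page" ]; then
        echo "$host: no web server"
    elif echo "$page" | grep -qi apache; then
        echo "$host: looks like a default Apache page"
    else
        echo "$host: real content"
    fi
done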
@inkblot:
I leave that page up on every web server I run. You may have seen it at http://69.56.173.86/. The real websites are all vhosts on that machine. Try http://69.56.173.86/stats/.
Oh yeah, I saw that one quite a bit
@bji:
It's hard for me to tell what kind of transfer rate I am getting from my Linode on host5, but it doesn't seem appreciably slower now than it did a month or two ago.
Monitoring our total bandwidth on our switch at ThePlanet today we averaged around 2-5Mbits/sec – so there's nothing weird or wrong going on within our local network. Transfer rates between us and remote sites can vary depending on too many factors to list…
@bji:
One thing I have noticed is that when I ssh to my Linode, it takes a long time for the ssh process to log me in, even though my Linode is almost completely unloaded. But then after I am logged in, response is quite snappy. It's almost like it's taking host5 time to swap my Linode, or parts of my Linode, in, even though the memory used by the Linode is supposed to never swap.
Any idle process (or parts of it) in the eyes of Linux is a candidate to be swapped out. The difference is, when it wants to come back out of swap, the RAM is available. I imagine what you're seeing is a bit of disk I/O, too, as you mentioned.
How is the response time on successive ssh logins?
-Chris
@caker:
Any idle process (or parts of it) in the eyes of Linux is a candidate to be swapped out. The difference is, when it wants to come back out of swap, the RAM is available. I imagine what you're seeing is a bit of disk I/O, too, as you mentioned.
How is the response time on successive ssh logins?
-Chris
Successive logins are fine.
I was under the impression that our Linode physical RAM is pegged in memory and never swaps on the host. It's only files not cached in the Linode's in-memory filesystem cache that would have to be paged in. Right?
It's not like our Linode's physical memory ever gets swapped out, right?
If it does, then it works quite a bit differently than I thought when I first read the Linode description on the "What is a Linode?" page:
"If you ask for 64MB of ram, you're getting it - it will never be swapped out"
I guess the distinction is a little tricky, and I want to make sure I understand it. It was my belief that on the host (host5, for example), the memory which is set aside for the Linodes is marked somehow as not swappable. I thought maybe even that the host system is set up with no swap partition or file so that it can't swap out any Linode memory.
Of course, within a running Linode itself, the kernel may decide to swap memory out to the swap partition or file of the Linode, which is just a region of the Linode's virtual disk on the host. Once any memory is put there by the Linode, it is a candidate for being swapped out to the host's real hard disk, but not before.
Am I right?
@bji:
Successive logins are fine.
I was under the impression that our Linode physical RAM is pegged in memory and never swaps on the host. It's only files not cached in the Linode's in-memory filesystem cache that would have to be paged in. Right?
It's not like our Linode's physical memory ever gets swapped out, right?
If it does, then it works quite a bit differently than I thought when I first read the Linode description on the "What is a Linode?" page:
"If you ask for 64MB of ram, you're getting it - it will never be swapped out"
I guess the distinction is a little tricky, and I want to make sure I understand it. It was my belief that on the host (host5, for example), the memory which is set aside for the Linodes is marked somehow as not swappable. I thought maybe even that the host system is set up with no swap partition or file so that it can't swap out any Linode memory.
Of course, within a running Linode itself, the kernel may decide to swap memory out to the swap partition or file of the Linode, which is just a region of the Linode's virtual disk on the host. Once any memory is put there by the Linode, it is a candidate for being swapped out to the host's real hard disk, but not before.
Am I right?
The swapping out is done in your linode; as far as the host is concerned, it will never swap out your 64 meg of memory, or whatever physical memory you have allocated. Depending on what you are running in the linode, it may decide to swap out some process, so if sshd is not being used, it may be swapped out so some other process (apache etc.) can use that memory.
So you are right, in that it is your linode that is swapping things out and not the host.
Adam
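If you want to watch this from inside your Linode, the standard tools work as usual (nothing host-specific here, just ordinary Linux commands):

# run these inside the Linode, not on the host
swapon -s     # which swap devices exist and how much is in use
free -m       # the "Swap:" row shows total/used/free, in MB
vmstat 1 5    # the "si"/"so" columns show pages swapped in/out per second

If "so" stays at zero while you're idle, your Linode isn't swapping anything out itself.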
I like using vhosts on Apache - that way, the access logs for the real sites don't get cluttered up with junk from script kiddies and worm-infected machines vainly trying to take advantage of IIS exploits. Code Red/Nimda-style hits are still happening constantly.
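For anyone who hasn't set this up, a minimal name-based vhost sketch (hypothetical names and paths, and it assumes the stock "combined" log format is defined; the point is that requests with no matching Host header, like worm probes aimed at the bare IP, land on the first vhost and its junk log):

NameVirtualHost *

<VirtualHost *>
    ServerName catchall.example.com
    DocumentRoot /var/www/default
    CustomLog /var/log/apache/junk-access.log combined
</VirtualHost>

<VirtualHost *>
    ServerName www.example.com
    DocumentRoot /var/www/example
    CustomLog /var/log/apache/example-access.log combined
</VirtualHost>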
BJI,
Just to clarify, that statement is technically inaccurate and I will make sure it gets updated. The essence of the statement is true: that we don't oversell the machines, that there *is always $linodeRam * $numLinodes physical memory available* (at least) on the hosts. This is compared to other providers using technologies like Virtuozzo, whereby they advertise "256 MB OF RAM" with a little asterisk which says "* only 8MB physical"!
Without going too deep into the details, it is technically feasible that the host's VM system will choose least used pages from a Linode "memory file" to swap, but only for the benefit of using that now-free RAM for disk cache. I trust the way the VM system does this and it hasn't been a problem, so enough said.
I want the hosts to actually HAVE swapfiles, because there are other processes that can swap, like all the "screen"/remote console processes that remain idle 99% of the time. Plus, if the hosts ever needed more memory and didn't have it, init wouldn't have to start randomly killing Linodes :)
Now, think about this: The host keeps pages in its cache, just in case it needs them again. Your Linode also does the same thing. We get this double-caching effect, which is not very efficient. This is one of the things Jeff Dike (author of UML) and I have talked about, and I'm considering sponsoring the development of a better memory manager for UML.
Your Linode kernel is going to swap LRU (least recently used) pages out to swap, but those pages might stay inside the host's cache, so it's a positive and a negative…
-Chris
@caker:
BJI,
Just to clarify, that statement is technically inaccurate and I will make sure it gets updated. The essence of the statement is true: that we don't oversell the machines, that there *is always $linodeRam * $numLinodes physical memory available* (at least) on the hosts. This is compared to other providers using technologies like Virtuozzo, whereby they advertise "256 MB OF RAM" with a little asterisk which says "* only 8MB physical"!
Without going too deep into the details, it is technically feasible that the host's VM system will choose least used pages from a Linode "memory file" to swap, but only for the benefit of using that now-free RAM for disk cache. I trust the way the VM system does this and it hasn't been a problem, so enough said.
I want the hosts to actually HAVE swapfiles, because there are other processes that can swap, like all the "screen"/remote console processes that remain idle 99% of the time. Plus, if the hosts ever needed more memory and didn't have it, init wouldn't have to start randomly killing Linodes :)
Now, think about this: The host keeps pages in its cache, just in case it needs them again. Your Linode also does the same thing. We get this double-caching effect, which is not very efficient. This is one of the things Jeff Dike (author of UML) and I have talked about, and I'm considering sponsoring the development of a better memory manager for UML.
Your Linode kernel is going to swap LRU (least recently used) pages out to swap, but those pages might stay inside the host's cache, so it's a positive and a negative…
-Chris
Very interesting. Have you tried any testing with a Linode host that has no swap? I don't know what the hosts do besides run Linodes, but I hope it's very little, and I can't imagine whatever a host needs to do needs much RAM. You could use ulimit on the non-Linode processes on the host to keep them from using too much memory. Let's say that you set aside 256 MB for the host itself, apart from the memory used by the Linodes. Then if you turned off swap on the host, and ulimited the host processes so as to guarantee (or come as close to guaranteeing as possible) that the non-Linode processes don't use more than that 256 MB, you could be sure that all of the Linode memory is kept in RAM all the time.
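A sketch of what that host-side setup might look like (hypothetical numbers and daemon name; note that ulimit -v takes kilobytes):

#!/bin/sh
# On the host: stop swapping entirely...
swapoff -a
# ...then cap the address space of this shell and everything it
# launches at 256 MB (262144 KB), so non-Linode daemons can't grow
ulimit -v 262144
/usr/sbin/somedaemon   # hypothetical non-Linode host process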
The reason that I propose trying this is, I really think that guaranteeing that Linode memory stays in RAM is important. I'm sure that Linux's memory management was written to be efficient in the case where fetching pages from memory actually gets them from memory. The latency that is induced by having to fetch what the Linode thinks are memory pages, from the host on the disk, might wreak havoc with Linux's memory management efficiency. It's just a guess though; I don't run user-mode Linux (except at Linode) and I don't have any way to really know. It just seems weird to me that a Linode kernel which thinks it is storing data in RAM might actually be storing it indirectly on the host disk, and vice-versa - when a Linode pages memory out to its virtual disk, it's likely that it will just be a memcopy on the actual host since the host will cache the virtual disk write in memory.
I wonder if the Linux memory management code would work a lot better if RAM were always RAM and disk were always disk … it might be better for the Linode host system performance overall.
The other thing I wonder about, is how badly it would screw all of the other Linodes on a host system if someone set up a Linode with a 256 MB root partition, and a 2 GB swap file, and ran a process that simply looped through all of its virtual memory space, touching pages … and thus contended with other Linodes for their RAM …
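Such a hog is trivial to write, which is what makes the scenario worrying. A sketch (deliberately pathological; don't run this anywhere you care about):

#!/bin/sh
# Grow a shell variable by ~1MB per iteration; every append allocates
# and touches fresh pages until the Linode is deep into its swap.
chunk=$(dd if=/dev/zero bs=1024 count=1024 2>/dev/null | tr '\0' 'x')
hog=""
while true; do
    hog="$hog$chunk"
done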
I think the reason that three processes were sitting on the run queue waiting and waiting is that they were waiting for their Linode memory to swap in. That's my guess anyway. Once again I think that pegging Linode memory in RAM would probably really help alleviate this problem.
The long login time is not a bug, it's a feature! Network logins (ssh etc.) take several seconds as a security measure.
If you try to automatically crack passwords, and each attempt takes several seconds, you will only have several hundred attempts/hour. Back in the 1980s you could easily try ten thousand attempts/hour.
Make sure your root password contains numbers & letters.
I've been on host2 for three months and it's fine.
@gmt:
The long login time is not a bug, it's a feature! Network logins (ssh etc.) take several seconds as a security measure.
If you try to automatically crack passwords, and each attempt takes several seconds, you will only have several hundred attempts/hour. Back in the 1980s you could easily try ten thousand attempts/hour.
Make sure your root password contains numbers & letters.
I've been on host2 for three months and it's fine.
That may all be true, but it takes my ssh 1 - 2 MINUTES to log in late at night.
I just ssh'd in and it took only a few seconds to log in. I'm beginning to think that it's not a memory issue … because after long periods of inactivity during the day, it doesn't get any slower to log in via SSH.
I'm thinking that something runs nightly on host5 (around 1:15 am) that really, really slows the machine down, possibly using up lots and lots of memory as well.
-Chris
Today I've noticed that my ssh sessions just drop. I am on a very reliable network, so that cannot be the problem. Sometimes they drop for no reason and I get disconnected errors. I see nothing about this in /var/log/messages, so either my routers are doing something naughty with IP addresses or such, or something is up with the Linode network.
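One unglamorous possibility, if a router or firewall in between is dropping idle connections: newer OpenSSH clients can send periodic keepalives. A sketch for ~/.ssh/config on the client side (the option is real; the 60-second interval is just a guess):

Host *
    ServerAliveInterval 60

That makes the client poke the server every 60 seconds over the encrypted channel, so stateful routers don't time the session out as idle.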
@caker:
Yes, it's 30 cron jobs inside the Linodes running slocate at the same time. I've been going through the distro templates to make [slocate|find] only run once a week.
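On most distros that's just a matter of relocating the cron fragment. For example, on a Red Hat-style layout (the exact filename varies per distro, so treat this as a sketch):

# run updatedb weekly instead of daily
mv /etc/cron.daily/slocate.cron /etc/cron.weekly/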
While running only weekly will help us all, there will still be that weekly time that everything comes to a halt. Why don't you change the cron job a little to something like this, so that the start times of all the hogging processes are offset a little:
#!/bin/sh
# sleep some random time between 0 and 2 hours
# (note: $RANDOM is a bash/ksh feature, not plain POSIX sh)
sleep $((RANDOM % 7200))
# cpu/mem/disk hogger process runs below
I also made (or will have made) each distro's default cron-time a little different, so at least there's a mix of loads…
-Chris
@inkblot:
I have seen a lot of complaints about dropped connections in various threads here. I would just like to note that I have never experienced any dropped connections to or from my linode.
ditto.
@inkblot:
I take this as evidence that the Linode network is Just Fine.
Well, in all fairness it just proves that our host and its network are fine, but I'm still happy.
Kenny
@caker:
BJI,
Just to clarify, that statement is technically inaccurate and I will make sure it gets updated.
-Chris
Just a friendly reminder, it's been 1 month and the site hasn't been updated.
Thanks,
Bryan
@bji:
Just a friendly reminder, it's been 1 month and the site hasn't been updated.
Thanks,
Bryan
Updated.
Also, I have some more information – looking at the averages across the hosts, the most any UML process has 'in swap' is 200 kbytes. And I've never seen it go higher than that. Thought you'd like to know!
-Chris
@caker:
@bji:
Just a friendly reminder, it's been 1 month and the site hasn't been updated.
Thanks,
Bryan
Updated. Also, I have some more information – looking at the averages across the hosts, the most any UML process has 'in swap' is 200 kbytes. And I've never seen it go higher than that. Thought you'd like to know!
-Chris
Thought you'd like to know that I have been having great performance for the past couple of months. Haven't noticed any slowdowns at all. Fantastic! I'm very pleased with my Linode!