[RESOLVED] Linode not recommended for CPU intensive tasks
That is the answer I got from support when I asked about the reason for the recent performance degradation.
After the last "upgrade", some processes are taking more than three times as long, and CPU usage is much, much higher.
I have been with Linode for 5 years and have spent over $20,000. It was really good until now, but not any more. I checked with them years ago and their answer was more positive; they said "that is the kind of tasks we like to see at Linode" instead of recommending moving away.
It is a pity. I have made a partial migration to DigitalOcean, but I am not that convinced.
It would be good if Linode investigated the demand for CPU-intensive tasks and provided for that niche at a price that is mutually beneficial.
I wonder if anybody else is in a similar situation.
I was doing the numerical modelling that provides data for meteoexploration.com; the website is still at Linode, temporarily, but the processing is not any more.
All the best,
jgc
13 Replies
If your tasks can't be parallelized, you're going to have a bad time on newer CPUs. I'm having the same experience with ModelSim on "real" Xeon E5s: per core, they're just plain slower. The difference is that there are twice as many of them, which is where the improvement lies if applications can take advantage of it.
I blame ModelSim, because Chrome is wicked fast.
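The per-core vs. many-core tradeoff is easy to measure directly. A minimal sketch (the job sizes and process count here are made up for illustration, not from this thread):

```python
# Hypothetical benchmark: does a workload gain from more, slower cores?
import time
from multiprocessing import Pool

def busy_work(n):
    # CPU-bound loop: sum of squares from 0 to n-1
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

if __name__ == "__main__":
    jobs = [200_000] * 8  # made-up job sizes
    serial = timed(lambda: [busy_work(n) for n in jobs])
    with Pool(processes=4) as pool:
        parallel = timed(lambda: pool.map(busy_work, jobs))
    print(f"serial:   {serial:.2f}s")
    print(f"parallel: {parallel:.2f}s")
```

If the parallel time comes out close to serial divided by the process count, the task scales with core count; if not, single-thread speed matters more than how many cores the host offers.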
Also, unless you've purchased stock in Linode, you didn't invest $20,000. It's an operational expense, not a capital expense. Don't anthropomorphize business expenses and they'll be easier to cut loose if need be.
@caker:
Since this was brought up here, I'll provide some context. The question from the ticket was why things that take a few seconds to run sometimes take longer to run. What follows is one of the sentences from the response (which was only partially quoted in the original post): "I would not recommend running CPU intensive applications on our platform, as due to the nature of the shared virtualized environment, the amount of CPU time you can use varies wildly."
Running stuff is absolutely fine on Linode - it's what we're here for, after all. But, you need to keep in mind that ultimately this is a shared platform, so things may vary. I think that was the point the support rep was trying to relay.
-Chris
and this is why we love Linode, thanks caker for the reply.
Happy new year to all Linoders!
@jgcorripio:
I used to like Linode too, and very much.
Unfortunately, after the update I mentioned before, the core processes I was running were taking too long. An increase in run time of 200% to 300% is a bit excessive. I am perfectly aware that this is a shared environment, but such an increase looks like overloading on the part of Linode.
Contrary to the speculative, uninformed comments above, I was running a fully multithreaded, parallelized task: the WRF atmospheric model:
http://www.wrf-model.org/index.php The runs now execute on different servers, faster than in the best times at Linode, although I still see Linode as better than my present servers in other respects.
Have you tried opening a ticket to ask Linode about this 300% performance loss?
@sblantipodi:
Have you tried opening a ticket to ask Linode about this 300% performance loss?
Sounds odd. Since jgcorripio started with Linode, they have doubled the number of available cores and upgraded lots of their hosts. Compute jobs should be considerably faster, not slower.
Maybe lots of people started running CPU bound jobs?
sblantipodi,
I did; that is when I got the answer "I would not recommend running CPU intensive applications on our platform".
sednet,
You are right, it should be faster, but it wasn't.
And let me go back to the first post. This is not a complaint; it is a suggestion for a niche market. Linode can do what they please as long as they are clear about it, and customers are free to stay or go.
My question was whether there is enough demand for an offering based on CPU rather than memory. I would be happy to pay more for guaranteed CPU usage.
@jgcorripio:
My question was whether there is enough demand for an offering based on CPU rather than memory. I would be happy to pay more for guaranteed CPU usage.
That's what the CPU priority part of the plan is for. Only one plan type is put on each host machine, and the available CPU power is divided equally. Therefore, with a 1GB plan the CPU is split 40 ways.
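That 40-way split works out to a small guaranteed slice per guest. A back-of-the-envelope sketch (the host core count here is an assumption, not something the post states):

```python
# Fair-share CPU on a shared host split 40 ways.
# host_cores is a hypothetical figure, not from the thread.
host_cores = 8          # assumed physical cores on the host
guests = 40             # 1GB Linodes sharing the machine
fair_share = host_cores / guests
print(f"guaranteed fair share: {fair_share:.2f} cores per guest")
# → guaranteed fair share: 0.20 cores per guest
# Idle neighbors let you burst well beyond this, which is why
# CPU-bound runtimes vary so much on a shared platform.
```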
As for your particular issue, as was mentioned above, not every Linode host machine has the same CPU. Linode tries very hard to keep them consistent, but it's one seriously big fleet. It is to a certain extent a gamble whether you end up on one of the servers still running a 2009 CPU or one running a top-end 2013 CPU. If raw CPU performance is your top priority (for most people, older CPUs with many cores are fine, as multithreading typically reigns supreme when it comes to the web), consider resizing or re-creating your Linode until you end up with the desired CPU. That, or just kindly ask Linode for a migration to one.
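You can check both points from inside the guest. A sketch assuming Linux's /proc layout (the helper names are mine, not from the thread):

```python
# See which CPU your Linode landed on, and how much "steal" time
# noisy neighbors have cost so far. Linux-only paths.

def cpu_model(path="/proc/cpuinfo"):
    # The "model name" line reports the host's physical CPU.
    with open(path) as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

def steal_fraction(path="/proc/stat"):
    # First line: cpu user nice system idle iowait irq softirq steal ...
    with open(path) as f:
        fields = f.readline().split()[1:]
    values = [int(v) for v in fields]
    steal = values[7] if len(values) > 7 else 0
    total = sum(values)
    return steal / total if total else 0.0

if __name__ == "__main__":
    print("host CPU:", cpu_model())
    print(f"steal time so far: {steal_fraction():.1%}")
```

A consistently high steal fraction means the hypervisor is giving your cycles to other guests, which matches the variable runtimes described above.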
If you really do need a huge amount of guaranteed CPU power, it's probably going to be more economical for you to just get a dedicated server. While you can essentially rent a dedicated server (or half of one) by going with the biggest Linode plans, it's very expensive compared to what a dedicated server can be had for from OVH. That's the reality I believe the Linode support representative was trying to convey (without literally saying "don't use our service"): no matter which way you stretch it, a VPS is still a shared environment, and there is only so much CPU you can dedicate to one customer when 16 cores have to be split many ways before it becomes prohibitive to finance and organise.
@jgcorripio:
My question was whether there is enough demand for an offering based on CPU rather than memory. I would be happy to pay more for guaranteed CPU usage.
I vote no. As far as I remember, you are the only one asking for this.
I looked at the WRF website and it's certainly a nice project, but it's something that just doesn't fit a virtual server environment if you care at all about predictable runtimes. And all that CPU you are using slows down other people's Linodes. I don't know exactly what the requirements for that software are, but I suspect you might be better off building or buying fast machines and running this at home. Or using spot instances on Amazon may be an option if you don't need constantly running machines; you can save a lot of money that way. Spot c3.8xlarge instances seem to be ridiculously cheap in Tokyo right now and cheaper than on-demand everywhere else.
We marked this as resolved as we now offer Dedicated CPU Instances.