Guaranteed Service Without Problems

Hello,

I have researched and read a lot of topics about Linode's CPU consumption policy, and it is very lax. For example, I can see my usage growing steadily: it started at 30-40% on average and is now reaching 80-100%. From my research I understand that a "neighbor happiness" policy is in effect, and this really worries me.

I do not know at what point my "neighbors" become unhappy. Is it when my CPU consumption goes higher than 100% (one core)? When it goes above 300% (three cores)? And so on…
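For reference, this is roughly how I measure my usage in "multiples of a core" (a minimal Python sketch, assuming the psutil package is installed; per-core percentages sum past 100% on multi-core nodes):

```python
import psutil

# Sample each core's utilization over one second.
per_core = psutil.cpu_percent(interval=1, percpu=True)

# Summing the per-core figures gives the top(1)-style number used in
# this thread: 100% = one core fully busy, 300% = three cores, etc.
total = sum(per_core)
print(f"{len(per_core)} cores, total CPU: {total:.0f}%")
```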

This lax policy is going to drive me crazy. I need to grow and scale in the coming months, and to reduce points of failure (POF) I want to scale as far as possible without problems. Uptime is my priority; that's why I need to minimize points of failure (getting as much as possible out of every server) while scaling horizontally at the same time.

So, what is my CPU limit on, for example, the cheapest plan?

Suppose I get the Linode 16GB plan at $320/month: will I have no neighbors then? Would that allow me to use 800% of the server's CPU WITHOUT ANY PROBLEM?

PS: Early in my time with Linode I had peaks of 350-400% CPU and never received a warning. But what exactly is the "neighbor happiness" policy? Does it kick in when a neighbor complains about CPU contention on their node? When does Linode take action? In other words: when does Linode consider me in violation?

This gives me headaches!

Thanks.

9 Replies

Your questions are probably better asked of Linode's Support team.

There's a point where a shared VPS no longer suits a particular growing project's needs. At that point you're best off looking at either a dedicated or co-located box. Neither has any "neighbors" to worry about.

This is my 4th day using Linode, on the cheapest plan (1024), and I have to say it's the best VPS I've ever had, lol. My Linode runs really well with no slowdowns. My network speed is excellent, and as for CPU, I haven't hit any limit or slowdown. If you are being limited, maybe it's the datacenter you're in or the host server you've been placed on.

@vonskippy:

There's a point where a shared VPS no longer suits a particular growing project's needs. At that point you're best off looking at either a dedicated or co-located box. Neither has any "neighbors" to worry about.

That's obvious. But I went with Linode for its growth tools (like NodeBalancer and the like), which other providers don't offer. Those other options are outside the scope of this topic.

@Chrisnetika:

This is my 4th day using Linode, on the cheapest plan (1024), and I have to say it's the best VPS I've ever had, lol. My Linode runs really well with no slowdowns. My network speed is excellent, and as for CPU, I haven't hit any limit or slowdown. If you are being limited, maybe it's the datacenter you're in or the host server you've been placed on.

My experience was different. On my 7th day, at 8 AM (GMT-3), I started having serious problems with the server; the team notified me of an isolated host problem and said they had to restore from a backup. In other words, I lost about 12 hours of database data because my two DB backups were corrupted, and about $110 USD in outsourced technical service. So I started off on the wrong foot; imagine all that in the first week of service. But the past is the past: I want to grow with Linode, just with clear rules.

When you're working on a cloud platform, you shouldn't think about scaling vertically (using a bigger and bigger node), you should think about scaling horizontally (using more and more nodes). In a proper cloud infrastructure, performance variations between individual nodes shouldn't be as big a problem, because you scale your number of nodes to handle any additional load.

I disagree with VonSkippy; what he says only applies if you're only ever occupying a single node/server/etc. If you're building for the cloud, you can handle any amount of growth by scaling. As an example, Netflix's entire infrastructure runs on a huge number of VPS; they're entirely hosted by Amazon EC2.
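To make the arithmetic concrete, here's a toy sketch (my own illustration, not a real autoscaler) of the horizontal-scaling decision: pick a CPU budget per node and grow the fleet so every node stays under it:

```python
import math

def nodes_needed(total_load_pct: float, per_node_budget_pct: float,
                 min_nodes: int = 2) -> int:
    """Fleet size that keeps each node under its CPU budget.

    total_load_pct: aggregate CPU demand, where 100 = one busy core.
    per_node_budget_pct: CPU you allow one node to use (a hypothetical
        figure: derive it from your plan's core count and comfort margin).
    min_nodes: floor for redundancy, so one node is never a single
        point of failure.
    """
    return max(min_nodes, math.ceil(total_load_pct / per_node_budget_pct))

# Example: 900% aggregate demand with a 300% budget per node -> 3 nodes.
print(nodes_needed(900, 300))
```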

@Guspaz:

When you're working on a cloud platform, you shouldn't think about scaling vertically (using a bigger and bigger node), you should think about scaling horizontally (using more and more nodes). In a proper cloud infrastructure, performance variations between individual nodes shouldn't be as big a problem, because you scale your number of nodes to handle any additional load.

I disagree with VonSkippy; what he says only applies if you're only ever occupying a single node/server/etc. If you're building for the cloud, you can handle any amount of growth by scaling. As an example, Netflix's entire infrastructure runs on a huge number of VPS; they're entirely hosted by Amazon EC2.

It seems to me it's both vertical and horizontal. First, the node needs the vertical capacity to meet a minimal load with all software requirements met; an Apache node stack is not equal to a DB2 node stack, for example. From there, as the client load increases, it becomes a horizontal scaling problem.

I agree that in the end it's mostly horizontal.
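As a rough illustration of the "vertical first, then horizontal" idea (the plan names and core counts below are placeholders, not Linode's actual lineup):

```python
import math

# Hypothetical plans: (name, core count). Substitute real plan specs.
PLANS = [("small", 1), ("medium", 4), ("large", 8)]

def size_then_scale(baseline_pct: float, total_pct: float,
                    headroom: float = 0.5):
    """Vertical step: pick the smallest plan whose cores cover one node's
    baseline load, leaving `headroom` free for peaks and neighbors.
    Horizontal step: node count needed to absorb the total load."""
    for name, cores in PLANS:
        budget = cores * 100 * (1 - headroom)  # usable CPU per node, in %
        if budget >= baseline_pct:
            return name, max(2, math.ceil(total_pct / budget))
    raise ValueError("baseline exceeds the largest plan; shard the workload")

# One app node needs ~150% steady; the fleet must absorb ~1200% overall.
print(size_then_scale(150, 1200))  # -> ('medium', 6)
```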

@Guspaz:

When you're working on a cloud platform, you shouldn't think about scaling vertically (using a bigger and bigger node), you should think about scaling horizontally (using more and more nodes). In a proper cloud infrastructure, performance variations between individual nodes shouldn't be as big a problem, because you scale your number of nodes to handle any additional load.

I disagree with VonSkippy; what he says only applies if you're only ever occupying a single node/server/etc. If you're building for the cloud, you can handle any amount of growth by scaling. As an example, Netflix's entire infrastructure runs on a huge number of VPS; they're entirely hosted by Amazon EC2.

I absolutely agree with the "think horizontal" mindset. But don't you still need to know your "vertical power"? Otherwise, how will you know each node's real CPU capacity? Sure, I can have 10 nodes with 1 core, 8 cores, or 20 cores each, but how much of that can I use CONTINUOUSLY (note: CONTINUOUSLY)? I'm not talking about peak consumption.

I absolutely agree with that, but it doesn't mean the policy isn't too lax.

Finally, I have a concrete answer:

> Hello,
>
> This would be entirely up to the host, but a safe range would be within the 300% range. Any higher than that and you will start causing contention within your host, with hitting about 500% causing noticeable effects to your neighbours. Pegging your cores at 800% would be entirely bad.
>
> As such, no single Linode should ever have to be consuming that much CPU; with the right configuration you can stay within healthy parameters and serve a lot of content at once. Not even the production servers of large companies use that much CPU (though they have multiple Linodes to spread the usage).
>
> There would be no maximum CPU consumption per plan; it would merely be dependent on how badly the user taxes the host.
>
> If you have any other questions please do not hesitate to let us know!
>
> Regards,
>
> Soh
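
Based on those numbers, this is the rough watchdog I plan to run on each node (my own sketch; the 300%/500% thresholds come from the reply above, and psutil is assumed):

```python
import psutil

SAFE_PCT = 300   # per the support reply: the safe sustained range
NOISY_PCT = 500  # beyond this, neighbours notice; 800% is "entirely bad"

def check_cpu(window_s: int = 60) -> str:
    """Average total CPU over a window, in 100%-per-core units."""
    total = sum(psutil.cpu_percent(interval=window_s, percpu=True))
    if total >= NOISY_PCT:
        return f"ALERT: {total:.0f}% CPU, neighbours will notice"
    if total >= SAFE_PCT:
        return f"WARN: {total:.0f}% CPU, above the safe range"
    return f"OK: {total:.0f}% CPU"

print(check_cpu())
```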
