Linode is crashing even when my application is idle

My Linode has 512MB of memory, and the application runs fine without any issues both locally and in production (the Linode server). The application's memory consumption stays within the specified JVM arguments -xms -xmx, with values between 128MB and 256MB, monitored using JavaMelody.

But sometimes the OOM killer kills the entire Java process, taking my application down. If the application were consuming a lot of memory at the time, that would be understandable, but I am sure the application is idle when this occurs.

Please let me know why the OOM killer is triggered even though my application is idle and consuming only a small amount of memory. What should I do to overcome this issue? What other factors could be causing it? Any suggestions are welcome.

```
Jun 12 06:55:49 li552-90 kernel: [ pid ]   uid  tgid total_vm     rss nr_ptes swapents oom_score_adj name
Jun 12 06:55:49 li552-90 kernel: [ 1335]     0  1335     4339       0      13      103             0 upstart-udev-br
Jun 12 06:55:49 li552-90 kernel: [ 1337]     0  1337     5357       0      15       88         -1000 udevd
Jun 12 06:55:49 li552-90 kernel: [ 1908]     0  1908     5356       2      14       89         -1000 udevd
Jun 12 06:55:49 li552-90 kernel: [ 2008]     0  2008     3795       0      13       55             0 upstart-socket-
Jun 12 06:55:49 li552-90 kernel: [ 2112]     0  2112     5356       1      14       89         -1000 udevd
Jun 12 06:55:49 li552-90 kernel: [ 2293]     0  2293     1814      38       7       88             0 dhclient3
Jun 12 06:55:49 li552-90 kernel: [ 2374]     0  2374    12487      30      29      122         -1000 sshd
Jun 12 06:55:49 li552-90 kernel: [ 2390]   102  2390     5952      34      16       37             0 dbus-daemon
Jun 12 06:55:49 li552-90 kernel: [ 2402]   101  2402    63523     478      27      621             0 rsyslogd
Jun 12 06:55:49 li552-90 kernel: [ 2428]     0  2428     4225       5      13       36             0 atd
Jun 12 06:55:49 li552-90 kernel: [ 2429]     0  2429     4776      25      15       35             0 cron
Jun 12 06:55:49 li552-90 kernel: [ 2480]   106  2480     7359      36      18       53             0 ntpd
Jun 12 06:55:49 li552-90 kernel: [ 2831]     0  2831     3186       1      12       38             0 getty
Jun 12 06:55:49 li552-90 kernel: [12774]   108 12774   337577    9632     109    11118             0 mysqld
Jun 12 06:55:49 li552-90 kernel: [19612]   103 19612    52588    2630      67     3881             0 whoopsie
Jun 12 06:55:49 li552-90 kernel: [14698]   107 14698   709808   67871     356    49443             0 java
Jun 12 06:55:49 li552-90 kernel: [19000]     0 19000     8459      71      22        8             0 cron
Jun 12 06:55:49 li552-90 kernel: [19002]     0 19002     1098      25       8        0             0 sh
Jun 12 06:55:49 li552-90 kernel: [19005]     0 19005     1073      29       8        0             0 run-parts
Jun 12 06:55:49 li552-90 kernel: [19008]     0 19008     1098      36       9        0             0 apt
Jun 12 06:55:49 li552-90 kernel: [19123]     0 19123    34453   22129      71        0             0 update-apt-xapi
Jun 12 06:55:49 li552-90 kernel: Out of memory: Kill process 14698 (java) score 619 or sacrifice child
Jun 12 06:55:49 li552-90 kernel: Killed process 14698 (java) total-vm:2839232kB, anon-rss:271484kB, file-rss:0kB
```

Here total-vm is larger than my memory: how could this happen? How can Java be consuming up to 2.7GB of memory when my Linode has only 512MB?
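
In case it helps, here is how the virtual size (which total-vm reports) can be compared with the resident size for that process (a generic ps invocation; PID 14698 is taken from the kill message above):

```
# VSZ is the virtual address space in KB (what total-vm reports);
# RSS is the physical memory actually resident in KB.
ps -o pid,vsz,rss,cmd -p 14698
```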

7 Replies

Not that this addresses your problem directly, but you're eligible for a free upgrade to 1024 MB of RAM.

Do you mean that if I upgrade my RAM to 1024MB this won't occur, or what?

No, perhaps the problem would still occur. But if you're running out of memory, doubling your RAM certainly wouldn't hurt…

If the application is using obscene amounts of memory, it's quite possible that the problem lies in the application itself.

Also, what are the actual command-line arguments used? (The memory-related arguments are -Xms / -Xmx AFAIK, but you wrote -xms -xmx?)
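
For reference, a typical invocation with those flags would look something like this (a generic sketch; myapp.jar is a placeholder, not your actual application):

```
# Generic example; myapp.jar is a placeholder for the real jar.
# -Xms sets the initial heap size, -Xmx caps the maximum heap size.
# Note that -Xmx limits only the Java heap; thread stacks, the code
# cache and other native allocations come on top of it.
java -Xms128m -Xmx256m -jar myapp.jar
```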

Could you please tell me why these kinds of issues occur? Is this a technical issue on Linode's side, or do I have to configure something to eradicate it? I need more explanation of this issue and its cause. I don't have a clear idea of the external factors that trigger the OOM killer. I need to know why the OOM killer is invoked to kill processes, and how to find the actual cause of this. Need help…

Thanks in advance.

Isn't it plausible that, with whatever else is running on your server, a 256 MB Java process is actually causing you to use up all available RAM?

@Adcfdata:

Could you please tell me why these kinds of issues occur? Is this a technical issue on Linode's side, or do I have to configure something to eradicate it? I need more explanation of this issue and its cause. I don't have a clear idea of the external factors that trigger the OOM killer. I need to know why the OOM killer is invoked to kill processes, and how to find the actual cause of this. Need help…

Thanks in advance.

The issue does not sound Linode-specific. (It may be related to you having relatively little memory to play with.)

The general idea wrt the OOM killer:

If the kernel finds that you are dangerously close to being out of memory (essentially, when the choice is between a kernel panic and somehow freeing memory), the OOM killer is run.

The OOM killer selects the process that, according to some weird metrics, would be most beneficial to get rid of in terms of resolving the memory shortage.
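
As a rough illustration (the exact scoring varies between kernel versions), each candidate process gets a "badness" score that you can inspect, and oom_score_adj biases it; that is why udevd and sshd show -1000 in your log, as they are exempted from killing:

```
# Show the "badness" score the OOM killer would use for a process.
# /proc/self works for the current shell; substitute another PID as needed.
cat /proc/self/oom_score

# Bias the selection: higher means "kill me first", -1000 exempts the
# process entirely (root required). udevd/sshd carry -1000 in the log above.
echo -500 | sudo tee /proc/self/oom_score_adj
```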

I would suggest you focus on not using too much memory. Maybe some monitoring of memory usage would help track down what is going on before this happens?
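
For example, a minimal sketch (assuming cron, which already appears in your process list; the file path and interval here are arbitrary):

```
# Hypothetical /etc/cron.d/memlog entry: every 5 minutes, append a
# timestamp, overall memory figures and the top memory consumers to a log.
*/5 * * * * root (date; free -m; ps -eo pid,rss,vsz,comm --sort=-rss | head -n 10) >> /var/log/memlog 2>&1
```

Then, after the next OOM kill, the log shows which processes were growing in the minutes leading up to it.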
