Postfix/amavis issue
Aug 14 21:49:10 mail postfix/smtp[1032]: connect to 127.0.0.1[127.0.0.1]:10024: Connection refused
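From what I can tell, 10024 is the port amavisd-new listens on for mail handed off by postfix, so this error means amavis is not accepting connections when it happens. A quick check I have been using (assumes net-tools is installed):

netstat -ltnp | grep 10024   # is anything listening on the amavis port?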
In addition, CPU usage and I/O spike around the time these problems occur. I am not sure whether the CPU and I/O load causes the problem, or the problem causes the load. I have tried adjusting the number of concurrent Amavis processes without much luck, and when I run top, neither CPU nor memory shows as maxed out. I am a newbie at Linux sysadmin and lost at this point. Any help would be much appreciated.
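In case it matters, the concurrency setting I have been adjusting is $max_servers in the amavis config. On a Debian/Ubuntu-style layout it lives in a conf.d override file; the path and the value shown here are just what I tried, not a recommendation:

# /etc/amavis/conf.d/50-user  (Debian/Ubuntu layout; path may differ on other distros)
use strict;

# Number of pre-forked amavisd children; each child holds a full Perl
# interpreter in memory, so fewer children means less RAM used.
$max_servers = 2;

1;  # amavis config files must return true

From what I have read, $max_servers should match the maxproc column of the smtp-amavis service in postfix's master.cf, so postfix does not try to open more connections than amavis will accept.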
Thanks,
George
The mail server went down tonight, and I found one additional clue in the log that suggests it ran out of memory:
Aug 19 21:47:47 mail amavis[18888]: (!)Net::Server: 2012/08/19-21:47:31 Bad fork [Cannot allocate memory]\n at line 166 in file /usr/share/perl5/Net/Server/PreForkSimple.pm
Aug 19 21:47:50 mail amavis[18888]: Net::Server: 2012/08/19-21:47:48 Server closing!
Running out of memory late on a Sunday night seems odd, since the server is under a light load. Regardless, I bought another 90MB of RAM, but the server consumed it immediately.
Any suggestions on how to deal with this would be much appreciated.
Thanks
George
3 Replies
Based on this from your post:

Aug 19 21:47:47 mail amavis[18888]: (!)Net::Server: 2012/08/19-21:47:31 Bad fork [Cannot allocate memory]\n at line 166 in file /usr/share/perl5/Net/Server/PreForkSimple.pm

the box has no memory left when amavis tries to fork. On a small VPS that is usually Apache's doing, so try lowering MaxClients so Apache cannot eat all the RAM before amavis gets any.

Edit: But you purchased more memory and it was consumed immediately. I wonder if you purchased enough?
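A rough way to size MaxClients (assuming the prefork MPM; the worker process is apache2 on Debian/Ubuntu, httpd on Red Hat-style distros):

free -m                        # how much RAM is actually free
ps -ylC apache2 --sort=rss     # RSS column = resident size of each worker, in KB
# MaxClients ~= (RAM you can spare for Apache) / (average worker RSS)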
I played around with the Apache settings, and after tweaking them several times I am now set up as follows:
StartServers 1
MinSpareServers 3
MaxSpareServers 9
ServerLimit 9
MaxClients 9
MaxRequestsPerChild 3000
I also raised the swap size to equal the RAM size. I realize Linode recommends half the RAM size, but that did not seem to be enough: swap filled up within half an hour of running at the recommended value.
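In case it helps anyone else, growing swap by adding a swap file looks something like this; the /swapfile path and 512MB size are just examples (on a Linode, the swap image itself is resized from the dashboard):

dd if=/dev/zero of=/swapfile bs=1M count=512   # 512MB swap file (example size)
chmod 600 /swapfile                            # swap should not be world-readable
mkswap /swapfile
swapon /swapfile
# to make it permanent, add this line to /etc/fstab:
# /swapfile none swap sw 0 0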
The PHP memory limit was also set to 128M, which I lowered to 64M. This did not have much impact, but none of the sites on the server need more than 64M, so it seemed like a reasonable setting to change.
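For reference, that is the memory_limit directive in php.ini; the path here is the Debian/Ubuntu PHP 5 layout, so adjust for your distro:

; /etc/php5/apache2/php.ini
memory_limit = 64M

Apache needs a reload afterwards (e.g. service apache2 reload) for the change to take effect.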
With the current settings it's running OK right now. Hopefully it stays that way, but only time will tell, since the problem is intermittent.