CPU and network traffic alerts from an idle server?
Just earlier I received 3 warnings:
1. "Your Linode, xxx, has exceeded the notification threshold (90) for CPU Usage by averaging 133.3% for the last 2 hours"
2. "Your Linode, xxx, has exceeded the notification threshold (5) for inbound traffic rate by averaging 5.58 Mb/s for the last 2 hours."
3. "Your Linode, xxx, has exceeded the notification threshold (5) for outbound traffic rate by averaging 15.30 Mb/s for the last 2 hours."
Have you experienced this? What should I make of these warnings, and any hints on the underlying root cause? What should I do about them? I suppose that network traffic counts against the quota (I am not sure if I need to worry about it).
I will probably dig deeper myself, but wondered if anybody else has anything to share.
Thanks a bunch!
Haidong
7 Replies
Log in to your server and check what is using all these resources (top/htop/iftop). Longview could help as well if you aren't able to find the culprit through regular methods.
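For example, something along these lines (eth0 is an assumption, your interface name may differ, and htop/iftop may need installing first):
top                          # live per-process CPU/memory
htop                         # same thing with a nicer interface
iftop -i eth0                # per-connection bandwidth usage
ps aux --sort=-%cpu | head   # quick list of the top CPU consumers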
Ok, a quick follow up now that I have a bit of time before bed time…
top showed many Python processes running under root. htop showed that the commands were
python ./svwar.py -d useri -v xx.xxx.xx.xxx(IP address)
and a few "python ./svcrack …" and "python ./svmap …"
So my machine was compromised and somebody was running sipvicious on it. I powered it down as soon as I figured that out this morning.
I don't know how the machine was compromised, though. I haven't destroyed it yet, so I guess I can look, although I am not sure I have the time/expertise to dig deeper. Any pointers welcome.
I will have to rebuild the machine. I still plan to use stock Debian 7 and will only allow SSH key pair authentication this time, hoping that prevents this from happening. I will also tinker with Fail2Ban a bit further.
This episode was certainly unsettling. Any pointers on how to make the machine more robust are appreciated!
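For the key-only part, I'm thinking of something like the following in /etc/ssh/sshd_config (assuming the stock Debian OpenSSH server; restart ssh after editing and keep a working session open in case of typos):
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes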
"apt-get update" re-downloads the list of packages. Especially needed if you added/changed any of the repos in /etc/apt/sources.list
"apt-get upgrade" actually does the update. installing nginx should have updated the software that nginx depends on but may not have updated the other stuff…
@haidongji:
Thanks Nuvini.
If you suspect SSH, you can check with the last command and /var/log/auth.log. However, I must ask: you did use a strong password, right? No dictionary or 5-letter passwords? Brute-forcing happens all the time, and as long as you have strong passwords they shouldn't be able to get in through SSH. Using keys is always good, though. Fail2ban helps with the log spam and the time it takes for brute-forcing, but if you use weak passwords it'll only delay things. Unless you're worried about log sizes, there's no need to use Fail2ban, as it does not increase security.
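For example (log paths as on stock Debian):
last -a | head                                    # recent logins and where they came from
grep 'Accepted' /var/log/auth.log                 # successful SSH logins
grep -c 'Failed password' /var/log/auth.log       # how many failed attempts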
Personally, I DROP SSH traffic from everyone except a few whitelisted IPs and use public/private SSH keys protected with passphrases.
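Roughly like this, where 198.51.100.7 stands in for your own IP (adapt it before applying, or you can lock yourself out):
iptables -A INPUT -p tcp --dport 22 -s 198.51.100.7 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP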
Make sure you create separate low-privilege users for the things you run, so you don't have everything running as root (a rough sketch below), and keep your software up to date, of course. Feel free to hop on the Linode IRC for further advice. I could also have a quick look at the security side, nothing major, just things like a port scan and the software versions used for the running services; perhaps it was something really simple that they used to get into your system.
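On the separate-users point, something like this, with myapp and /srv/myapp as placeholder names (adjust to the actual service):
adduser --system --group --no-create-home myapp
chown -R myapp:myapp /srv/myapp
su -s /bin/sh -c '/srv/myapp/start.sh' myapp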
Changing the default SSH port DOES improve security, because it reduces the probability that you'll be successfully attacked by the vast number of bots hammering the default port.
Using fail2ban or similar DOES improve security, because it reduces the probability that attacker X will manage to brute-force their way in on the next attempt (a minimal config sketch follows below).
Using stronger passwords DOES improve security, because it reduces the probability that attacker X will manage to brute-force their way in on the next attempt.
Using keys instead of passwords improves security against that particular vector by orders of magnitude.
The important relation to learn: reducing the probability of a successful attack = increasing system security. It's about vectors, different attack vectors. There's no one solution for everything, but you can reduce the attack surface against individual vectors; there's no single switch that covers them all.
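For reference, enabling it is about as minimal as it gets; a /etc/fail2ban/jail.local along these lines is enough (the jail is named ssh in the fail2ban shipped with Debian 7, sshd in newer releases; the numbers are just examples):
[ssh]
enabled  = true
maxretry = 3
bantime  = 3600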
It's not that I don't get where you're coming from; changing the port can be handy -if- a vulnerability is found, since scripts that only look for port 22 are less likely to target you. (Or just firewall SSH to whitelisted IPs only.)
I never said that passwords aren't good; I like long/strong passwords. I also said that Fail2ban does help somewhat, but unless you plan to leave your system running for 50 years while someone brute-forces your SSH for all that time, Fail2ban won't provide much additional security if you use a strong password (mine are 50-100 characters, or the maximum length allowed) or SSH keys to authenticate. If passwords aren't that strong, then yes, Fail2ban helps more, since it bans offending IPs after a handful of failed attempts.
Fail2ban only reduces the number of tries attackers can perform in a given amount of time.
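To put rough numbers on it (all assumed for illustration): with maxretry = 3 and a 10-minute ban, a single IP gets at most about 3 * (24 * 60 / 10) = 432 guesses per day, while even a random 10-character lowercase password has 26^10 ≈ 1.4 * 10^14 possibilities, so the limiting factor is the password strength, not the ban.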
And make no mistake, the internet is a cyber battlefield.
The Debian OpenSSL fiasco is a prime example of why you're wrong. Any IP from which someone is attempting unauthorized access should be considered potentially very dangerous and blocked, because you do not know what else that IP will try, or in how short a time span.
Sure, banning an IP does zero against a distributed attempt, but it nonetheless reduces the attack surface, and in some cases significantly.
And it's trivial to run fail2ban. We're not talking about spending hours to set up something to protect against an unlikely breach.