Need help determining the entry point for attackers

Hi all,

I've been having a problem on my Linode lately where it's pushing out a ton of bandwidth and has the CPU cranked. If I check the top processes, it's usually something like "ls -la", "echo 'find'", or a netstat command taking all the CPU, all running as root. Clearly someone has found a hole somewhere and is able to run seemingly the same command over and over, looking for certain files. I've been mitigating by checking netstat for established connections, finding the IPs connected on strange ports, looking them up (they're typically in China), then blocking them via iptables. But every few days or so, a new IP starts the same thing.
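For anyone following along, here's a minimal sketch of the triage loop described above, with a placeholder IP; ss is the modern replacement for netstat and ships with recent Ubuntu:

```
# List established TCP connections along with the owning process
sudo netstat -tnp | grep ESTABLISHED
sudo ss -tnp state established

# Block a hostile address (203.0.113.45 is a placeholder)
sudo iptables -A INPUT -s 203.0.113.45 -j DROP
```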

So my question is: how can I figure out where they're coming in, and how do I lock things down? I just run a regular LAMP stack with Varnish on the latest Ubuntu, and I've disabled SSH for root. I've also checked the Apache logs to see if a nefarious script is being accessed, but didn't see anything out of the ordinary; plus, the port they're usually connected on is non-standard (it was port 55417 this time).
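One way to start answering that is to map the connection on the odd port back to a process and binary; a rough sketch, using the port 55417 mentioned above (the PID placeholder comes from the first command's output):

```
# Which process owns the connection on port 55417?
sudo lsof -i :55417

# Inspect that process's binary and working directory
sudo ls -l /proc/<PID>/exe /proc/<PID>/cwd
```

Note that a rootkit can hide processes from these tools, so a clean result here doesn't prove a clean system.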

Any help is appreciated!

Kevin

6 Replies

You need to scrub your VPS and start fresh - you've been compromised and there is no guaranteed way to clean it.

This is why daily verified backups are oh so important.

As vonskippy said, you need to start fresh.

As for how it happened, it could be any of a number of things. Common attack vectors (a sketch of the corresponding fixes follows this list):

SSH brute-force attacks: have you disabled SSH password access?

Compromised web scripts: if you're running open-source software, is it all up to date?

Out-of-date packages: is your system fully patched?
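To make those checks concrete, here's a minimal sketch of the usual Ubuntu hardening steps; the sshd_config directives are standard OpenSSH options, and the commands assume you already have key-based login working (otherwise the PasswordAuthentication change will lock you out):

```
# In /etc/ssh/sshd_config, set:
#   PermitRootLogin no
#   PasswordAuthentication no
# then reload sshd to pick up the change:
sudo service ssh reload

# Bring every installed package up to date
sudo apt-get update && sudo apt-get upgrade
```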

Thanks, all. I feel like it was an SSH brute-force attack; when it first started, SSH was the top process. I was able to check root's bash history to see what they did and tried to clean up as much as I could, and I also upgraded Ubuntu and updated packages. I've since disabled SSH for root, but they seem to still be getting in. I've been checking the Apache logs for script access, but things look normal during attacks. Yes, it looks like I might have to start clean. I may try disabling SSH password access first and see how that goes. I have data backups and I'm not storing any sensitive data, so I'm willing to experiment a bit further first. Thanks again, I've learned a lot from this experience.
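If it really was SSH brute force, the evidence should be in the auth log. A quick sketch of what to look for (the paths and the awk field offset assume Ubuntu's default sshd log format):

```
# Failed password attempts, grouped by source IP
grep 'Failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head

# Successful logins, to see when and from where they actually got in
grep 'Accepted' /var/log/auth.log
```

Keep in mind the attacker had root, so the logs themselves may have been scrubbed or edited.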

No, you MUST start fresh; there is no way to "clean up" a compromised system. Once in, an attacker's first step is to plant several back doors, and unless you verify EVERY package against its pre-hack condition, you have no clue where those backdoors are. Start fresh; anything else is a win for the hackers and a waste of time for you.
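For what it's worth, Debian/Ubuntu does ship per-package checksums you can verify installed files against, e.g. with debsums. The caveat stands, though: on an already-rooted box the checking tools themselves may be lying to you, so a sketch like this can confirm you've been tampered with but can never prove you're clean:

```
# Compare installed files against the md5sums shipped in their packages
sudo apt-get install debsums
sudo debsums -s    # -s: report only files that fail verification

# Newer dpkg releases can do a similar check
sudo dpkg --verify
```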

That's a good point. Yes, it looked from the bash history like they replaced some core binaries, so it's hard to tell what else they were, and still are, able to do. I guess I'll be starting from scratch and doing a better job of locking things down from the start. Thanks again!

Unfortunately, I can say "been there, suffered that" - being hacked seems to be one of those life lessons everyone learns first hand at least once (and hopefully only once).
