Very high disk I/O?

My server was experiencing very high disk I/O for several hours last night, and it was not very responsive during that time.

When it started, I checked what was running and didn't see anything other than the usual Apache processes. Load was spiking too, though it wasn't clear why. Things improved for a while, but then the issue came back overnight.

I see

ip_conntrack: table full, dropping packet.

in the logs several times, and then the server rebooted last night. I didn't reboot it myself, and hadn't in about 60 days.

Does this sound like a security issue? SYN flood? Any tips are appreciated.
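
In case it helps with diagnosis, this is how I've been checking the conntrack table (these are the old ip_conntrack paths; newer kernels use the nf_conntrack names instead):

# rough count of currently tracked connections
wc -l /proc/net/ip_conntrack
# the limit that triggers the "table full" message
cat /proc/sys/net/ipv4/ip_conntrack_max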

11 Replies

Argh, I figured it out: I had 13.6 GB of Apache log files on a server with a 15 GB disk. Well, time to … do … something!
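
For the record, I tracked the space down with something along these lines (paths are from my box; adjust to yours):

# total size of the Apache log directory
du -sh /var/log/apache2
# biggest offenders under /var/log, largest last
du -h /var/log | sort -h | tail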

The logrotate package will rotate the logs weekly (configurable).

Sure, I have it set to rotate logs daily actually. I like to save the logs for our records, so I have it set to keep up to a full year of logs.

Each day's access.log is about 1.5 GB uncompressed, which goes down to 100 MB. So they sure do take up a lot of space when you have a few months' worth!
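
For reference, a logrotate stanza for this kind of setup would look roughly like the following (path and details are hypothetical; adjust for your distro):

# /etc/logrotate.d/apache2 (hypothetical example)
/var/log/apache2/*.log {
    # rotate daily and keep a year of logs
    daily
    rotate 365
    # gzip rotated logs
    compress
    missingok
    notifempty
}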

@marshmallow:

Sure, I have it set to rotate logs daily actually. I like to save the logs for our records, so I have it set to keep up to a full year of logs.

Each day's access.log is about 1.5 GB uncompressed, which goes down to 100 MB. So they sure do take up a lot of space when you have a few months' worth!

I am biased since my main job is in computer security, but I like to keep some uncompressed logs around. How do you deal with reviewing the files when you need to? The only thing I can really think of is using some odd command line kung-fu like:

tar -xOzf logfile.tgz | grep "search string"

I can see that being a pain for large files. Perhaps there is a way to leave 7 days uncompressed and compress anything after that? Any thoughts?

Have you tried zgrep "search string" *.gz ?

@Xan:

Have you tried zgrep "search string" *.gz ?

I hadn't, but it did work.

Cool. There's zcat as well, whose function you can probably guess. Also bzcat and bzgrep.
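
A couple of examples, with made-up filenames:

# grep straight into gzipped logs
zgrep "search string" access.log.*.gz
# same idea, piped by hand
zcat access.log.2.gz | grep "search string"
# the bzip2 equivalent
bzgrep "search string" access.log.3.bz2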

Also, less is pretty good at figuring out how to display most compressed files.

Huh, the less that I'm using doesn't seem to do that, but zless and bzless work.

@Xan:

Huh, the less that I'm using doesn't seem to do that, but zless and bzless work.

Yeah, now that I look at it, it may be a Gentoo-specific thing. The functionality seems to be enabled by

export LESSOPEN='|lesspipe.sh %s'

and then a fairly substantial script in /usr/bin/lesspipe.sh

Not sure why other distros wouldn't be using it, though; it's pretty handy.
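
For what it's worth, Debian and its derivatives ship a lesspipe wrapper as well; as far as I know you enable it from your shell profile with:

# prints the LESSOPEN/LESSCLOSE exports for the current shell
eval "$(lesspipe)"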

@carmp3fan:

I can see that being a pain for large files. Perhaps there is a way to leave 7 days uncompressed and compress anything after that? Any thoughts?

I usually do use zcat | grep.

I think you could set logrotate to leave 7 days uncompressed and compress anything older than that, but I'm not certain whether that's built in as an option.

Usually it is set to compress logname.1 to logname.2.gz, move the old log to logname.1, and create a new current log file. So yesterday's log is left uncompressed by default until the next day. I think it does that in case a process is still writing to it, though.
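
Logrotate does have a delaycompress directive that postpones compression by exactly one rotation cycle, which would explain the behavior above; I don't think there's a built-in way to stretch that to seven cycles, though. A sketch (path hypothetical):

/var/log/apache2/*.log {
    daily
    rotate 365
    compress
    # leave the newest rotated log uncompressed for one cycle
    delaycompress
}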
