The disk of my Linode fills up very quickly
The disk of my Linode fills up very quickly. I have the impression I've been hacked. Where else could the problem be coming from? I'm a little worried.
Cheers
7 Replies
Could you be running some process that is stuck in an infinite loop, or one that runs too often and is filling up a log file? Maybe you are "under attack" and your Apache log is filling up.
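One quick way to check is to look for log files that stand out by size. A minimal sketch, assuming your logs live under /var/log (adjust the path if yours go elsewhere):
# Show the 20 largest files under /var/log
du -ah /var/log 2>/dev/null | sort -rh | head -n 20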
(I clean out my logs each night via a script using these commands:
#!/bin/bash
# -- RUN AS ROOT VIA CRON (so sudo is unnecessary here)
truncate --size=0 /var/www/html/xxxxx.com/logs/access.log
truncate --size=0 /var/www/html/xxxxx.com/logs/error.log
.
.
.
I could never get the "logrotate" program to work!!!)
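For illustration, a root crontab entry to run a cleanup script like that every night could look like the line below (the script path is a placeholder):
# Root's crontab: run the log-cleanup script nightly at 03:00
0 3 * * * /usr/local/bin/clean-logs.sh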
-Al
> I could never get the "logrotate" program to work!!!
To rotate apache2 logs, add this to each virtual host that does logging:
# Log file locations
LogLevel warn
ErrorLog "|/usr/sbin/rotatelogs -l /var/log/apache2/myvirtualhost.error.log.%Y.%m.%d 86400"
CustomLog "|/usr/sbin/rotatelogs -l /var/log/apache2/myvirtualhost.access.log.%Y.%m.%d 86400" combined
This will cause apache2 to write dated log files named /var/log/apache2/myvirtualhost.{access,error}.log.YYYY.MM.DD and start a new one every day (86400 seconds). I have a cron job that deletes log files older than 4 days:
#!/usr/bin/env bash
#
HTTPD_LOG_HOME=/var/log/apache2
LIMIT='+3' # keep this many days' worth of logs in addition to today's log
# DANGER WILL ROBINSON! You may have to adjust the file name regex
# to NOT find certain files you want to keep! Or, pipe the results through
# 'grep -v someregex' to remove those files from the found list.
#
## Access logs...
#
for i in $( \
/usr/bin/find "$HTTPD_LOG_HOME" -name '*access.log*' -type f -mtime "$LIMIT" \
)
do
/bin/rm "$i"
done
## Error logs...
#
for i in $( \
/usr/bin/find "$HTTPD_LOG_HOME" -name '*error.log*' -type f -mtime "$LIMIT" \
)
do
/bin/rm "$i"
done
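Note that the -name patterns and the variables are quoted above; unquoted, the shell can expand the globs before find sees them, and file names with spaces would break the loop. If your find supports -delete (GNU find does), each loop can also collapse to a one-liner:
# Same cleanup without the loop, using GNU find's -delete
/usr/bin/find "$HTTPD_LOG_HOME" -name '*access.log*' -type f -mtime "$LIMIT" -delete
/usr/bin/find "$HTTPD_LOG_HOME" -name '*error.log*' -type f -mtime "$LIMIT" -delete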
I realize this is not logrotate but this info may help the OP. For specifics on configuring logrotate, see here:
https://www.tecmint.com/install-logrotate-to-manage-log-rotation-in-linux/
You don't have to run logrotate yourself; most Linux distributions already ship a daily cron job (or systemd timer) that runs it. All you have to do is configure what you want logrotate to do for you.
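For example, a minimal logrotate drop-in for per-vhost logs like the ones earlier in this thread might look like this (the file name and log path are assumptions; adjust them to your setup):
# /etc/logrotate.d/mysite (hypothetical file name)
/var/www/html/xxxxx.com/logs/*.log {
    daily
    rotate 4
    compress
    missingok
    notifempty
    # copytruncate keeps Apache writing to the same open file
    copytruncate
}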
-- sw
I'm using nginx, I have 80 GB of disk and 4 GB of RAM, and the disk filled up completely in less than a week. I'm overwhelmed. What's going on?
You may find this other post from the Community Questions site helpful:
It has several different commands you can use to check your disk usage and track down where the largest files are located.
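In the meantime, a few standard commands will narrow things down (a generic sketch, not specific to that post):
# How full is each filesystem?
df -h
# Which top-level directories are the biggest?
du -h --max-depth=1 / 2>/dev/null | sort -rh | head -n 20
# Largest individual files on the root filesystem
find / -xdev -type f -size +100M -exec ls -lh {} + 2>/dev/null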