Root crontab script will not SSH to another Linode server
I have a script (trunk-it.sh) that must be run as root because it truncates some files owned by root.
sudo truncate --size=0 /var/www/html/xxxx.net/log/access.log
ssh xxx@xxx.xxx.xxx touch /home/xxx/backup-times/mars-truncate-time.txt
I run it in /etc/crontab (root crontab). The truncate command works, but the SSH does not. No error… it just does not do the touch on the other server.
If I run trunk-it.sh from the user crontab, the truncate commands don't work (as expected), but the SSH does.
I'm using standard SSH public-key pairs.
I made sure that the ssh config allows root login.
Why won't the root crontab "see" the keys so as to make a connection to the other Linode… assuming that is the issue?
Thanks.
4 Replies
First, assuming the ssh(1) command is able to log in correctly (not a given, IMHO), it's logging into the remote as root. root's login environment (especially PATH) may not be what you expect. That's what I would check first. When running stuff remotely, it's always best to specify the fully-qualified path name of the thing you want to run; e.g., /usr/bin/touch instead of just touch.
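For example, something like this in the script (the user, IP and paths are the placeholders from your post, and /root/.ssh/id_rsa is just a guess at where root's key would live — point -i at whichever private key the remote side actually trusts):
/usr/bin/ssh -i /root/.ssh/id_rsa -o BatchMode=yes xxx@xxx.xxx.xxx /usr/bin/touch /home/xxx/backup-times/mars-truncate-time.txt
BatchMode=yes makes ssh(1) fail right away instead of sitting at a prompt, and an explicit -i means you're not depending on whichever ~/.ssh the calling account happens to default to.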
Second, it's a really bad idea™ to have root logins enabled for sshd(8). It's a giant (YUUUUUGGGGEEEE!!!) security hole…even if the login credentials are established using certs. I'd find another way to do this if I were you.
To turn off root logins in sshd(8), set
PermitRootLogin no
in /etc/ssh/sshd_config. This is a global setting…no remote host anywhere will be able to log in with ssh xxx.xxx.xxx.xxx -l root. There are no exceptions.
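Remember to reload the daemon after editing the file, otherwise the change won't take effect. On most systemd distros that's something like this (the unit is named ssh rather than sshd on Debian/Ubuntu):
sudo systemctl reload sshd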
Hint: this option on ssh(1) will be your friend:
-l login_name
Specifies the user to log in as on the remote machine. This also may be specified on a per-host basis in the configuration file.
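With the placeholders from your post, that would look something like:
ssh -l xxx xxx.xxx.xxx /usr/bin/touch /home/xxx/backup-times/mars-truncate-time.txt
which is equivalent to the xxx@xxx.xxx.xxx form you're already using.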
-- sw
Thanks for the info.
I knew about the security issue with PermitRootLogin yes and only set it to that for testing. I set it back.
I will change the script with fully qualified paths and see what happens.
As I mentioned, the script works just fine when run as the user… both from crontab and with the ./ prefix.
As a work-around I simply took out the "ssh" command from the script and set the script to run in the root crontab at 02:15. I then set up a new script in the user crontab to do the SSH at 02:20 leaving five minutes for a job that takes about two seconds! It all works fine.
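For reference, the two entries end up looking roughly like this (the script names and paths here are just illustrative):
# /etc/crontab -- root runs the truncate-only script; note the extra user field
15 2 * * * root /usr/local/bin/trunk-it.sh
# user crontab (crontab -e as xxx) -- does only the ssh touch
20 2 * * * /home/xxx/bin/remote-touch.sh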
Thanks.
As a work-around I simply took out the "ssh" command from the script and set the script to run in the root crontab at 02:15. I then set up a new script in the user crontab to do the SSH at 02:20 leaving five minutes for a job that takes about two seconds! It all works fine.
Take it from someone who's been there and done that…this scheme is going to fall apart. I can guarantee it…
When you can write a decent-enough server in 5 lines of perl(1), ruby(1) or python(1) and send messages to it from shell scripts using socat(1), it makes no sense to do stuff like this…especially when you know deep down (admit it!) that I'm right…
What you describe may be good enough for now, but for the long term it's going to be completely inadequate (add about 5 more similar schemes with inter-related impacts and you'll understand what I'm talking about).
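A bare-bones sketch of the sort of thing I mean, using socat(1) on both ends (the port is arbitrary, there's no authentication here, and you'd want to firewall it down to the one peer at minimum):
# on the remote box: touch the timestamp file on every connection
socat TCP-LISTEN:9099,fork,reuseaddr SYSTEM:'/usr/bin/touch /home/xxx/backup-times/mars-truncate-time.txt'
# in trunk-it.sh, right after the truncates
echo done | socat - TCP:xxx.xxx.xxx.xxx:9099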
-- sw
this scheme is going to fall apart… you know deep down (admit it!) that I'm right…
You MAY be right.
I'm 73 years old. I've been writing computer code for 47 years (started with Ross Perot's EDS in 1974, right out of grad school). I don't know everything but I know a few things.
This code is not mission-critical. It is not even mission-necessary. We'll see how it goes before adding more time and resources to the system.
Sometimes (actually more often than most programmers want to admit) good enough is good enough.
I heard this quote from Ross Perot when I first started out:
"Give them the third best to go on with; the second best comes too late, the best never comes." - Watson-Watt, who developed early warning radar.
YMMV.