bash script to update svn repository
find . -path '*/.svn' -prune -o -print -exec chown www-data:www-data {} \;
The path will of course be set to an absolute path when this is added to the script; it was relative here just for testing.
Now here's my question: what's the best way to allow the script to run the chown command? The user obviously doesn't have permission to do it, so should I simply run
sudo ./myscript
since I can't add sudo directly to the command? (Unless it will just prompt me when I run the script, I guess?)
2 Replies
@mwaterous:
Now here's my question: what's the best way to allow the script to run the chown command? The user obviously doesn't have permission to do it, so should I simply run
sudo ./myscript
since I can't add sudo directly to the command? (Unless it will just prompt me when I run the script, I guess?)
Is this script being run interactively? Someone is going to need to answer a password prompt for sudo, and it's not clear to me that the person running the script will have that right?
If the script isn't being run interactively, then you could put it in the main /etc/crontab and specify it to run as a user with the necessary rights. Or it used to be that you could setuid a script, but besides the security risks, I'm not sure that works consistently in all current systems.
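For the non-interactive route, a system crontab entry could look something like this. Note the sixth field in /etc/crontab names the user to run as; the schedule and script path below are illustrative, not from the thread:

```shell
# /etc/crontab -- minute hour day month weekday USER command
# Runs the update nightly at 03:00 as www-data (path and schedule are examples)
0 3 * * *   www-data   /usr/local/bin/svn-update.sh
```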
There's another approach that I use in a similar situation that bypasses the need to change any ownership, which is to make the local checkout of the svn repository owned by www-data. Have the script that is doing the updating run as www-data (in my case it's a cgi-bin script triggered by a web form used to both preview and implement the update and thus runs as the web server). All the files will thus "naturally" be accessible to the web server.
You'll need to do a one time setup as the www-data user to authenticate against the SVN repository, but that's about it.
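That one-time setup can be as simple as running any svn command against the repository as www-data so the credentials get cached for later unattended runs. A sketch, with an illustrative repository URL and checkout path:

```shell
# Run once as www-data; svn will prompt for credentials and cache them
# in www-data's home directory for subsequent non-interactive updates.
# (URL and target path are examples, not the poster's actual setup.)
sudo -u www-data svn checkout https://svn.example.com/repo/trunk /var/www/staging
```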
What I do is have two trees on the server - a staging tree that is the actual checkout (and thus has the .svn directories), and the production tree that doesn't. The web page to control the update permits either updating the staging tree (running 'svn update'), or syncing the production tree with staging (using rsync locally). The staging tree is additionally accessible with a unique URL for testing before production deployment, and since the final deployment is a local rsync rather than a remote svn update (the repository is remote) the time to update the production copy is minimized. It also removes the need for .svn directories in the production tree.
So people work on their own copies of the web site as needed, and when ready, commit their changes, then access the web page, and first update the staging copy, then after any final tests, release to production.
– David
@db3l:
Is this script being run interactively? Someone is going to need to answer a password prompt for sudo, and it's not clear to me that the person running the script will have that right?
Just a preface, I'm completely new to writing bash scripts, so I'm sure there's a lot of nifty things I could do to this, but I'm completely oblivious to what they are.
With that said, the idea here is to fire the script by hand; it won't ever be run through cron, since I would run the risk of updating to a broken copy if I did that. As such I should be able to run
sudo ./svn-update.sh
and just enter my password at the prompt, right?
@db3l:
a staging tree that is the actual checkout (and thus has the .svn directories), and the production tree that doesn't.
How do you do this?
Right now both of mine, live and development, have the .svn directories. AFAIK the only way to grab a copy without them is svn export. Can you run export like co, where it will simply overwrite any modified files if you export into a directory that already contains files?