Help Me Organize My Code
I have built around 4 websites. Every now and then, when I want to make changes to the code, I first fetch all the files via FTP and the SQL database from the live site, work on my local machine, then move everything back to the server.
At times I also download the working version of the websites (all the files) and export the SQL, saving them on my local machine as a backup. So I have tons of folders like sitename-dateofdownload, and I have reached the point where they are consuming tons of space on my local machine.
Ideal scenario:
Some sort of simple version control, where all the files and SQL get mirrored on my local machine with a few clicks. I develop and test, and once I commit my changes, the original files and SQL on the server get backed up while my local changes are transferred to the server.
Sorry for being such a newbie; I am sure experienced developers will understand what I am trying to ask. I would really appreciate it if you could recommend the software, plus a link to some sort of tutorial.
Thanks
6 Replies
With devs who aren't familiar with SCM, ease of getting started is very important, especially with the ones who mostly "don't like these newfangled things that are just an additional annoyance".
I've had the best conversion success rate with Mercurial (http://mercurial.selenic.com/).
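To give a feel for how little ceremony is involved, here's a minimal sketch of the day-to-day Mercurial workflow (the ~/sites/mysite path is just an example):

    cd ~/sites/mysite
    hg init                  # turn the existing directory into a repository
    hg add                   # start tracking all current files
    hg commit -m "initial import of live site"
    # ...edit and test locally...
    hg commit -m "describe what changed"
    hg log                   # browse history; any earlier version can be restored

Every commit is a full restore point, so the sitename-dateofdownload folders become unnecessary.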
As for the backups, consider rdiff-backup or something else rsync-based.
Schedule the database export first, then the backup run including your website files and the database dump. (gzip --rsyncable is pretty useful for the bigger DB export files, too.)
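For example, the whole nightly run could be one small script along these lines (a sketch assuming MySQL and rdiff-backup; the database name, credentials handling, and paths are made up):

    #!/bin/sh
    # 1. Export the database first, compressed in an rsync-friendly way.
    mysqldump mysitedb | gzip --rsyncable > /var/www/mysite/db-dump/mysitedb.sql.gz
    # 2. Then back up the site files together with the fresh dump;
    #    rdiff-backup keeps space-efficient incremental history at the target.
    rdiff-backup /var/www/mysite /backups/mysite

Run it from cron and you get dated, deduplicated backups without the manual folder juggling.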
@rsk:
(gzip --rsyncable is pretty useful for the bigger DB export files, too.)
Whoa! I've never heard of gzip --rsyncable, and it doesn't seem to be in the gzip man page, although it is an accepted option.
Surely it will only work where the uncompressed input doesn't change size? And surely that isn't the case very often?
It's not included in upstream gzip: the --rsyncable option comes from a distribution patch (originally Debian's), which is why it isn't documented in the stock man page.
As for how it works, imagine it this way: normal gzip behavior is to use fixed-size blocks (I'm not sure whether that's measured against the input or the output), so if you make a change that affects the length of the input data, everything after that point in the output will be different, because the block boundaries shift relative to the input data. Suppose that instead, gzip started a new output block each time it encountered a newline in the input. That way, each line of a file would be compressed separately.
If you then changed one line, or even added or removed lines, the output blocks generated for the preceding and following (unchanged) lines would be the same as before; only the block(s) covering the change would differ, and rsync can see that only that part of the compressed file has changed.
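A quick way to see the effect for yourself (a sketch assuming a gzip that carries the rsyncable patch; the file names are made up):

    seq 1 1000000 > data.txt
    gzip --rsyncable -c data.txt > old.gz
    # Change a single line in the middle and recompress.
    sed 's/^500000$/changed line/' data.txt | gzip --rsyncable -c > new.gz
    # Force rsync's delta algorithm even locally and show how little it sends.
    rsync --no-whole-file --stats new.gz old.gz

The "Literal data" figure in the stats stays small; repeat the experiment without --rsyncable and it balloons, because every block boundary after the edit shifts.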
The rsyncable patch
And for historical reasons, the dump is a single .sql file with the fastest-growing tables near the beginning and around the halfway mark, so the file size changes and the old data inside shifts to a different position every time.
(Guess I could change it to dump separate files but it works, and I can be sure the tables have been dumped together, so…)
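For what it's worth, splitting the dump per table is only a few lines (a sketch assuming MySQL; the database name and output path are made up, and as noted above, separate dumps lose the guarantee that all tables were dumped together):

    #!/bin/sh
    DB=mysitedb
    OUT=/var/backups/$DB
    mkdir -p "$OUT"
    # One compressed, rsync-friendly dump file per table.
    for TABLE in $(mysql -N -e "SHOW TABLES" "$DB"); do
        mysqldump "$DB" "$TABLE" | gzip --rsyncable > "$OUT/$TABLE.sql.gz"
    done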