Moving a big MySQL database from another VPS to Linode

Hi

I am trying to move my site to Linode and I have a big database. The dump .sql file is around 20GB. I have copied it to my Linode server and tried to import it into MySQL, but the import fails: it exits suddenly, many of the tables end up empty, and I get no error messages or notifications of any kind. At the same moment I am also logged out of my SSH session.

I am using

mysqldump -u root -p<password> -Q --add-drop-table -O add-locks=FALSE -O lock-tables=FALSE favorious_production | gzip -c > production_dump.sql.gz
scp production_dump.sql.gz root@favorious.co.uk:~

on the other VPS.

Then on my Linode:

gunzip production_dump.sql.gz
mysql -u root -p favorious_production < production_dump.sql
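
(Aside: the separate gunzip step can also be skipped by streaming the decompressed dump straight into mysql, which avoids keeping both the compressed and uncompressed copies on disk:

gunzip -c production_dump.sql.gz | mysql -u root -p favorious_production

)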

I tried this with a smaller database dump file and it worked, but when I try it with the production one it imports data for about two hours and then exits with no error message and with not all of the data in.

Has anyone had this problem before? I am trialling Linode and I really like it so far, but I cannot move the site here unless I manage to move the DB. I will also try moving the MySQL data files, but I am told that this is less likely to work.

Thanks

Georgios

14 Replies

What size Linode are you working with?

After you do the DB dump, make an MD5 checksum; then after you copy it to your new server, make sure the MD5 verifies.
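
For example, using the filename from the commands above (run one on each box and compare the output):

md5sum production_dump.sql.gz    # on the old VPS, right after the dump
md5sum production_dump.sql.gz    # on the Linode, right after the scp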

If it does and you still can't successfully load the DB file, perhaps try replicating it from the old DB server to the new one. Then, once it's up and running on the new DB server, remove the replication relation.

http://dev.mysql.com/doc/refman/5.0/en/replication.html
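
A minimal sketch of what that looks like on the new server; the host, user, password, and binlog coordinates below are placeholders you'd take from SHOW MASTER STATUS on the old box (and the repl user needs REPLICATION SLAVE granted there):

mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='old-vps.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; START SLAVE;"

Once SHOW SLAVE STATUS shows it has caught up, you can stop the slave and drop the replication config.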

Thanks - I will try those solutions as well. I am off on holiday for the next 5 days but will report back to the forum once I have more info.

You could also try splitting the dump file into pieces (say one per table).
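
A rough way to do that split, assuming the dump uses mysqldump's standard "-- Table structure for table" section markers (the header statements before the first table land in chunk_0.sql):

awk 'BEGIN { n = 0 }
     /^-- Table structure for table/ { n++ }
     { print > ("chunk_" n ".sql") }' production_dump.sql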

Also try checking /var/log/.err for errors (note the file may be located elsewhere, but that's where mine goes; it's the MySQL error log).

Are you running out of memory and swapping hard for some reason? (Not likely on a 2880, but it could be a configuration error.) Open another terminal window and run top or htop to monitor CPU and memory usage over time…
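
For example, vmstat shows swap traffic directly; anything consistently non-zero in the si/so columns means you are swapping:

vmstat 5    # one line of stats every 5 seconds; watch the si/so columns
free -m     # snapshot of memory and swap usage, in MB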

Are you running out of disk space? (With a 2880 you only have 128GB.) Is the size of the database 20GB before gzipping or after? What is the size of the live database?
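
Quick checks, assuming the stock Debian/Ubuntu data directory:

df -h                    # free space per filesystem
du -sh /var/lib/mysql    # current size of the MySQL data directory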

Hi

I have checked the MD5 sum and all is fine.

I also checked what's happening during the import. CPU load is around 1 and no swap is used.

I tried importing a version of the file that was created using --skip-extended-insert. This means that it has a separate INSERT statement for every row.
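
(For comparison, the default extended-insert format batches rows, which generally loads much faster; roughly, where mytable is just an illustrative name:

INSERT INTO mytable VALUES (1,'a');
INSERT INTO mytable VALUES (2,'b');        -- one statement per row with --skip-extended-insert

INSERT INTO mytable VALUES (1,'a'),(2,'b');    -- batched, the default

)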

The SQL data file is 20GB unzipped.

I guess all I have left is the replication. I will read more about it and try to set it up.

Thanks

Georgios

You could also try using maatkit (http://www.maatkit.org/). I use it for backups/restores; it will split the database files into smaller chunks for you. You can install it via apt/yum/whatever your package manager is.

Hi Favorious,

> At the same point I am also logged out from my ssh.

Have you tried running the job under nohup or screen? If it's your net connection dropping or your laptop going to sleep, then nohup or screen will save the day.
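
For example, with screen the import keeps running even if your connection drops (dbimport is just an illustrative session name):

screen -S dbimport    # start a named session
mysql -u root -p favorious_production < production_dump.sql
# detach with Ctrl-a d; reattach later with:
screen -r dbimport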

@droidy - Thanks, screen seems to be working fine, as I have been loading data into MySQL for the last 12 hours with no problems.

@obs - thanks for the maatkit tip. I didn't use it for this but it looks like a good tool for various tasks so I will give it a go.

You may already have solved this, but just in case: older versions of MySQL had a default 4GB limit on some types of tables; dumping those tables didn't necessarily include the ALTER statements to increase the maximum number of rows.
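
If that turns out to be the problem, something along these lines raises the MyISAM cap (the table name and row estimates are only illustrative):

mysql -u root -p favorious_production -e "SHOW TABLE STATUS LIKE 'big_table'"    # check Max_data_length
mysql -u root -p favorious_production -e "ALTER TABLE big_table MAX_ROWS=1000000000 AVG_ROW_LENGTH=100"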

Also, which file system are you using? Any limits on file sizes (though if the MD5 is ok, then presumably all's well).

I am using the latest MySQL, as this is a fresh install done a few weeks ago. The filesystem is ext3. I didn't change anything there, since I don't know enough to start messing with filesystems.

The import hasn't finished yet. It's been a day and a half, and I estimate it needs another 2-3 days. The total size of the main MySQL data file will be around 40GB.

Damn, that's a long import. Let us know how it goes; if it fails again I'd definitely go down the replication route.

For many table types you can just copy the raw data, as long as the server is shut down. This should cut it down to less than an hour with an rsync.
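
A rough sketch, assuming the stock /var/lib/mysql data directory on both machines and mysqld stopped on both ends first:

/etc/init.d/mysql stop    # on both servers
rsync -av /var/lib/mysql/ root@favorious.co.uk:/var/lib/mysql/
# then on the Linode: fix ownership if needed and start mysqld
chown -R mysql:mysql /var/lib/mysql
/etc/init.d/mysql start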
