I had a huge MySQL dump that took forever (as in: days) to import, while all I actually needed was the full database structure with some data to use on my development machine.
After trying several suggestions for speeding up slow MySQL dump imports (none of which brought any significant improvement), I chose to import only some rows per table, which was enough for my needs. Since editing a file of that size was not an option, I used a short Ruby script to manage that.
Here is how:
pv huge.dump | ruby -e 'ARGF.each_line { |l| m = l.match(/^INSERT INTO \`.+\` .+ VALUES \((\d+),/); puts l if !m || m[1].to_i < 200_000 || l =~ /schema_migrations/ }' | mysql -uUSER -pSECRET DATABASE_NAME
The command above does the following:
- Streams huge.dump to stdout. You could do that with cat, but I chose to use pv for a nice progress bar.
- The Ruby script checks for every line whether it is an INSERT statement. Lines that are not (the table structure, for example) are always printed.
- For INSERT statements, it extracts the id of the first inserted row (matched by "VALUES (\d+,") and only prints the line if that id is below 200,000. If the line inserts into schema_migrations, we also print it (because we want them all).
- The filtered dump is piped into mysql. Replace USER and SECRET with your database credentials, and DATABASE_NAME with the database you are going to import into.
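If the one-liner is hard to read, here is the same filter written out as a standalone script. The file name keep_some_rows.rb and the LIMIT constant are my additions; the logic is identical to the command above.

  # keep_some_rows.rb: the same filter as the one-liner, spelled out.
  LIMIT = 200_000

  ARGF.each_line do |line|
    # Try to extract the id of the first inserted row from an INSERT statement.
    match = line.match(/^INSERT INTO \`.+\` .+ VALUES \((\d+),/)

    # Print the line if it is not an INSERT at all (table structure etc.),
    # if the first id is below the cutoff, or if it belongs to
    # schema_migrations (we want all of those rows).
    puts line if !match || match[1].to_i < LIMIT || line =~ /schema_migrations/
  end

With that file in place, the pipeline becomes: pv huge.dump | ruby keep_some_rows.rb | mysql -uUSER -pSECRET DATABASE_NAME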
Note the following:
- The same thing could probably be done with sed and awk, but I did not want to go down that road (a rough sketch follows below, for the curious).
- The filter does not work for tables that lack a numeric id column (which you shouldn't do), or where that column is not the first; such tables are simply imported in full.
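For completeness, an awk version of the same filter might look like the sketch below. This is untested and relies on the three-argument match(), which is a gawk extension; the 200,000 cutoff is the same as above.

  pv huge.dump | gawk '
    # Non-INSERT lines and schema_migrations inserts pass through untouched.
    !/^INSERT INTO/ || /schema_migrations/ { print; next }
    # Other INSERTs are kept when no leading id can be found, or when the
    # first inserted id is below the cutoff.
    match($0, /VALUES \(([0-9]+),/, m) == 0 || m[1] + 0 < 200000 { print }
  ' | mysql -uUSER -pSECRET DATABASE_NAME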