mysql, The Command-line Tool
mysqladmin, Administrating a MySQL Server
mysqlbinlog, Executing the Queries from a Binary Log
mysqlcheck for Table Maintenance and Crash Recovery
mysqldump, Dumping Table Structure and Data
mysqlhotcopy, Copying MySQL Databases and Tables
mysqlimport, Importing Data from Text Files
mysqlshow, Showing Databases, Tables, and Columns
mysql_config, Get Compile Options for Compiling Clients
perror, Explaining Error Codes
mysqldump, Dumping Table Structure and Data

Utility to dump a database or a collection of databases for backup or for transferring the data to another SQL server (not necessarily a MySQL server). The dump will contain SQL statements to create the table and/or populate the table.

If you are doing a backup on the server, you should consider using mysqlhotcopy instead. See section 4.8.7 mysqlhotcopy, Copying MySQL Databases and Tables.
shell> mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
If you don't give any tables or use the --databases or --all-databases option, whole databases will be dumped.
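For example, assuming a hypothetical database mydb with tables t1 and t2, the first command dumps only those two tables, while the second dumps the whole database:

shell> mysqldump mydb t1 t2 > two_tables.sql
shell> mysqldump mydb > whole_db.sql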
You can get a list of the options your version of mysqldump supports by executing mysqldump --help.
Note that if you run mysqldump without --quick or --opt, mysqldump will load the whole result set into memory before dumping the result. This will probably be a problem if you are dumping a big database.
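For example, a dump of a large database can stream rows to the output rather than buffering them (the database name here is a placeholder):

shell> mysqldump --quick big_db > big_db.sql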
Note that if you are using a new copy of the mysqldump program and you are going to do a dump that will be read into a very old MySQL server, you should not use the --opt or -e options.
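As a rough sketch (the database name is a placeholder), a dump intended for a very old server can enable useful parts of --opt individually while leaving out --extended-insert:

shell> mysqldump --quick --add-drop-table --add-locks legacy_db > legacy_db.sql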
mysqldump supports the following options:
--add-locks
Add LOCK TABLES before and UNLOCK TABLES after each table dump. (To get faster inserts into MySQL.)
--add-drop-table
Add a DROP TABLE statement before each CREATE TABLE statement.
-A, --all-databases
Dump all the databases. This is the same as --databases with all databases selected.
-a, --all
Include all MySQL-specific create options.
--allow-keywords
Allow creation of column names that are keywords. This works by prefixing each column name with the table name.
-c, --complete-insert
Use complete INSERT statements (with column names).
-C, --compress
Compress all information between the client and the server if both support compression.
-B, --databases
Dump several databases. Note the difference in usage: in this case no tables are given; all name arguments are regarded as database names. USE db_name; will be included in the output before each new database.
--delayed
Insert rows with the INSERT DELAYED command.
-e, --extended-insert
Use the new multiline INSERT syntax. (Gives more compact and faster insert statements.)
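A sketch of the difference, using a hypothetical table t: without this option every row becomes its own statement, while with it the rows are combined into one statement:

INSERT INTO t VALUES (1,'a');
INSERT INTO t VALUES (2,'b');
-- with --extended-insert instead:
INSERT INTO t VALUES (1,'a'),(2,'b');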
-#, --debug[=option_string]
Trace usage of the program (for debugging).
--help
Display a help message and exit.
--fields-terminated-by=...
--fields-enclosed-by=...
--fields-optionally-enclosed-by=...
--fields-escaped-by=...
--lines-terminated-by=...
These options are used with the -T option and have the same meaning as the corresponding clauses for LOAD DATA INFILE. See section 6.4.9 LOAD DATA INFILE Syntax.
-F, --flush-logs
Flush log files in the MySQL server before starting the dump.
-f, --force
Continue even if an SQL error occurs during a table dump.
-h, --host=...
Dump data from the MySQL server on the named host. The default host is localhost.
-l, --lock-tables
Lock all tables before starting the dump. The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables.
Please note that when dumping multiple databases, --lock-tables will lock tables for each database separately, so this option does not guarantee that your tables will be logically consistent between databases. Tables in different databases may be dumped in completely different states.
-K, --disable-keys
/*!40000 ALTER TABLE tb_name DISABLE KEYS */; and /*!40000 ALTER TABLE tb_name ENABLE KEYS */; will be put in the output. This will make loading the data into a MySQL 4.0 server faster, as the indexes are created after all the data have been inserted.
-n, --no-create-db
CREATE DATABASE /*!32312 IF NOT EXISTS*/ db_name; will not be put in the output. The above line will be added otherwise, if a --databases or --all-databases option was given.
-t, --no-create-info
Don't write table creation information (the CREATE TABLE statement).
-d, --no-data
Don't write any row information for the table. This is very useful if you just want a dump of the structure of a table.
--opt
Same as --quick --add-drop-table --add-locks --extended-insert --lock-tables. Should give you the fastest possible dump for reading into a MySQL server.
-pyour_pass, --password[=your_pass]
The password to use when connecting to the server. If no =your_pass part is given, mysqldump will prompt you for a password.
-P, --port=...
The TCP/IP port number to use for connecting to a host.
--protocol=(TCP | SOCKET | PIPE | MEMORY)
Specify the connection protocol to use.
-q, --quick
Don't buffer the query; dump it directly to stdout. Uses mysql_use_result() to do this.
-Q, --quote-names
Quote table and column names within backquotes.
-r, --result-file=...
Direct output to the given file.
--single-transaction
Issue a BEGIN SQL command before dumping data from the server. It is mostly useful with InnoDB tables and the READ COMMITTED transaction isolation level, as in this mode it will dump the consistent state of the database at the time BEGIN was issued, without blocking any applications.
When using this option you should keep in mind that only transactional tables will be dumped in a consistent state; for example, any MyISAM or HEAP tables dumped while using this option may still change state.
The --single-transaction option was added in version 4.0.2. This option is mutually exclusive with the --lock-tables option, as LOCK TABLES already commits a previous transaction internally.
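For example, a sketch of taking a consistent snapshot of an InnoDB database (the database name is a placeholder):

shell> mysqldump --single-transaction inno_db > inno_db.sql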
-S /path/to/socket, --socket=/path/to/socket
The socket file to use when connecting to localhost (which is the default host).
--tables
Overrides the --databases (-B) option; the name arguments that follow are regarded as table names.
-T, --tab=path-to-some-directory
Creates a table_name.sql file, containing the SQL CREATE commands, and a table_name.txt file, containing the data, for each given table. The format of the `.txt' file is made according to the --fields-xxx and --lines-xxx options.
Note: This option only works if mysqldump is run on the same machine as the mysqld daemon. You must use a MySQL account that has the FILE privilege, and the login user/group that mysqld is running as (normally user mysql, group mysql) must have permission to create/write a file at the location you specify.
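A sketch of combining -T with the field-format options (the directory, database, and table names are placeholders; the directory must be writable by the server):

shell> mysqldump -T /tmp/dumpdir --fields-terminated-by=',' mydb mytable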
-u user_name, --user=user_name
The MySQL user name to use when connecting to the server. The default value is your Unix login name.
-O var=option, --set-variable var=option
Set the value of a variable. --set-variable is deprecated since MySQL 4.0; just use --var=option on its own.
-v, --verbose
Verbose mode. Print out more information about what the program does.
-V, --version
Print version information and exit.
-w, --where='where-condition'
Dump only selected records. Note that quotes are mandatory:
"--where=user='jimf'" "-wuserid>1" "-wuserid<1"
-X, --xml
Dump a database as well-formed XML.
-x, --first-slave
Locks all tables across all databases.
--master-data
Like --first-slave, but also prints some CHANGE MASTER TO commands which will later make your slave start from the right position in the master's binlogs, if you have set up your slave using this SQL dump of the master.
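As a sketch, the resulting dump then contains a statement of roughly this shape (the log file name and position are placeholders for the coordinates recorded from the master):

CHANGE MASTER TO MASTER_LOG_FILE='master-bin.001', MASTER_LOG_POS=4;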
-O net_buffer_length=#, where # < 16M
When creating multi-row-insert statements (as with option --extended-insert or --opt), mysqldump will create rows up to net_buffer_length in length. If you increase this variable, you should also ensure that the max_allowed_packet variable in the MySQL server is bigger than net_buffer_length.
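For instance, a sketch of raising the buffer for a dump (the value is arbitrary; the server's max_allowed_packet must be at least this large):

shell> mysqldump -O net_buffer_length=1048576 --extended-insert mydb > mydb.sql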
The most normal use of mysqldump is probably for making a backup of whole databases. See section 4.4.1 Database Backups.
mysqldump --opt database > backup-file.sql
You can read this back into MySQL with:
mysql database < backup-file.sql
or
mysql -e "source /path-to-backup/backup-file.sql" database
However, it's also very useful to populate another MySQL server with information from a database:
mysqldump --opt database | mysql --host=remote-host -C database
It is possible to dump several databases with one command:
mysqldump --databases database1 [database2 ...] > my_databases.sql
If all the databases are wanted, one can use:
mysqldump --all-databases > all_databases.sql
Posted by Jeff Armstrong on Thursday June 27 2002, @8:14am
On my system (mysqldump Ver 8.16 Distrib 3.23.46, for pc-linux-gnu (i686)), mysqldump -u root -p -A only dumps the FIRST database if the user specified is not the same as the current Unix user. In the above example, you must actually be logged in as root to really get all the subsequent databases.
Posted by Karl Katzke on Friday May 17 2002, @6:24am
Note that if you include the -F option (flush logs) with the -A option (all databases), and you are using logging, your logs will be cycled each time mysqldump starts another database. So if you've got six databases, your logs will be flushed six times in rapid succession as mysqldump reads from each one. Not sure if it's a bug or a feature. (MySQL 3.23.46)
Posted by Jon Benson on Friday May 17 2002, @6:24am
If you'd like a quick example of how to dump to a local machine (with the mysqldump binary locally) from a remote server, this is how I did it (Windows machine):
>mysqldump -C -h SERVERADDRESS -u USER -pPASSWORD DATABASE [table] > c:\localpath.sql
Posted by Mike Looijmans on Friday May 17 2002, @6:24am
A workaround for the -F -A problem is to use 'mysql -u root -e "FLUSH LOGS"' before running mysqldump, but this makes the flush and backup less 'atomic'. Not a problem on our system, since no one will be running updates at 2:00am.
Posted by Ulf Küßner on Friday May 17 2002, @6:24am
mysqldump is a bit tricky to use, yet not a bad tool for backups. Especially the pipe right into another server's db is a nice thing. I wonder if it would be possible to add a REPLACE syntax instead of only INSERT; that would make incremental backups right into another db much easier (a new timestamp triggers a backup of an updated row). Now one has to dump a csv file on one server, transfer it to another, and then use LOAD DATA INFILE ... REPLACE INTO TABLE ...
OK, one could also use the master/slave variant; the problem with that is, I keep hearing it is not as easy to use and understand as I'd like. ;-)
Posted by [name withheld] on Saturday August 10 2002, @11:56am
If you want to compress your backups directly, without even hitting your HD, you can try this command:
#mysqldump --opt -u user --password="password" database | bzip2 -c > database.sql.bz2
Posted by [name withheld] on Sunday October 27 2002, @1:33pm
I'm using the following script as a daily cron job, and it works.
#!/bin/sh
date=`date -I`
mysqldump --opt --all-databases | bzip2 -c > /var/backup/databasebackup-$date.sql.bz2
Posted by P J on Friday February 21 2003, @4:17pm
To dump only selected records into a file based on a timestamp field, you can use this (last_modified is the timestamp field). This is used in a shell script run as a cron job to take records that are more than a month old and dump them into an archive file (then the dumped records are deleted).
/yourpath/mysqldump "--where=(month(last_modified)+year(last_modified)*12 < month(current_date)+(year(current_date)*12)-1)" database table > archive.sql
Posted by Dragos Moinescu on Wednesday March 12 2003, @8:04am
To dump a big table (called tblname) from a database (called dbname) with a where condition (given without the WHERE keyword), do the following:
mysqldump --user=username --password=password --where='where condition without WHERE' dbname tblname | gzip -c > tblname.sql.gz
This will dump the table and compress it on the fly. If you do not want the CREATE TABLE stuff, just add --no-create-info before --where.
enjoy
Posted by [name withheld] on Tuesday March 18 2003, @10:18am
We are using InnoDB tables and restoring from a mysqldump is problematic because the dump creates tables with foreign key definitions. There should be an option to create all foreign keys after the inserts.
Posted by george on Wednesday April 2 2003, @3:04am
For big dumps, -q (i.e. no output buffering) looks like a good idea. A "default" dump of a 4GB table quickly ate up 1GB of real memory and started grabbing swap, with nothing to show on disk!
Retrying with -q, the memory usage was trivial and at least I could watch the progress of the dump via the size of the dump file. It certainly felt a lot more comfortable!
Posted by Daniel D'Cotta on Sunday April 20 2003, @10:05pm
Well, after a lot of trial and error, the following works for me:
--where="isPartner='0'&isMaster='0'"
My next question is:
How do I insert only rows based on conditions of other tables?
Posted by Chung Lau on Monday April 21 2003, @11:05am
in response to:
>Posted by [name withheld] on Tuesday March 18 2003, @10:18am
>We are using InnoDB tables and restoring from a mysqldump
>is problematic because the dump creates tables with
>foreign key definitions. There should be an option to
>create all foreign keys after the inserts.
You can turn off foreign key checks while you're importing InnoDB databases:
SET FOREIGN_KEY_CHECKS = 0;
source database.sql;
SET FOREIGN_KEY_CHECKS = 1;
Posted by Jesus Cruz on Saturday April 26 2003, @7:58am
When using mysqldump with the --tab option, make sure you are executing the command as the mysql user; if you execute it as root, you will not be able to get this option to work.
Posted by [name withheld] on Tuesday April 29 2003, @1:21pm
If you're trying to dump a single table and mysqldump interprets the table name as a database name, try using the --tables option:
mysqldump -p -T mySubdir --tables myDB myTable
Posted by Matt Rosin on Friday May 9 2003, @7:39am
Two weird things came up, but I worked around them.
1.) Some of these things may not work depending on the version of the database you are accessing.
I got an SQL syntax error when adding the --lock-tables switch dumping from a remote 3.22.22 server (client was 3.23.56). I was hoping the C-like comments would magically not lock tables but I guess they are also not recognized by this version. Error was:
mysqldump: Got error: 1064: You have an error in your SQL syntax near '32311 LOCAL */,Test READ /*!32311 LOCAL */,cart READ /*!32311 LOCAL */,cookie RE' at line 1 when using LOCK TABLES
My command was:
mysqldump -C -v -u myusername -pmypassword -h thedbhost thedatabase --add-drop-table --add-locks --quick --lock-tables > ./dl.tmp
2.) Since the mysql client does not return an error code, it is not easy to see if it failed (e.g. when running from a perl script). I decided to only copy dumps that were greater than 0 bytes, since nothing is saved when you get an Access Denied message. How strange, then, when I intentionally misspelled the database name to see that the dump does fail, but only after writing a few lines stating the (wrong) name of the database. This I feel is a bug. Not a big one, but just remember, wherever you go, there you are. (--Buckaroo Banzai)
Posted by [name withheld] on Tuesday June 17 2003, @9:27am
This is not a good way to back up all databases:
shell> mysqldump --all-databases
because when trying to restore with:
shell> mysql < dump.sql
I got this error:
ERROR 1050 at line 1040: Table 'columns_priv' already exists
Posted by Rogério Cunha on Tuesday June 24 2003, @12:08pm
I'm also using InnoDB tables and 'mysqldump' should either separate foreign key definitions from the table creation or reorder the table creation accordingly.
Posted by Brian Vicente on Wednesday July 9 2003, @12:07pm
After using the option to split mysqldump's output into files that create the tables and files that contain the data, is there a shortcut to using these files for a restore? Or would one have to import the data files individually after the table creation files are run?