4.4.1 Database Backups

Because MySQL tables are stored as files, it is easy to do a backup. To get a consistent backup, do a LOCK TABLES on the relevant tables, followed by FLUSH TABLES for those tables. See section 6.7.3 LOCK TABLES and UNLOCK TABLES Syntax. See section 4.5.3 FLUSH Syntax. You only need a read lock; this allows other threads to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES is needed to ensure that all active index pages are written to disk before you start the backup.
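
For example, from the mysql client (the database and table names here are placeholders):

    mysql> USE mydb;
    mysql> LOCK TABLES mytbl READ;
    mysql> FLUSH TABLES mytbl;

Keep that connection open while you copy the table files in another shell, then release the lock:

    mysql> UNLOCK TABLES;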

Starting from MySQL 3.23.56 and 4.0.12, BACKUP TABLE will not allow you to overwrite existing files, as this would be a security risk.

If you want to make an SQL level backup of a table, you can use SELECT INTO OUTFILE or BACKUP TABLE. See section 6.4.1 SELECT Syntax. See section 4.4.2 BACKUP TABLE Syntax.
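
For example (the table name and target directory are placeholders; the directory must be writable by the server and must not already contain the backup files):

    mysql> BACKUP TABLE mytbl TO '/path/to/backup/dir';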

Another way to back up a database is to use the mysqldump program or the mysqlhotcopy script. See section 4.8.6 mysqldump, Dumping Table Structure and Data. See section 4.8.7 mysqlhotcopy, Copying MySQL Databases and Tables.

  1. Do a full backup of your databases:
    shell> mysqldump --tab=/path/to/some/dir --opt --all
    
    or
    
    shell> mysqlhotcopy database /path/to/some/dir
    
    You can also simply copy all table files (`*.frm', `*.MYD', and `*.MYI' files) as long as the server isn't updating anything. The mysqlhotcopy script uses this method.
  2. Stop mysqld if it's running, then start it with the --log-bin[=file_name] option. See section 4.9.4 The Binary Log. The binary log file(s) provide you with the information you need to replicate changes to the database that are made subsequent to the point at which you executed mysqldump.
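
For example, you might enable the binary log through the [mysqld] section of my.cnf before restarting the server (the log base name is a placeholder):

    [mysqld]
    log-bin=hostname-bin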

If you have to restore something, try to recover your tables using REPAIR TABLE or myisamchk -r first. That should work in 99.9% of all cases. If myisamchk fails, try the following procedure (this will only work if you have started MySQL with --log-bin, see section 4.9.4 The Binary Log):

  1. Restore the original mysqldump backup, or binary backup.
  2. Execute the following command to re-run the updates in the binary log:
    shell> mysqlbinlog hostname-bin.[0-9]* | mysql
    
    In your case you may want to re-run only certain binlogs, from certain positions (usually you want to re-run all binlogs from the date of the restored backup, possibly excepting some erroneous queries). See section 4.8.4 mysqlbinlog, Executing the queries from a binary log for more information on the mysqlbinlog utility and how to use it. If you are using the update log (which will be removed in MySQL 5.0) you can execute the content of the update log like this:
    shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
    

ls is used to get all the update log files in the right order.
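
For example, to re-run only selected binary logs rather than all of them (the file names are placeholders):

    shell> mysqlbinlog hostname-bin.001 hostname-bin.002 | mysql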

You can also do selective backups with SELECT * INTO OUTFILE 'file_name' FROM tbl_name and restore with LOAD DATA INFILE 'file_name' REPLACE ... To avoid duplicate records, you need a PRIMARY KEY or a UNIQUE key in the table. The REPLACE keyword causes old records to be replaced with new ones when a new record duplicates an old record on a unique key value.
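
For example (table and file names are placeholders; the output file is written on the server host):

    mysql> SELECT * INTO OUTFILE '/tmp/mytbl.txt' FROM mytbl;
    mysql> LOAD DATA INFILE '/tmp/mytbl.txt' REPLACE INTO TABLE mytbl;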

If you run into performance problems making backups on your system, you can solve this by setting up replication and doing the backups on the slave instead of on the master. See section 4.10.1 Introduction.

If you are using a Veritas filesystem, you can do:

  1. From a client (or Perl), execute: FLUSH TABLES WITH READ LOCK.
  2. From another shell, execute: mount vxfs snapshot.
  3. From the first client, execute: UNLOCK TABLES.
  4. Copy files from snapshot.
  5. Unmount snapshot.
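
A rough sketch of that sequence (the snapshot mount command is site-specific and the paths are placeholders):

    mysql> FLUSH TABLES WITH READ LOCK;
    shell> mount vxfs snapshot            # from another shell; exact command depends on your setup
    mysql> UNLOCK TABLES;
    shell> cp -R /path/to/snapshot/mysql-data /path/to/backup/dir
    shell> umount /path/to/snapshot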

User Comments

Posted by Ed McGuigan on Friday May 17 2002, @6:24am

I found that when using the "SELECT * FROM %TABLE% INTO OUTFILE
'%filename%'" syntax, my access privileges to the directory
structure were based on the user under which the MySQL database
server was running, not the mysql client.

A filename without a fully qualified path was
saved in the /usr/local/var directory - the
location for my database files.

Also, if you connect to the database server from
a remote host, the files are created on the
server as opposed to the connecting client.

Posted by [name withheld] on Saturday June 1 2002, @6:52am

Remember - MySQL operates over sockets and so
sending the entire database to the client is what
SELECT is for. If the client wants the data then
they should SELECT it and save it.

Posted by [name withheld] on Friday May 17 2002, @6:24am

Isn't backing up to a local drive what you would usually do?
Imagine dialing in from your laptop to manually run a backup and
having mysqld try to dump many MB of data across the dialup
connection. If you want to back up to a different machine's
drive, just mount the intended destination drive via
SMB/NFS/AFS/whatever and then back up to that drive.

Posted by Kevin Fleming on Wednesday December 18 2002, @5:28pm

We use the following script to back up all databases on our
Linux/MySQL server to a remote Windows machine using Samba. It
works well and, run as a cron job, requires minimal
administration.
------------------
#!/bin/sh

# Define some variables.
BackupServer="SMBServer"
BackupShare="SMBShare"
BackupUser="SMBUser"
BackupPW="SMBPassword"

# Create the temporary copy directory.
mkdir /tmp/mysql

# Get the list of MySQL databases and copy them.
# -mindepth 1 skips the data directory itself, so mysqlhotcopy
# is only called with real database names.
for var in `find /var/lib/mysql/ -mindepth 1 -type d | \
    sed -e "s/\/var\/lib\/mysql\///"`; do
    mysqlhotcopy -q "$var" /tmp/mysql
done

# Tar/gzip the copied data.
date=`date -I`
tar czf /tmp/mysql-$date.tar.gz -C /tmp/ mysql/

# Move the archive to the backup server over SMB.
smbclient //$BackupServer/$BackupShare \
    $BackupPW -d0 -W WORKGROUP -U $BackupUser \
    -c "put /tmp/mysql-$date.tar.gz mysql-$date.tar.gz"

# Delete the temporary files.
rm -fR /tmp/mysql/
rm -f /tmp/mysql-$date.tar.gz
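
For instance, a crontab entry along these lines (the script path is hypothetical) runs the backup nightly at 2 a.m.:

    0 2 * * * /usr/local/bin/mysql_smb_backup.sh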

Posted by Jeremy Tinley on Friday August 30 2002, @12:09pm

The command above, "mysqldump --tab=/path/to/some/dir --opt --full",
doesn't work with .52.

--full is not recognized.
--all-databases, which is part of --opt, can't be used with --tab.

Posted by Christof Kaelin on Friday October 25 2002, @10:11am

The "mysqlhotcopy" does not seem to work with
innodb-tables, as it needs *MYI or other table-files for
copying. only the *FRM of my innoDB-tables were copied.

Posted by jeppe on Wednesday February 19 2003, @2:49am

On large databases, use -q when doing a mysqldump. Otherwise, the mysqldump process will buffer the data, and you'll end up with a larger process than need be.
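
For example (the database name and output file are placeholders):

    shell> mysqldump -q --opt mydb > mydb-dump.sql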

Posted by jeppe on Wednesday February 19 2003, @2:53am

One more thing. You can speed up restores by dumping the table definitions and the table data into different files. When your database needs a restore, drop the database, recreate the table definitions, drop all indexes, load the data, and recreate the indexes using "alter table" syntax. Be sure to add all indexes for a table in one ALTER TABLE statement, since altering indexes involves temporary tables.
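
For example, the definitions and the data can be dumped into separate files like this (the database name and file names are placeholders):

    shell> mysqldump --no-data mydb > mydb-schema.sql
    shell> mysqldump --no-create-info mydb > mydb-data.sql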

Posted by Subhasis Ray on Wednesday April 9 2003, @10:30pm

For moderate-sized databases with tables > 1.5 GB, a mysqldump usually tends to put all other queries on that table in a locked state. A better option might be to back up the .MYD and .frm files and compress them. To restore, unpack the files and issue "restore table %tablename% from '%directoryname%'" at the mysql prompt. This is quite fast.
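
For example (the table name and directory are placeholders):

    mysql> RESTORE TABLE mytbl FROM '/path/to/backup/dir';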

Posted by John Lin on Saturday April 19 2003, @1:15pm

At the top of this document, it claims that:
1. Do a full backup of your databases:
shell> mysqldump --tab=/path/to/some/dir --opt --all

This is not true. When I did this, I got the error message:
mysqldump: --databases or --all-databases can't be used with --tab.

Instead, do something like:
shell> mysqldump --tab=/path/to/some/dir --opt my_database

(I am using version: mysql Ver 11.18 Distrib 3.23.52, for pc-linux-gnu (i686))
