7.5.12.3 Defragmenting a Table

If there are random insertions into or deletions from the indexes of a table, the indexes may become fragmented. By fragmentation we mean that the physical ordering of the index pages on the disk is not close to the index (alphabetical) ordering of the records on the pages, or that there are many unused pages in the 64-page blocks that were allocated to the index.

It can speed up index scans if you periodically use mysqldump to dump the table to a text file, drop the table, and reload it from the dump. Another way to do the defragmenting is to ALTER the table type to MyISAM and back to InnoDB again. Note that a MyISAM table must fit in a single file on your operating system.
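The dump-and-reload cycle described above might look like the following sketch, assuming a hypothetical database mydb and table mytab; the exact commands depend on your setup:

```sql
-- Dump the table to a text file (run from the shell):
--   shell> mysqldump mydb mytab > mytab.sql
-- Then drop the table and reload it from the dump inside the mysql client:
DROP TABLE mytab;
SOURCE mytab.sql;

-- Alternatively, rebuild the table by converting it to MyISAM and back.
-- (TYPE= is the syntax of this manual's era; later servers use ENGINE=.)
ALTER TABLE mytab TYPE=MYISAM;
ALTER TABLE mytab TYPE=INNODB;
```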

If the insertions to an index are always ascending and records are deleted only from the end, then the file space management algorithm of InnoDB guarantees that fragmentation in the index will not occur.

User Comments

Posted by Ganesan Sriram on Wednesday March 5 2003, @6:37pm

DROP INDEX followed by CREATE INDEX should defragment the index (and help with fast index scans). The procedure described seems to be for "defragmenting" the table data itself and benefits table scans.
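Rebuilding only the index, as this comment suggests, might look like the following sketch, assuming a hypothetical table mytab with a secondary index idx_name on column name:

```sql
-- Drop and re-create a fragmented secondary index;
-- the table data itself is left untouched.
DROP INDEX idx_name ON mytab;
CREATE INDEX idx_name ON mytab (name);
```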

Posted by Sebastiaan Kamp on Monday March 10 2003, @12:49am

In a perfect world, mysqldump would export the 'CREATE TABLE ...' definitions in such an order that referenced tables are created before the tables whose foreign key constraints refer to them. This way, error 1005 / errno 150 would be avoided.

Since we don't (yet) live in a perfect world, you should use "SET foreign_key_checks=0" before importing a dump to avoid problems.
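The workaround might be applied like this sketch, assuming the dump is in a hypothetical file dump.sql:

```sql
-- Disable foreign key checking for this session so that tables
-- can be created in any order during the import:
SET foreign_key_checks=0;
SOURCE dump.sql;
-- Re-enable checking once the import is done:
SET foreign_key_checks=1;
```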
