Here is an explanation of what is supported and what is not:
AUTO_INCREMENT, LAST_INSERT_ID(), and TIMESTAMP values are replicated correctly.
The USER() and LOAD_FILE() functions are replicated without changes and will thus not work reliably on the slave. This is also true for CONNECTION_ID() in slave versions strictly older than 4.1.1.
The new PASSWORD() function in MySQL 4.1 is replicated correctly from 4.1.1 masters; your slaves must be 4.1.0 or above to replicate it. If you have older slaves and need to replicate PASSWORD() from your 4.1.x master, you should start your master with the --old-passwords option.
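For example, a minimal sketch of enabling this in the master's configuration file (the same thing can be done with the --old-passwords command-line option):
# my.cnf on the 4.1.x master
[mysqld]
old-passwords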
The sql_mode, FOREIGN_KEY_CHECKS, and table_type variables are not replicated.
You must use the same character set (--default-character-set) on the master and the slave. If not, you may get duplicate-key errors on the slave, because a key that is regarded as unique in the master character set may not be unique in the slave character set.
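For example, a sketch of matching configuration files (latin1 is just an example character set):
# my.cnf on both the master and the slave
[mysqld]
default-character-set = latin1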
If the slave stops in the middle of a BEGIN/COMMIT block, it will later restart at the beginning of the BEGIN block. This issue is on our TODO and will be fixed in the near future.
FLUSH, ANALYZE, OPTIMIZE, and REPAIR commands are not stored in the binary log and are therefore not replicated to the slaves. This is not normally a problem, as these commands don't change anything. It does mean, however, that if you update the MySQL privilege tables directly without using the GRANT statement and you replicate the mysql privilege database, you must do a FLUSH PRIVILEGES on your slaves to put the new privileges into effect. Also, if you use FLUSH TABLES when renaming a MyISAM table involved in a MERGE table, you will have to issue FLUSH TABLES manually on the slave. Since MySQL 4.1.1, these commands are written to the binary log (except FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE, and FLUSH TABLES WITH READ LOCK) unless you specify NO_WRITE_TO_BINLOG (or its alias LOCAL); for a syntax example, see section 4.5.3 FLUSH Syntax.
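For example, a sketch of keeping a FLUSH out of the binary log on a 4.1.1+ master, and of applying privileges by hand on a slave after the master's grant tables were edited directly:
mysql> FLUSH NO_WRITE_TO_BINLOG TABLES;   # not written to the binary log
mysql> FLUSH LOCAL TABLES;                # same thing, using the LOCAL alias
# and on each slave, after the mysql tables were changed directly on the master:
mysql> FLUSH PRIVILEGES;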
Issue STOP SLAVE, then check the Slave_open_temp_tables variable to see whether it is 0; if so, issue mysqladmin shutdown. If the number is not 0, restart the slave threads with START SLAVE and see whether you have better luck next time. We have plans to fix this in the near future.
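As a sketch, the check described above looks like this (on older servers you may need a plain SHOW STATUS and then look for Slave_open_temp_tables in the output):
mysql> STOP SLAVE;
mysql> SHOW STATUS LIKE 'Slave_open_temp_tables';
# if the value is 0:
shell> mysqladmin shutdown
# otherwise:
mysql> START SLAVE;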
A slave can itself act as a master to other slaves if you run it with log-slave-updates enabled. Note, however, that many queries will not work right in this kind of setup unless your client code is written to take care of the potential problems that can happen from updates that occur in a different sequence on different servers. This means that you can do a setup like the following:
A -> B -> C -> A
Thanks to server IDs, which are encoded in the binary log events, A will know when the event it reads was originally created by A, so A will not execute it and there will be no infinite loop. But this circular setup will work only if you perform non-conflicting updates between the tables. In other words, if you insert data in A and C, you should never insert a row in A that may have a conflicting key with a row inserted in C. You should also not update the same rows on two servers if the order in which the updates are applied matters.
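A minimal configuration sketch for such a ring follows; the host name, user, and password are placeholders, and B and C would be configured the same way, each with its own unique server-id and with master-host pointing to the previous server in the ring:
# my.cnf on server A
[mysqld]
server-id        = 1
log-bin
log-slave-updates
master-host      = serverC.example.com
master-user      = repl
master-password  = secret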
START SLAVE.
If the slave loses the connection to the master, it will retry the connection every master-connect-retry (default 60) seconds. Because of this, it is safe to shut down the master and then restart it after a while. The slave will also be able to deal with network connectivity outages. However, the slave will notice the network outage only after receiving no data from the master for slave_net_timeout seconds. So if your outages are short, you may want to decrease slave_net_timeout; see section 4.5.7.4 SHOW VARIABLES.
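For example, a sketch of the relevant slave settings (the values are illustrative only; on 3.23 servers the timeout may need the set-variable = name=value form):
# my.cnf on the slave
[mysqld]
master-connect-retry = 60
slave_net_timeout    = 300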
You can tell the slave SQL thread to ignore certain errors with the slave-skip-errors option, starting in Version 3.23.47.
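For example (1062 is the duplicate-key error code; which errors to skip is up to you):
# my.cnf on the slave
[mysqld]
slave-skip-errors = 1062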
BEGIN/COMMIT segment updates to the binary log may be out of sync if some thread changes a non-transactional table before the transaction commits. This is because the transaction is written to the binary log only when it is committed: it is written at COMMIT, or not written at all if you use ROLLBACK. You have to take this into account when you update both transactional and non-transactional tables in the same transaction and you are using binary logging for backups or replication.
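The following sketch (with assumed table names) shows how the ordering can go wrong:
# connection 1:
mysql> BEGIN;
mysql> INSERT INTO innodb_tbl VALUES (1);   # runs first, written to the binary log only at COMMIT
# connection 2, in the meantime:
mysql> UPDATE myisam_tbl SET c = c + 1;     # written to the binary log immediately
# connection 1:
mysql> COMMIT;
# In the binary log, the MyISAM update now appears before the whole
# transaction, although it was executed after the INSERT.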
The following list describes problems in 3.23 that are fixed in 4.0:
LOAD DATA INFILE will be handled properly as long as the file still resides on the master server at the time of update propagation. LOAD DATA LOCAL INFILE will be skipped.
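For example (the paths and table name are assumptions):
# replicated as long as the file is still on the master at propagation time:
mysql> LOAD DATA INFILE '/var/lib/mysql/db/data.txt' INTO TABLE t;
# not replicated (skipped by the slave) in 3.23:
mysql> LOAD DATA LOCAL INFILE '/home/me/data.txt' INTO TABLE t;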
RAND() in updates does not replicate properly. Use RAND(some_non_rand_expr) if you are replicating updates with RAND(). You can, for example, use UNIX_TIMESTAMP() as the argument to RAND(). This is fixed in 4.0.
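For example (table and column names are hypothetical):
mysql> UPDATE t SET c = RAND(UNIX_TIMESTAMP());   # seeded, so master and slave compute the same value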
Posted by [name withheld] on Wednesday December 18 2002, @5:28pm
FYI
I could not get replication to work between
3.23.53 (master) and 3.23.41 (slave)
The symptoms were:
Very, very slow replication. It took about 30 seconds
before a simple update got replicated.
About every 30 seconds, I got these entries in the
replication slave's log file:
020820 13:46:51 Error reading packet from server:
(server_errno=1159)
020820 13:47:51 Slave: Failed reading log event,
reconnecting to retry, log 'dragon-mount-bin.004'
position 137
020820 13:47:51 Slave: reconnected to master
'repl@192.168.1.11:3306',replication resumed in
log 'dragon-mount-bin.004' at position 137
By simply upgrading the slave to 3.23.53 (latest),
it worked just fine.
Posted by [name withheld] on Tuesday September 10 2002, @2:05pm
Got exactly the same error as above (error reading
packet every 30 sec), using v3.23.51 for both
master and slave.
Switching to v3.23.52 (latest for Solaris) solved
the problem.
Posted by [name withheld] on Wednesday November 13 2002, @5:43pm
I created a table, loaded part of it from local
files, and part of it from update statements.
When I looked at what was replicated, of course,
only the lines that were manually inserted
propagated. There were about 130 rows from
load, and 4 from insert. When I read the
limitations, I realized what was going on - I had
used local files and not copied the files into the
mysql directory. So I created a second table
manually, and I populated it with an insert into
xxx (f1, f2) select * from oldtable;
And the same lines were replicated. The 4 rows
that were propagated were the same ones. I
can only assume that the update / select was
propagated as a command and not the data.
I tried altering the data (integer+0, string concat
(field,'')) - again, the propagation was via
command, and not via value, so the same 4 lines
ended up in the target table.
I ended up having to propagate the data via a
file dump from source/ftp(scp)/restore via piping
the dump into mysql.
I did not actually try it, but it is possible that I
could have fixed the issue simply by dumping
and restoring the tables (and recreating the
data) on the source machine. Recreating the
data with update statements should have
worked.
Posted by [name withheld] on Tuesday April 1 2003, @2:20pm
With replication, LOAD DATA INFILE does not replicate using strictly the filename, e.g.
mysql> load data infile 'a_table.txt' ...
(a row of default values usually appears on the slaves)
Use of the full path, however, does replicate, e.g.
mysql> load data infile '/var/lib/mysql/a_db/a_table.txt' ...
Either way, data is loaded on the master properly. No cross-database access is involved.
Master is 3.23.49. Some slaves are 3.23.49, some are 3.23.54.
Posted by shashi kumar on Sunday May 18 2003, @10:31pm
I am getting the following error message in the slave's .err file. I have given the REPLICATION SLAVE privilege on the master. Can anybody give me the detailed steps for replication and a solution for this error?
030519 0:13:43 Slave I/O thread: connected to master 'SHASHI@203.196.160.20:3306', replication started in log '1.001' at position 179
030519 0:13:43 While trying to obtain the list of slaves from the master '203.196.160.20:3306', user 'SHASHI' got the following error: 'Access denied. You need the REPLICATION SLAVE privilege for this operation'
030519 0:13:43 Slave I/O thread exiting, read up to log '1.001', position 179
Thanks
Posted by netoli netoli on Thursday May 22 2003, @6:50am
I had the same error. My replication user had all rights (at least for testing), but the slave still complains about insufficient privileges.
Posted by Peter Burns on Wednesday May 28 2003, @2:09am
Note that the GRANT command always replicates, regardless of whether you're replicating the mysql privilege tables.
Posted by Vasu Narasimhan on Wednesday June 4 2003, @8:24pm
I initially got the same "Access denied" error with replication, but running the mysql_fix_privilege_tables script fixed it. (Please note that in my case I had upgraded the master from 3.23 to 4.0.13)