MySQL Replication (Master to read-only Slave)

We are using XWiki 11.10.4 and MySQL 8.0.21, each running in Docker containers on Ubuntu 18. We have configured one Master system and one Slave system on servers in different physical locations (eventually there will be 6 or more Slaves), with the intention that the Slave be a read-only copy of the Master. Both systems were installed following the Docker XWiki image instructions and each was brought up (initialized) independently of the other; a MySQL backup was then taken on the Master, copied to the Slave, restored there, and replication was started on the Slave. MySQL is configured with GTID replication enabled for just the XWiki database, and it is working properly with no errors.
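For reference, the replication setup looks roughly like the sketch below (server IDs, hostnames, credentials and the schema name are placeholders, not our exact values):

```
# my.cnf on the Master
[mysqld]
server-id                = 1
gtid_mode                = ON
enforce_gtid_consistency = ON
binlog_format            = ROW

# my.cnf on the Slave
[mysqld]
server-id                = 2
gtid_mode                = ON
enforce_gtid_consistency = ON
replicate_do_db          = xwiki   # replicate only the XWiki schema
read_only                = ON
```

After restoring the Master backup on the Slave, replication was pointed at the Master and started (MySQL 8.0.21 syntax):

```
CHANGE MASTER TO
  MASTER_HOST = 'master.example.org',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '***',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```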

Our problem is that when changes are made on the Master (e.g. editing the Sandbox page), the content is replicated to the Slave MySQL database but does not show in XWiki on the Slave. Restarting the Slave XWiki container resolves the problem, and we can then see the updated content on the Slave.

This appears to be a caching issue on the Slave XWiki container. How do we configure XWiki so that the cache is refreshed as pages are replicated from the Master to the Slave?

Hi, you need to use the clustering feature of XWiki. See https://www.xwiki.org/xwiki/bin/view/Documentation/AdminGuide/Clustering/ and https://www.xwiki.org/xwiki/bin/view/Documentation/AdminGuide/Clustering/DistributedEventClusterSetup/

This feature allows invalidating the document caches when changes are made on one wiki instance.
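From memory, the core of the setup is a couple of properties in xwiki.properties (the channel name below is just an example; see the documentation above for the matching JGroups channel configuration):

```
# xwiki.properties -- enable distributed events so caches are invalidated across instances
observation.remote.enabled=true
# every member of the cluster joins the same JGroups channel; "xwikicluster" is an example
# name matching a channel file under WEB-INF/observation/remote/jgroups/
observation.remote.channels=xwikicluster
```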

If you wish to also cluster the MySQL DB, you can, but from the point of view of the XWiki instances they all need to point to the same DB.

Thank you for the reply.

I looked at clustering but ruled it out as an option because it uses the same database for all members of the cluster. Our goal is for each remote XWiki system to be 100% standalone, since the remote sites (ships) have frequent network (satellite) outages and we must have the XWiki content available 100% of the time, regardless of whether there is network access.

After reading a bit more about clustering, are you saying I should let MySQL replication handle moving the data from the Master to the Slave database, but use XWiki clustering (event distribution) to notify the cluster members when the data has changed so they refresh their document caches?

If that is what you are suggesting, how do I keep the MySQL replication and the XWiki event distribution in sync? I.e.: what if the XWiki event distribution happens before the MySQL replication? It would notify XWiki to refresh a page that hasn't yet changed; the page then changes later, so the changes are not in the cache.

Is there a simpler approach I'm not seeing with clustering or using the embedded HSQLDB database? Am I making this harder than it should be by using MySQL replication?

You must use XWiki clustering; there's no other way (short of heavy custom development). All XWiki instances must connect to the same "virtual" MySQL instance (i.e. a DB change must be visible to all XWiki instances). Then you need to cluster MySQL somehow. I've never done it, so I don't know MySQL's capabilities; maybe it doesn't exist (I'm pretty sure it exists for Oracle, for example).

MySQL replication will not help in this scenario, as its primary purpose is to keep you running in case a DB server fails: it synchronously or asynchronously replicates the data from the master to the slaves, and slaves are read-only. There are concepts of active-active masters, but those are intended for systems with very high-speed, reliable links and for a completely different use case. This is at least what I know about it. The predominant solution is Percona for MySQL/MariaDB (also from what I know; there might be others).
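For completeness, on a plain MySQL replica the read-only behaviour is enforced roughly like this (MySQL 8.0 syntax, sketch only):

```
-- on the slave: block writes from normal clients, and from privileged ones too
SET GLOBAL read_only = ON;
SET GLOBAL super_read_only = ON;

-- check that replication is running and how far behind it is
SHOW SLAVE STATUS\G
```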

Anyway, similar thread:

HTH

I want the Slave to be read-only, so MySQL one-way replication suits my use case perfectly well. The issue we are facing is that the caching in XWiki on the Slave prevents the replicated page updates from displaying, because the cache still contains the old pages.

There are many challenges with this design. If you point the satellite site to the local instance, its users will never be able to update anything, or they would need to use a different URL for the local read-only instance and the writable main instance, which is usually difficult for users. Also, I don't know how XWiki behaves if the underlying DB is read-only! You could use a reverse proxy with HA capabilities to solve the first challenge: point to the main wiki as long as there is a link, and fall back to the local instance if the main is no longer reachable (see the sketch below). If you monitor this, you could start/restart the Java Servlet Container once you have to fall back to the local (read-only) instance. Most HA proxies have hooks or triggers for such an event, and you could have those start/restart/stop the local XWiki.
Sure, this will cause a short downtime until the Java Servlet Container has started, but that is usually a matter of 20-40 seconds. Again, there will be the problem of the read-only database during that time, but you have probably tested this already.
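As an illustration of the proxy idea (a sketch only; hostnames, ports and the health-check URL are examples), an HAProxy backend could look like this:

```
# haproxy.cfg (sketch): prefer the main wiki, fall back to the local read-only instance
backend xwiki
    option httpchk GET /xwiki/bin/view/Main/
    server main  mainwiki.example.org:8080 check
    server local 127.0.0.1:8080 check backup
```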
PS: I assume the caching is part of Hibernate, i.e. this is difficult to address.