I’m currently looking at backing up all of our pages as HTML, but it appears I can only use the export URL while logged in. Is there any way I can automate this process?
We currently back up our wiki using VMware snapshots, but we have a requirement to browse our documentation in a disaster recovery situation. Ideally we’d avoid a restore process and have something that gives us direct access to the raw content.
The short answer is no, since I don’t think we have a REST endpoint for doing an HTML export.
One idea might be to add a scheduler job that executes regularly and performs the export by calling the export API. The problem is that it won’t work as-is, since the scheduler’s execution context will be missing things in context.request.
For fun, here’s an example of the Export API in a wiki package:
Sorry, this is the first time I’ve ever used Groovy scripts, so please bear with me. If I use the Scheduler application, can I specify where the file is output?
Thanks Vincent for all your help, but unfortunately that doesn’t seem to work with our setup; the curl command just creates a broken 1 KB zip file.
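One way to see what that 1 KB file actually contains: a real zip archive starts with the two bytes `PK`, while a failed authentication usually leaves you with an HTML login or error page saved under a `.zip` name. A small sketch (the function name and filenames are just examples, not from the thread):

```shell
# Sketch: tell whether a downloaded "zip" is a real archive or an HTML
# error/login page saved under the wrong name. Filenames are examples.
check_export() {
  f="$1"
  # Every zip archive starts with the two bytes "PK".
  if [ "$(head -c 2 "$f")" = "PK" ]; then
    echo "looks like a zip archive"
  else
    echo "not a zip - first bytes follow (probably an HTML login/error page):"
    head -c 120 "$f"; echo
  fi
}
```

If it prints HTML, the curl request was most likely served the login page, i.e. the credentials were not accepted for that URL.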
curl does let me use the export commands I originally mentioned, though, along with username and password parameters. I have managed to download some of our pages, but I can’t download them all. I just need to get a better understanding of the commands and I’ll figure out some way of doing it.
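For what it’s worth, the curl approach can be wrapped in a small cron-friendly script. This is only a sketch under assumptions: the URL pattern assumes XWiki’s standard `/bin/export` action with `format=html` and a `pages=` wildcard (`%25` is a URL-encoded `%`, matching every page in the space), and the host, space names, credentials, and output directory are all placeholders, so verify them against your version:

```shell
#!/bin/sh
# Hypothetical backup sketch -- host, spaces, credentials, and paths below
# are placeholders, not values from this thread.

# Build the HTML-export URL for one space. Assumes XWiki's export action
# accepts format=html, an archive name, and a pages= pattern, where %25
# (an URL-encoded "%") acts as a wildcard for every page in the space.
export_url() {
  base="$1"; space="$2"
  echo "$base/bin/export/$space/WebHome?format=html&name=$space&pages=$space.%25"
}

# Download one space's export as a zip (needs network access and
# credentials that are allowed to view the pages).
backup_space() {
  base="$1"; space="$2"; user="$3"; pass="$4"; outdir="$5"
  curl -fsS -u "$user:$pass" \
    -o "$outdir/$space-$(date +%F).zip" \
    "$(export_url "$base" "$space")"
}

# Example cron-driven usage (uncomment and adapt to your setup):
# for space in Main Documentation; do
#   backup_space "https://wiki.example.com/xwiki" "$space" \
#     backup_user secret /var/backups/wiki
# done
```

Running it from cron on the wiki host (or any machine that can reach it) would give you a dated zip per space without touching the VMware snapshots.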