We plan to use XWiki in the context of “Electronic Lab Notebooks” (ELN) and would like to add a service with our own code (running on an additional VM) that hashes and timestamps (RFC 3161) pages that have been newly created or modified.

We have written a simple tool in Python (using the requests package) that retrieves a page “alpha.beta.WebHome” as “…/rest/wikis/xwiki/spaces/alpha/spaces/beta/pages/WebHome” (with the “attachments” and “objects” options set to true). The “content” part of the returned XML seems to correspond to the XAR version of the page. This way we get the attachments as a list of URLs, rather than having them embedded in the response (as in a XAR export).

So, is iterating over those attachment URLs and downloading each attachment individually the best way of retrieving all the data of one XWiki page (apart from some object data we are currently not interested in)? Thanks!
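For context, here is a minimal sketch of what our tool does: fetch the page XML, collect the attachment download URLs, and hash each attachment for later submission to an RFC 3161 TSA. The element name `xwikiAbsoluteUrl`, the `http://www.xwiki.org` namespace, and the server URL are assumptions based on the responses we have seen from our instance, so please correct us if the REST model differs:

```python
import hashlib
import urllib.request
import xml.etree.ElementTree as ET

# Namespace used by the XWiki REST XML model (assumption based on the
# responses we have seen; adjust if your instance differs).
XWIKI_NS = "http://www.xwiki.org"


def attachment_urls(page_xml: str) -> list[str]:
    """Extract the absolute download URL of every attachment from a
    page representation fetched with ?attachments=true."""
    root = ET.fromstring(page_xml)
    # We assume <xwikiAbsoluteUrl> holds the direct download link.
    return [el.text for el in root.iter(f"{{{XWIKI_NS}}}xwikiAbsoluteUrl")]


def sha256_of(data: bytes) -> str:
    """Hash the payload that will later be sent to the RFC 3161 TSA."""
    return hashlib.sha256(data).hexdigest()


def download(url: str) -> bytes:
    """Fetch one resource (authentication omitted for brevity)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()


# Hypothetical instance URL; replace with your own server.
PAGE_URL = ("https://wiki.example.org/rest/wikis/xwiki"
            "/spaces/alpha/spaces/beta/pages/WebHome"
            "?attachments=true&objects=true")

# Usage (requires network access, so not executed here):
#   page_xml = download(PAGE_URL).decode("utf-8")
#   for url in attachment_urls(page_xml):
#       print(url, sha256_of(download(url)))
```

We would hash the page “content” together with each attachment this way; if there is a single REST call that returns page content plus attachment bytes in one response, that would obviously be preferable.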