I’m working on setting up the LLM Application on an XWiki 17.9.0 instance. I’ve installed:
- LLM Application (BETA) 0.7.2
- Token-based authentication for the LLM Application (BETA) 0.7.2
- LLM Internal Inference Server (BETA) 0.7.2
- Index for the LLM Application (BETA) 0.7.2
I’m using gpt-4o as the model and have a collection covering most of the wiki’s content pages.
At first I was getting wiki-specific responses for a few prompts, including sources that referenced wiki pages, but when I returned later this RAG behavior was gone.
Any thoughts on how I might debug this?
Thanks!