I’d like to start this thread to gather ideas about what XWiki could do with some AI (machine learning, rule engine, etc.).
Do you have ideas about places in XWiki where you can imagine AI making improvements?
Let me start:
Idea 1: Have a Panel that would recommend pages to check out, based on your past navigation history or, perhaps better, on the topics contained in pages you edited.
Idea 2: Parse page content (the text), extract data present in a set of pages, and automatically create an application based on that. For example, recognize that in a given space all pages mention a date, a user name, an image, etc., and propose an xclass for it. Basically a kind of wizard for AWM that would pre-create an AWM app when given a set of pages and maybe some hints about what it should find.
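To make Idea 2 a bit more concrete, here is a minimal sketch (in Python) of the detection step: scan a set of pages and keep the field types that recur in all of them, as candidate xclass properties. The patterns, field names, and page contents below are all made-up illustrations, not anything XWiki actually does.

```python
import re

# Hypothetical patterns for field types we might recognize in page text.
FIELD_PATTERNS = {
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "user": re.compile(r"@\w+"),
    "image": re.compile(r"\bimage:\S+", re.IGNORECASE),
}

def suggest_xclass_fields(pages):
    """Return the field types found in every page of the set."""
    suggested = []
    for name, pattern in FIELD_PATTERNS.items():
        if all(pattern.search(text) for text in pages):
            suggested.append(name)
    return suggested

pages = [
    "Meeting on 2023-05-04 with @alice, see image:room.png",
    "Review on 2023-06-01 by @bob, screenshot image:board.png",
]
print(suggest_xclass_fields(pages))  # → ['date', 'user', 'image']
```

A real wizard would of course need fuzzier matching (date formats vary, user mentions differ per wiki), but the "intersection of detected fields" idea is the core of it.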
Any other ideas?
The goal is to list as many ideas as possible (brainstorming mode) and then see if there are some good ones. Maybe even propose some of them as research projects (European research projects, for example).
Let’s start!
In your wildest dreams, what would you like to see related to AI in XWiki?
Idea 3 (similar to Idea 2): When you’re on a page with a largish list, have XWiki suggest transforming it into structured content and generating a small app for it, replacing the list with a LiveTable. In short, this is AWM applied to existing content, so that the structure is pre-filled as much as possible and entries are automatically created from the existing content.
Idea 4: Based on how the user navigates the wiki, edits content, writes scripts, etc., compute a proficiency level for the user (from 0% novice to 100% expert). Display it in the user profile. Allow users to see on their profile how to gain proficiency (listing things they can do to increase it). With the proficiency level we can then imagine several things:
Tune the Tips Panel to display tips matching the proficiency level of the user
Hide/Show UI elements depending on the proficiency level of the user
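As a rough illustration of Idea 4, a proficiency level could start out as nothing fancier than a weighted sum of capped activity counts. The signals, weights, and caps below are pure assumptions for the sake of the sketch:

```python
# Hypothetical activity signals with illustrative weights and caps.
WEIGHTS = {"pages_viewed": 0.1, "pages_edited": 0.3, "scripts_written": 0.6}
CAPS = {"pages_viewed": 200, "pages_edited": 100, "scripts_written": 20}

def proficiency(activity):
    """Return a 0-100 proficiency score from raw activity counts."""
    score = 0.0
    for signal, weight in WEIGHTS.items():
        # Cap each signal so one kind of activity cannot dominate.
        ratio = min(activity.get(signal, 0), CAPS[signal]) / CAPS[signal]
        score += weight * ratio
    return round(100 * score)

novice = {"pages_viewed": 20}
expert = {"pages_viewed": 500, "pages_edited": 150, "scripts_written": 25}
print(proficiency(novice), proficiency(expert))  # → 1 100
```

The interesting part would then be tuning the weights (or learning them) so the score correlates with what we actually mean by "expert".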
Idea 5: Suggest interesting extensions to install based on existing page content/layout and based on existing installed extensions.
Yes, I like that one too. To elaborate: I worked on this more than 10 years ago using AliceBot (via ProgramD: GitHub - noelbush-xx/programd: AIML-based conversational bot server). AliceBot works with content defined in AIML. It provides some preexisting AIML files for conversational language, and you could add your own expert grammar on top of them. My idea back then was to be able to add AIML content through structured content written in wiki pages.
The next step (what you suggest) would be to not have to create the structured content manually, but to extract it automatically from wiki page content. Nice and challenging, I like it!
Idea 8: Help Assistant: Depending on how much time a user spends on a task, provide ways to help them complete it: for example, showing a tour of the feature, or tips & tricks that would simplify the task or make it much faster.
Idea 9: Suggest Applications: Depending on the ‘usage’ of the wiki, identify the flavor / use case of the wiki and suggest applications the user would be interested in. We could compute usage percentages per wiki, e.g. a “development” wiki composed of 30% “Development Flavor”, 40% “Documentation Flavor” and 30% “Public Website”, whose admin might be interested in a “Release Notes” or “Social Login” application.
Also, depending on what other wikis of a particular type install on top of the Standard Flavor, the definitions and recommendations would self-adapt according to how users apply the suggestions. This would make XWiki the most adaptive extensible wiki out there.
Also, unused applications could be recommended for uninstallation.
Idea 10: Suggest things to watch. For example, propose watching a whole space when the user is already watching most of what it contains (this is not only good for the user, it’s also good for performance).
Idea 11: Similar pages panel: a panel indicating pages that seem very similar to the current page (a bit like SuggestiMate for Jira). Very useful to clean up the mess and find duplicated content.
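A classic baseline for "similar pages" is TF-IDF cosine similarity over the page text. Here is a self-contained sketch (page names and contents are invented; a real implementation would likely lean on the Solr index instead):

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(pages):
    """Map each page name to a sparse {term: tf-idf weight} vector."""
    docs = {name: Counter(tokenize(text)) for name, text in pages.items()}
    n = len(docs)
    df = Counter(term for counts in docs.values() for term in counts)
    return {
        name: {t: tf * math.log(1 + n / df[t]) for t, tf in counts.items()}
        for name, counts in docs.items()
    }

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def most_similar(current, pages):
    """Return the page most similar to `current` by cosine similarity."""
    vectors = tfidf_vectors(pages)
    return max((cosine(vectors[current], vectors[n]), n)
               for n in pages if n != current)[1]

pages = {
    "ReleaseNotes10": "release notes for version ten bug fixes",
    "ReleaseNotes11": "release notes for version eleven bug fixes",
    "HolidayPolicy": "how to request holidays in the company",
}
print(most_similar("ReleaseNotes10", pages))  # → ReleaseNotes11
```

For duplicate detection you would threshold the similarity score rather than just taking the top result.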
Idea 12: Migration Wizard, related to Idea 2 + Idea 3 (+ a bit of 5 + 9): It would be nice to give XWiki a URL (the current website of your organization) and have XWiki run a ‘scanner’ over it, listing the structure (wikis, spaces, pages) and the applications your organization would need in order to move to and start using XWiki (depending on what it finds in the current content, it would match that against existing apps). It would detect the flavor of the website and whether there are matching existing apps, or propose creating new apps to hold your content. After this ‘report’, users could follow a wizard to create and transform (import and migrate) their current content into a new XWiki wiki, where the information is structured and better organized.
Idea 13 (close to Idea 1): Automatically show notifications that should interest the user, without the need for her to configure anything, just by knowing her history (views, edits, etc.).
Idea 14: Applications Connector: I would like to have some basic structures (properties / data types) that are used by all applications, so that they somehow “connect”. For example: if I create a meeting with the ‘Meetings Application’, this app will use the ‘date’ field type, and the meeting will automatically be added to the ‘Calendar Application’ that manages ‘date’ field types. If I mention a user in that meeting, when I go to that user’s profile I will be able to see that he has an event next week. On that profile I will also be able to see that he made lots of revisions to my wiki pages. The use cases are limitless, but my whole point is that our applications are somewhat isolated and we need to manually make them “work together”. An AI-enabled XWiki would be able to make some abstractions by itself and connect / relate entities from multiple applications.
Hi! This is a topic I’ve thought about in the past. I’ve seen some very nice ideas on this thread and it would be really nice if we could implement some of them. Here are some ideas I’ve thought about, although I’m not a good judge of their feasibility, so some might indeed fall in the “wildest dreams” category.
Analytics
Idea 15, building on Idea 4: apart from user proficiency, calculate user engagement.
on an individual level, this will incentivise users to contribute more to the wiki
on a global level, wiki admins will be able to better see user engagement/disengagement and understand wiki adoption across the organization.
Personalisation
Idea 16, building on Ideas 5 and 9: build an AI assistant that asks questions about your project and your collaboration, data management, and integration needs when you first try out XWiki. The assistant then proposes a personalized XWiki flavor:
with a mix of extensions tailored to your particular needs
the rights already configured out of the box for the level of privacy you need.
Content
Idea 17: Identify popular topics across the wiki and:
understand content performance: see which content is overrepresented and which content is missing
empower decision-making about creating new content that has proven popular in the past
Idea 18: auto-tagging of pages based on content
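For Idea 18, a simple first cut is to tag a page with the terms that are frequent in the page but rare across the rest of the wiki. The stop-word list, scoring formula, and sample pages below are illustrative assumptions only:

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; a real one would be much larger.
STOP = {"the", "a", "and", "of", "to", "in", "is", "for", "on"}

def auto_tags(page_text, other_pages, count=3):
    """Suggest tags: terms frequent here but rare in the rest of the wiki."""
    tokens = [t for t in re.findall(r"[a-z]+", page_text.lower())
              if t not in STOP]
    background = Counter(
        t for text in other_pages for t in re.findall(r"[a-z]+", text.lower())
    )
    # Score each term by its local frequency, discounted by how common
    # it is elsewhere in the wiki.
    scored = Counter({t: f / (1 + background[t])
                      for t, f in Counter(tokens).items()})
    return [tag for tag, _ in scored.most_common(count)]

page = ("kubernetes deployment guide: deployment of services "
        "on kubernetes clusters")
others = ["meeting notes for the sales team", "guide to the holiday policy"]
print(auto_tags(page, others))
```

This prints the page-specific terms (here "kubernetes" and "deployment" rank first); anything fancier would move toward proper keyword extraction or topic modeling.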
Idea 19: automatic blog summaries
Idea 20: automatic table of contents based on page content
Idea 21: automatic detection and flagging of sensitive and personal identifying data (for GDPR), triggering a notification to the administrator
Idea 12 (or is it 22?): When doing a search, include pages/content with terms “similar” to the search terms. Maybe with an optional button: “Also search for X, Y?”
However, this normally requires some kind of dictionary / taxonomy to know which words are “similar”. I’m not sure this can be extracted from the contents of the wiki. Maybe just some named entity recognition would work, e.g. figuring out that “AWM” = “application within minutes”.
About Idea 2: the use case I often see is data sets stored in large tables instead of several pages. In that case one does not need much AI to figure out the structure, however …
Idea 23 (inspired by Idea 17): Evaluate the quality of the data (the knowledge) and generate questions to ask users, so the content can be improved thanks to their answers. This new content could be automatically generated, in natural language, by a very powerful AI. The idea is that the system is responsible for the quality of the knowledge it contains: its obsolescence, its pertinence, and its completeness.
Idea 25: Upgrade search by showing results based not only on words matched from your query, but also on words related to the ones you searched for. Example: search for “Obama” and also show results including “White House”, “USA”, etc., with the score varying according to how “related” each word is to the searched word(s). This builds a bit on the older Scribo project (scribo - XWiki SAS and https://systematic-paris-region.org/fr/projet/scribo/), which focused on extracting named entities from unstructured text. The next level would be to index these entities in the search database, along with the relations between them, and use both when producing the results and their scores.
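One cheap way to approximate "related words" without an external taxonomy is co-occurrence within pages: terms that frequently appear in the same pages as the query term are proposed as expansions. This is only a sketch with invented pages; a real system would use the extracted entity relations (or embeddings) instead:

```python
import re
from collections import Counter

def related_terms(query, pages, top=2):
    """Terms that most often co-occur with `query` in the same page."""
    query = query.lower()
    cooc = Counter()
    for text in pages:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        if query in tokens:
            cooc.update(tokens - {query})
    return [t for t, _ in cooc.most_common(top)]

pages = [
    "obama visited the white house",
    "obama usa president white house",
    "recipe for apple pie",
]
print(related_terms("obama", pages))
```

Here "white" and "house" come out on top because they co-occur with "obama" in two pages; the expanded terms could then be added to the Solr query with a lower boost.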
Idea 26: Automated SPAM detection for public wikis: Compute a percentage representing the likelihood that a user’s change is actually SPAM, and take an action (like dropping the change, reverting previous changes, deactivating the user, etc.) when the likelihood is above a specified threshold. Previous changes could also be considered, to cover the case where a malicious user splits his contribution into numerous small changes in order to avoid detection.
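The "likelihood above a threshold" part could start as a hand-tuned signal combination before moving to a trained classifier. Everything below (signals, weights, threshold, actions) is an illustrative assumption:

```python
import re

# Illustrative SPAM signals, each with a hand-picked weight.
SIGNALS = [
    (0.4, lambda text: len(re.findall(r"https?://", text)) >= 3),
    (0.3, lambda text: bool(re.search(r"\b(viagra|casino|lottery)\b",
                                      text, re.I))),
    (0.2, lambda text: text.isupper() and len(text) > 20),
    (0.1, lambda text: len(text) < 5),
]

def spam_likelihood(change_text):
    """Sum the weights of all signals that fire on this change."""
    return sum(w for w, check in SIGNALS if check(change_text))

def handle_change(change_text, threshold=0.5):
    return "drop" if spam_likelihood(change_text) >= threshold else "accept"

spam = "WIN the LOTTERY http://a.spam http://b.spam http://c.spam"
print(handle_change(spam), handle_change("Fixed a typo in the install guide"))
# → drop accept
```

To catch the "many small changes" trick, the score would be accumulated over the user's recent change history rather than computed per change.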
Idea 27: Automatic template creator: Given a number of similar pages selected by the user, the app should be able to generate a template (with the common content and structure) to be used when creating new pages. This is similar to the AWM idea (which would generate an AWM app with structured content), but in some cases templates might be a more practical approach.
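A first approximation of Idea 27 is to keep only the lines shared by all selected pages, in order, which naturally surfaces headings and labels while dropping page-specific values. The sample page contents are invented; this sketch uses Python's stdlib `difflib`:

```python
import difflib

def common_template(pages):
    """Keep the lines common to all pages, in order, as a candidate template."""
    template = pages[0].splitlines()
    for page in pages[1:]:
        matcher = difflib.SequenceMatcher(None, template, page.splitlines())
        # Retain only the lines that also appear (in order) in this page.
        template = [
            line
            for block in matcher.get_matching_blocks()
            for line in template[block.a : block.a + block.size]
        ]
    return "\n".join(template)

pages = [
    "= Meeting =\nDate: 2023-05-04\n== Attendees ==\nalice, bob"
    "\n== Notes ==\nshipping",
    "= Meeting =\nDate: 2023-06-01\n== Attendees ==\ncarol"
    "\n== Notes ==\nbudget",
]
print(common_template(pages))
```

This keeps the heading skeleton ("= Meeting =", "== Attendees ==", "== Notes =="); a smarter version would also recognize that "Date: …" lines share a prefix and turn the varying part into a placeholder.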
Idea 28: Index image attachments: Extract visual information from an image and add it to the search index, so it can be used directly to find the image (similar again to what Scribo did; Tika already extracts metadata from images, but does not do any OCR or computer-vision-like analysis).
Idea 30: Content tone analysis: analyse how content comes across to readers, then see which tone resonates with the audience based on engagement with the content (e.g. views, edits, comments, annotations).
I really like this idea, but I think it is something pretty hard to do. Some research has been done around this idea: Text summarization with TensorFlow.