Impact of modern front-end technologies on non-functional properties

Hello,
I’d like to start a discussion about technical problems we might get when we switch to new front-end technologies, like web components or Vue.

I’m going to list the ones I thought of to begin the discussion, but I would like to hear as many opinions as possible, both about those and about the ones I missed. I’ll probably start separate discussions later for each of the identified issues.

When rendering pages server-side, the process is roughly the following (a minimal sketch follows the list):

  • Gather data from various sources (e.g., databases, REST endpoints).
  • Feed them to a template engine (i.e., Velocity in our case).
  • Send the produced HTML to the client.
  • The HTML is rendered in the browser.
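
To make this concrete, here is a minimal sketch of that flow in TypeScript. It is not our actual stack: the endpoint URL and data shape are invented, and a template literal stands in for the Velocity template.

```ts
import { createServer } from "node:http";

// 1. Gather data from various sources (here: a hypothetical REST endpoint).
async function gatherData(): Promise<{ title: string; body: string }> {
  const response = await fetch("https://wiki.example.org/rest/wikis/xwiki/pages/Home");
  return response.json();
}

// 2. Feed the data to a template (a template literal stands in for Velocity).
function renderTemplate(data: { title: string; body: string }): string {
  return `<!DOCTYPE html>
<html>
  <head><title>${data.title}</title></head>
  <body><h1>${data.title}</h1><div>${data.body}</div></body>
</html>`;
}

// 3. Send the produced HTML to the client; 4. the browser simply renders it.
createServer(async (_req, res) => {
  const html = renderTemplate(await gatherData());
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(html);
}).listen(8080);
```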

When rendering pages client-side, the process is as follows (again, a sketch follows the list):

  • Directly return a minimal HTML page along with JavaScript dependencies.
  • The JavaScript is evaluated and loads 1) a front-end rendering framework (i.e., Vue), 2) the business logic, and 3) the dynamic content, usually from REST endpoints.
  • The rendering framework takes the dynamic content and produces the HTML of the actual page.
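
And a minimal sketch of the client-side counterpart with Vue 3, again with an invented REST URL and data shape:

```ts
import { createApp, h, onMounted, ref } from "vue";

// The server only returned `<div id="app"></div>` plus this script.
createApp({
  setup() {
    const page = ref<{ title: string; body: string } | null>(null);

    // Load the dynamic content from a REST endpoint once the app is mounted.
    onMounted(async () => {
      const response = await fetch("/rest/wikis/xwiki/pages/Home");
      page.value = await response.json();
    });

    // The rendering framework turns the dynamic content into the actual HTML.
    return () =>
      page.value
        ? h("main", [h("h1", page.value.title), h("div", page.value.body)])
        : h("p", "Loading…");
  },
}).mount("#app");
```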

The main advantage of client-side rendering is that it is easier and faster to update parts of the UI based on dynamic data than it is to reload the whole page.

But the initial process of loading the JavaScript dependencies and data can be slow, and it requires the client to run a JavaScript interpreter.

Initial page load performance

The first obvious limitation is the time to the first page load, especially when none of the resources are already in the browser cache.

Bad SEO

This is an indirect consequence of the initial page load performance. Web indexers can now interpret JavaScript and, based on what I could read, it seems to be reliable (see https://www.reddit.com/r/vuejs/comments/18m9tzc/google_seo_for_vue_spa_research/, https://www.reddit.com/r/TechSEO/comments/103u6jz/vuejs_and_seo_will_clientside_rendering_destroy/, and Vue.js SEO Tips | DigitalOcean).

That said, I already identified limitations in the context of Cristal (see Loading...).

Therefore, there is a risk, and we need to define best practices to avoid hurting the SEO abilities of XWiki.
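
One such best practice could be to keep rendering the critical content on the server even when the UI is built with Vue, for example through its server renderer. This is only a sketch of the idea, not a proposal for how we would integrate it; the component and data shape are made up:

```ts
import { createSSRApp, defineComponent, h } from "vue";
import { renderToString } from "vue/server-renderer";

// A hypothetical page component; in practice the same component would also be
// shipped to the browser and hydrated there.
const Page = defineComponent({
  props: { title: String, body: String },
  setup(props) {
    return () => h("main", [h("h1", props.title), h("div", props.body)]);
  },
});

// Called on the server: crawlers receive real HTML instead of an empty shell
// that only becomes meaningful after JavaScript runs.
export async function renderPage(data: { title: string; body: string }): Promise<string> {
  return await renderToString(createSSRApp(Page, data));
}
```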

Routes definition

So far, page URL resolution is done server-side. With client-side rendering, and the absence of full page reloads, URL resolution must be done client-side.
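
In practice, that means the client code has to declare all the routes it can resolve, for example with vue-router. The paths and components below are purely illustrative:

```ts
import { h } from "vue";
import { createRouter, createWebHistory } from "vue-router";

// Hypothetical page components; in real code these would be proper components.
const PageView = { render: () => h("main", "page content") };
const PageHistory = { render: () => h("main", "page history") };

// Every URL the application can display has to be declared (or matched by a
// pattern) here; anything else still needs a full page load handled server-side.
export const router = createRouter({
  history: createWebHistory(),
  routes: [
    { path: "/view/:space/:page", component: PageView },
    { path: "/history/:space/:page", component: PageHistory },
  ],
});
```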

Accessibility

Similarly, some assistive technologies might expect the full HTML to be rendered straight away and cannot interpret JavaScript.

I’d be interested in input from @CharpentierLucas about this.

Legacy support

Recent front-end frameworks can depend on new APIs that are not supported by older browsers.
Our browser support rules (https://dev.xwiki.org/xwiki/bin/view/Community/SupportStrategy/BrowserSupportStrategy) are pretty permissive, since we only support the latest versions of Edge, Chrome, and Firefox. This leaves a lot of users behind once you sum up all the browsers with a small user base. See Browserslist.

WDYT? Thanks

Technical documentation:

I went and looked around a bit online. I had not often seen this issue mentioned before when reading web accessibility blogs.

[…] screen readers encounter source code after it has been manipulated with JavaScript and CSS, similar to what’s visible in the Chrome or Firefox Developer Tools (as compared to View Source, which is pre-render). That made sense considering screen readers are affected by JavaScript and some CSS properties. When Shadow DOM nodes, or subtrees, are appended to a parent document, they are read aloud as “one happy tree” to a screen reader.

from Accessibility and the Shadow DOM | MarcySutton.com

element IDs are scoped within a shadow root, so a reference from outside of shadow root can’t refer to an element with that ID inside a shadow root. However, it’s also a logical quandary: if the elements inside the shadow root are implementation details which are intentionally opaque from the point of view of any code outside the component, how can they also be a part of a semantic association mediated by code?

from How Shadow DOM and accessibility are in conflict (similar idea discussed in Shadow DOM and accessibility: the trouble with ARIA | Read the Tea Leaves )


As far as I could read online, it should be okay. Screen readers (and other assistive techs) already have a live view of the content through the accessibility tree. They work on the rendered HTML, and they get updates :slight_smile: .

The only issue is with references. A few ARIA attributes use node IDs to point to another element, just like the for attribute on labels. And just like the for attribute, you cannot expect things to work nicely if the reference is not contained within a single tree (the main tree or one subtree) but goes across multiple ones… There’s a reported issue against the specs for this.
This is not a very common drawback. It’s technically possible to do weird things, but from what I remember, such references are usually close to each other in the DOM anyway, so they’ll naturally end up in the same tree in most cases.
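
To illustrate, here is a small made-up example where a label in the main document cannot reach an input inside a shadow root:

```ts
// A label in the main document tries to reference an input that lives inside
// a shadow root. IDs are scoped per tree, so the association is silently lost.
class FancyInput extends HTMLElement {
  connectedCallback() {
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `<input id="title-field" type="text">`;
  }
}
customElements.define("fancy-input", FancyInput);

document.body.innerHTML = `
  <!-- for="title-field" cannot reach inside the shadow root -->
  <label for="title-field">Page title</label>
  <fancy-input></fancy-input>
`;
```

The same applies to the ARIA attributes that take ID references, like aria-labelledby, when the two ends of the reference end up in different trees.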

On the accessibility side, IMO the improved consistency provided by a front-end rendering framework would outweigh this minor drawback.

Thanks for starting the discussion!
Lucas C.

PS: We already use Vue in XS for livedata. Isn’t it close to the model we’d want to follow for the future? AFAIK, there’s no specific accessibility issue with this model.


I’m not sure how you arrived at the conclusion that client-side rendering is not a problem based on these links. For example, the last link contains:

Last but not least, an SPA is, by default, at an SEO disadvantage, because all URLs are handled by a single route, and crawlers will need to be able to run JavaScript to render the full page (an iffy process).

Also, the Reddit discussion is very negative regarding client-side rendering; some quotes:

“will client-side Vue.Js tank SEO” - yes.

I worked at a huge enterprise company that switched to Angular JS because someone said Google would be okay with it. Google was not okay with it. […] Don’t do client side JS

To me, full client-side rendering is only an option for the intranet use case but not for a public wiki. And my understanding is that with XWiki we want to support both.

Therefore, for me, going with full client-side rendering is not an option.

However, to me, this does not contradict using web components. Web components support full server-side rendering of the initial content without JavaScript, as your last link explains.
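
A rough sketch of how that can work with declarative shadow DOM (the element name and markup are invented):

```ts
// Server-side, the HTML already contains the component's content via
// declarative shadow DOM, so it renders (and is indexable) without JavaScript:
//
//   <page-summary>
//     <template shadowrootmode="open">
//       <h2>Home</h2>
//       <p>Welcome to the wiki…</p>
//     </template>
//   </page-summary>
//
// Client-side, the definition only adds behaviour on top of the existing tree.
class PageSummary extends HTMLElement {
  connectedCallback() {
    // If the shadow root was parsed declaratively it is already populated;
    // only render client-side when that markup is missing.
    const root = this.shadowRoot ?? this.attachShadow({ mode: "open" });
    if (!root.hasChildNodes()) {
      root.innerHTML = "<h2>Loading…</h2>";
    }
    // …then attach event listeners, live updates, etc.
  }
}
customElements.define("page-summary", PageSummary);
```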

To me, it is important that we keep the main content and at least some navigation options rendered on the server (I know this doesn’t apply to the navigation panel today, but it does apply to the menus we support). I don’t mind if this doesn’t apply to other navigation options like Live Data, or to other features like comments, which go beyond the basic use by a public user.

That’s something we’re already doing today, e.g., with the in-place edit for object properties, and I see no problem with extending this further. We can have an initial view rendered on the server and then, based on a document model on the client side that receives real-time updates, update the rendering, either with direct client-side rendering or by requesting an updated rendering of parts of the page from the server if the property can’t be rendered on the client side yet. Ideally, we should also have a framework that supports writing templates, such as displayers for properties, that can be executed both on the server and on the client side.
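
As a very rough illustration of that last point, and nothing more than a hypothetical sketch (the names, signatures, and data shapes are all invented): a displayer could be a pure function from a property value to markup, so the same code can run on the server for the initial rendering and on the client for live updates.

```ts
// A displayer is a pure function from a property value to HTML. Because it
// does not touch the DOM or any server API directly, the same function can be
// executed on the server (initial rendering) and on the client (live updates).
// (HTML escaping omitted for brevity.)
type Displayer<T> = (value: T) => string;

const dateDisplayer: Displayer<string> = (iso) =>
  `<time datetime="${iso}">${new Date(iso).toLocaleDateString()}</time>`;

const userDisplayer: Displayer<{ name: string; profileUrl: string }> = (user) =>
  `<a href="${user.profileUrl}">${user.name}</a>`;

// Server side: used while producing the initial HTML.
// Client side: used when a real-time update for the property arrives.
export function renderProperty<T>(displayer: Displayer<T>, value: T): string {
  return displayer(value);
}
```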
