Web Development At Scale: Composable Applications With Web Components

Pawel Uchida-Psztyc
13 min read · Aug 25, 2020


See a video version of this article on YouTube

Background on modularity

From the beginning of web development, we had a problem with modularity. In Java, you can publish a JAR file, a library that can be used natively on the Java platform, and then use Gradle or Maven to manage project dependencies. There was nothing like that for the web platform. Then, in January 2010, the first version of NPM, the Node Package Manager, was released. NPM was initially used to manage dependencies for server-side applications written for NodeJS. That platform is very similar to the web platform we know from the browser environment. However, there is a significant difference: NodeJS only has the APIs defined in the EcmaScript specification. It does not offer the APIs defined for the web. For example, the atob function that decodes a base64 encoded string is defined in the HTML specification, and therefore it is not available in NodeJS. Furthermore, NodeJS has its own set of native APIs that have nothing to do with the web platform: filesystem access, for example, which is not available to web applications just yet (there is a proposal for such an API at the moment), or networking APIs (like net, http, and https) that expose low- and high-level functions for making HTTP requests. Making requests is possible on the web platform too, but the APIs are different, and browsers impose additional limitations through their security model and Cross-Origin Resource Sharing (CORS).

Web developers quickly started using NPM to publish "browser ready" libraries. Such a library shouldn't use NodeJS native APIs, as those APIs just don't exist in a browser. There is a way to work around this with another great project, Browserify, which "translates" NodeJS native APIs into browser equivalents, although that is not always possible. In any case, this solved the huge problem of how to manage dependencies in a web application, but it accidentally created a new one: which dependency can be used in a web application and which only in a NodeJS application. The NPM owners did very little to help with this situation. NodeJS has its own way of including dependencies, called CommonJS, but it is not a web standard, and at the time the only way to include a dependency in a web application was the <script> tag. Developers therefore started experimenting with new ways of including dependencies in web applications: CommonJS, which is what NodeJS supports, and AMD, which was based on CommonJS but supported asynchronous dependency loading. Both, however, require loading additional support libraries, meaning more code that has to be downloaded, parsed, and executed. At the time, a browser still wasn't able to process dependencies natively other than through a simple <script> tag.
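To make the contrast concrete, here is roughly how the same dependency would be consumed under each system. This is a minimal sketch; the "format-date" module is a made-up name used only for illustration.

// CommonJS, as NodeJS supports it: synchronous, resolved when require() runs.
const formatDate = require('./format-date');
console.log(formatDate(new Date()));

// AMD (for example with RequireJS): asynchronous, the factory callback runs
// only once the dependency has been downloaded and evaluated in the browser.
define(['./format-date'], function (formatDate) {
  return {
    today: function () {
      return formatDate(new Date());
    },
  };
});

In the browser, neither worked without a loader library; the only native option was still a <script> tag.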

In 2010, Alex Russell, a software engineer at Google, and a team of engineers from various fields spent months exploring how the web platform could be fixed if it were designed today, while keeping backward compatibility. After weeks of arguing and prototyping, they proposed a new concept at the Fronteers Conference in 2011 with a simple idea at its center: subclass the HTMLElement interface and build the web from there. Many libraries at the time were trying to do the same thing: building component-like interfaces for UI libraries. That was the beginning of the Polymer project, the first web components support library. In a way it was a proof of concept, and it's still around, but today we have many more options, like LitElement or Lightning Web Components. Over the years, new proposals for web specifications emerged from the concept of a custom element that extends an HTML element, a template, or a shadow root. There were more concepts as well, like HTML imports. Some of them survived, and some are now long forgotten. In 2019, browser vendors agreed on the final version of the specifications that are now known under the common name of web components. We waited so long for the final specification because the vendors couldn't agree on how the new module system should work in a browser. The original proposal from the Chrome team was HTML imports, which allowed importing a template and its logic as an HTML file. That idea didn't last.

Today every browser natively supports loading scripts as EcmaScript modules. They are similar to the CommonJS system supported by NodeJS in the way they resolve and load dependencies, and there is a proposal to add dynamic loading of EcmaScript modules on top of that. Now we can create parts of application logic and treat them as first-class citizens: we can publish that logic as a separate module and reuse it in any web project with native support from the web platform. This gives us new possibilities for architecting web applications, particularly architectures modeled on how the NodeJS environment manages dependencies.
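As a quick illustration, this is what the native syntax looks like; the module paths are hypothetical and only stand in for any published module.

// Static import: the browser resolves and loads this before the module runs.
import { parseHeaders } from './headers-parser.js';

console.log(parseHeaders('content-type: application/json'));

// Dynamic import: loads a module on demand and returns a promise.
import('./headers-editor.js').then((module) => {
  customElements.define('headers-editor', module.HeadersEditor);
});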

Architecting web ecosystem

Now, let’s talk about the architecture of web applications that are highly scalable thanks to componentization. Don’t get me wrong. I’ll be talking about web components as today this is the web way of programming web applications, but you can apply it as well, for example, to the React ecosystem of components. To move forward, I’ll be talking about: base components, design system components, composite components, and a concept of a shell application.

Base components

Base components are the fundamental parts of your ecosystem. These are the low-level components that build basic UIs: all kinds of inputs, lists, dropdowns, tabs, carousels, and so on. You could already apply styling to these components, but here's the problem. If you ever decide to use a different design system, or your organization uses different styles for different applications in separate departments, then you may end up recreating the entire base UI library every time the design system changes between teams. That doesn't sound like a scalable way to build UIs. Instead, the best practice when building your own component library is to create a base that is completely unstyled. It can use general layout styling, like applying the flexbox model, but it should never include positioning styling like paddings and margins, nor any other property that would be considered a theme property.

To give you an example, consider building a custom spinner HTML element. In the base class, you don't really need to apply any styling other than telling the up and down buttons to stack on the right-hand side of the value label. The styles apply the flexbox model to the component and stack the buttons on top of each other.

Unstyled spinner — a base component
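A minimal sketch of what such a base element could look like, using a vanilla custom element. The tag name and markup are illustrative, not the author's actual code.

// A bare-bones spinner: a value label with up/down buttons stacked on the
// right. Only structural (flexbox) styling, no theme properties.
class SpinnerBase extends HTMLElement {
  constructor() {
    super();
    this.value = 0;
    const root = this.attachShadow({ mode: 'open' });
    root.innerHTML = `
      <style>
        /* Layout only: flexbox for the host, buttons stacked on the right. */
        :host { display: inline-flex; align-items: center; }
        .buttons { display: flex; flex-direction: column; }
      </style>
      <span class="label">0</span>
      <div class="buttons">
        <button data-step="1">▲</button>
        <button data-step="-1">▼</button>
      </div>
    `;
    root.querySelectorAll('button').forEach((button) => {
      button.addEventListener('click', () => {
        this.value += Number(button.dataset.step);
        root.querySelector('.label').textContent = String(this.value);
      });
    });
  }
}
customElements.define('spinner-base', SpinnerBase);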

The spinner has very basic styling. You might say it's dull, but that's exactly the point of a base component. Base components only provide the functional layer. The visual layer comes with something I call the:

Design system components

This is the step when you start adding the actual styles to the components. You extend the base classes and add your own styles to the local DOM. Because the styles are encapsulated, there's no problem with styling leaking out to the rest of the application. You can create as many different styles as you need. You then put the components into a bigger component or an application to build UIs that are immersive and compliant with your organization's design system.

Going back to the spinner element example, I can now extend the base component and apply very specific styles to it. This way, the buttons are not just regular buttons anymore but can have a specific colour, padding, and font. The same goes for the label.

Styled spinner — design system component
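Building on the base sketch above, a design system variant could look like the following. The class name and style values are assumptions for illustration only.

// A design-system flavour of the same element: it reuses the base class's
// logic and markup, and only layers theme styles on top.
class FancySpinner extends SpinnerBase {
  constructor() {
    super();
    const theme = document.createElement('style');
    theme.textContent = `
      :host { font-family: Roboto, sans-serif; color: #212121; }
      .label { padding: 0 8px; font-size: 1.2rem; }
      button {
        border: none;
        border-radius: 2px;
        background: #e0e0e0;
        padding: 2px 6px;
        cursor: pointer;
      }
    `;
    this.shadowRoot.append(theme);
  }
}
customElements.define('fancy-spinner', FancySpinner);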

This component now looks more compelling, and you do not have to recreate the entire logic again and again just to apply a different design system.

This example is not the only thing you can do with styling. Even within a single design system, you may have to support different themes. Consider material design: right now it defines schemes for six different themes, I believe. It would be very inconvenient to create separate components for all of these themes, and in most cases we would have to support at least two: light and dark. So how can we make styling scalable across a design system? One option is to add a "theme" property to each component; depending on the selected theme, a different style is applied. This is reasonable and is something developers are used to. In fact, I did this with my set of themed Anypoint Web Components for Mulesoft's open source applications. Because some of the design resources for Anypoint are copyrighted, I couldn't use them in an open-source application I was building, so I decided to support two completely incompatible design systems: material design and our internal Anypoint styling. I ended up creating a different set of stylesheets and applying them depending on a property I set on the components.

But there is another approach that may work with your design system and give you much more flexibility: CSS variables. CSS variables allow you to create a completely different kind of API for your components: the styling API. Technically, you can define all your styles in the components with CSS variables. Later on, it is just a matter of applying a different master stylesheet to the application, with a different set of CSS variable definitions, to completely restyle the components and therefore your application.

body.dark {
--primary-color: #ffffff;
--primary-background-color: #000000;
}

To give you an example, I made a dropdown for Mulesoft's OSS ecosystem. This dropdown element is, by default, styled for the light theme. Just by altering the class name on the body element (adding "dark" in this case), I can apply a dark theme to the component. Instead of a white background, the variables now tell the component to render the background colour as dark grey.
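Inside the component, the styling API is just a set of var() references with light-theme fallbacks. This is a simplified sketch, not the actual published dropdown; switching the body.dark rule from the snippet above re-themes it without touching its logic.

class ThemedDropdown extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <style>
        :host {
          display: block;
          /* Fallbacks describe the light theme; body.dark overrides them. */
          color: var(--primary-color, #000000);
          background: var(--primary-background-color, #ffffff);
        }
      </style>
      <slot></slot>
    `;
  }
}
customElements.define('themed-dropdown', ThemedDropdown);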

After you build the design system components, you are almost ready to start building applications on top of them. You can do this right away, but this could potentially limit the scalability of your development. Let me explain.

Composite components

Historically, most front-end work has been fairly siloed. We built UIs for specific use cases, and that was quite alright. But then we move to another project, and we quite possibly recreate the same or very similar UIs again and again. This repetition happens because it is so hard to decouple the code from the previous application and make it fit a more general purpose. Eventually, we end up just copying the codebase to another application and altering it a little bit for the new use case. But this is not scalable. A lot of duplication is created this way, and the organization has much more code to maintain. Then you have a serious problem with consistency: while one application gets an updated UI, another application may wait months before someone upgrades its codebase to support a new design system. Now imagine that instead of copying the codebase into another siloed application, you build general, high-level components that represent a specific piece of your application's logic, scoped to a UI region. I call them composite components. A composite component is not a base component, one of the basic building blocks of your ecosystem, but rather a component that creates a higher-level UI composed of base components plus specific logic.

To give you an example, consider my other application, the Advanced REST Client. In short, it is an API playground and testing tool. When you open the application, you see the HTTP request editor. The request editor has a part of the UI that is responsible for editing HTTP headers. This component has two main UI states: a form-based editor that renders inputs for each header, and a so-called source editor where you can edit headers in a single text area. Both editors are loaded into a single composite component.

Headers editor composite component
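A rough sketch of what a composite element like this could look like. The tag names and internals are illustrative, not the actual ARC source.

// A composite element that wraps two lower-level editors, switches between
// them, and reports value changes with a DOM event.
class HeadersEditor extends HTMLElement {
  constructor() {
    super();
    this.value = '';
    this.attachShadow({ mode: 'open' }).innerHTML = `
      <form-headers-editor></form-headers-editor>
      <source-headers-editor hidden></source-headers-editor>
    `;
    // Relay value changes coming from either internal editor to the host.
    this.shadowRoot.addEventListener('change', (event) => {
      this._notifyChange(event.target.value);
    });
  }

  // Toggle between the form-based editor and the source editor.
  set sourceMode(enabled) {
    this.shadowRoot.querySelector('form-headers-editor').hidden = enabled;
    this.shadowRoot.querySelector('source-headers-editor').hidden = !enabled;
  }

  // Store the combined headers string and notify the outside world.
  _notifyChange(newValue) {
    this.value = newValue;
    this.dispatchEvent(new Event('change'));
  }
}
customElements.define('headers-editor', HeadersEditor);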

This editor was made for the Advanced REST Client, but it was designed to work with any web application that has to render an input for HTTP headers. Because of this versatility, I was able to use the same editor in other projects like API Console, an API documentation tool that has a UI to test an API endpoint. Normally I would most likely copy the codebase from ARC to API Console to recreate the same functionality. With web components, I can design, build, and publish a single component responsible for producing an HTTP headers value from user input, and reuse it in different projects. Now, if I have to update the UI of the component, say to fix a bug in an input element, I just update the dependencies in the application projects instead of patching each application separately. As long as the applications accept minor and patch versions automatically, your components become highly scalable machinery for web development.

Now, you may ask why you would use custom elements instead of a regular JavaScript library that can just be included in a web page and do the same work. With custom elements, we can enclose the logic, the view, and the model (the JavaScript, CSS, and HTML respectively) in an encapsulated entity. This element inherits directly from HTMLElement, or from whatever base class the web components library you use provides. This means you get a lot to start with. First of all, HTMLElement eventually inherits from the EventTarget interface, which gives you access to the DOM events system: the web platform's native communication system. This is how the component tells the outside world that its state has changed. On the web platform, you don't pass functions down to children so they can call them back; instead, you register an event listener to be informed when something happens. As an example, let's go back to my HTTP headers editor. It informs the parent that something, namely the current value, has changed by dispatching a "change" event. It doesn't matter which editor was enabled when the event was dispatched; the component's internal state handles this, and the outside world should not care about it. When a parent element handles this event, it just reads the value property of the event's target. Notice that this is exactly how the native HTML input element works. In this case, however, I have defined something new on top of it.
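Picking up the composite sketch from above, the consuming side is as small as listening for the event and reading the value from its target.

// The embedding application treats the editor like a native form control:
// it listens for "change" and reads the value from the event target.
const editor = document.querySelector('headers-editor');
editor.addEventListener('change', (event) => {
  console.log('Current headers:', event.target.value);
});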

Now we know that components can communicate with the outside world via events, not by passing functions as callbacks. This is one huge difference between custom elements and libraries. Libraries can mimic this behaviour by registering and dispatching events on a particular node, or by using some other means of communication, like a massive state store with mixed properties and callbacks, which is painful to maintain. Another reason for using components to hold your logic is semantics. HTML and the web platform are declarative rather than imperative, which makes them so easy to work with. You can technically use the web platform imperatively, but it's just a waste of your coding time. When declaring a custom element and using it in your web application, you are declaring a semantic for that specific piece of logic. When the element appears in the document, the browser and the developer understand its meaning. You can't achieve that with plain JavaScript libraries.

Custom elements have no limitations that a regular JavaScript library wouldn't have. You can enclose application logic in them and share it with others. Take another component I made as an example: a custom element that renders a UI and contains the logic to list and search for assets in Anypoint Exchange, a portal for API developers to discover web APIs.

Exchange search component

Everyone who needs functionality like that can use this component like a regular library: add it to their web application and use it as if it were a regular, say, <select> element. The component has the entire logic to query the Exchange for a list of assets and to search for an asset. It even works with another component that allows users to log in to Anypoint Exchange and list private assets. As you can see, building a component is not only about sharing UIs but also the logic that comes with them.
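In practice, consuming such a logic-carrying component looks no different from consuming any other element. The tag and event names below are purely illustrative, not the published component's actual API.

// Drop the element into the DOM and listen for its events.
const search = document.createElement('exchange-search-panel');
search.addEventListener('selected', (event) => {
  console.log('User picked an Exchange asset:', event.detail);
});
document.body.append(search);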

A shell application

So at this point we have unstyled base components, then styled design system components, and then high-level composite components that serve a specific purpose and can work in different environments. We can compose components higher and higher until we reach the point where we have reusable components that represent entire screens of an application. It may seem like overkill, and you might be right. Still, I reuse entire screens when rebuilding the same application for different platforms, like the web platform, Electron, and other environments that are not the plain web but are able to run HTML content. The API Console project works on the web, in an Electron application, and inside an Eclipse-based project, the Anypoint Studio native application.

This all ends with putting the components into an application that is finally distributed to your users: a shell application. It is a shell because it does not contain logic related to the UI or to a specific application flow. Instead, you decouple your application into separate screens that themselves become components, and you install them as dependencies of your application. The shell adds routing and platform-specific bindings, like a persistence layer. It should only be responsible for binding the components together and supporting platform-specific functions. This is a truly de-siloed application, which gives you the most scalability. Even if you don't see a use case for it today, it may become a requirement for your application tomorrow.
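A very small sketch of what a shell could reduce to, assuming the screen components are published as modules; the module paths, tag names, and routes are all illustrative.

// The shell only wires routing to screen components installed as dependencies.
import './screens/request-screen.js';
import './screens/history-screen.js';

// Map hash routes to the screen elements.
const routes = {
  '#/request': 'request-screen',
  '#/history': 'history-screen',
};

function render() {
  const tag = routes[location.hash] || 'request-screen';
  document.querySelector('main').innerHTML = `<${tag}></${tag}>`;
}

window.addEventListener('hashchange', render);
render();

Platform-specific concerns, such as persistence, would be bound here as well, leaving the screens themselves untouched when the application moves to another platform.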

Web components surely won’t fix scalability problems for you without a proper architecture. They will help you achieve the goal after applying the right patterns.


Pawel Uchida-Psztyc

Design, APIs, front-end, strategy, product, and educator. I work for big tech but I share my personal views and opinions.