
Modern Best Practices

(And why they’re bad.)

If I had to sum up modern web development in one sentence, it would be this: JavaScript all the things.

So much of modern web development is built around JavaScript.


Best Practice: JavaScript Frameworks

At the heart of modern web development best practices are frameworks like Angular, React, and Vue.

They’re listed on almost every front end developer job description I see. “What framework did you use?” is a pretty common question when someone announces an awesome new project on Twitter.

What’s the appeal of these tools?

One of the big attractions to frameworks is how they create user interfaces. Let’s say you wanted to build a todo list app, and you have some data about the user and their todo items.

var data = {
    todos: [
        {
            item: 'Adopt a puppy',
            completed: true
        },
        {
            item: 'Buy dog food and a dog bed',
            completed: false
        }
    ],
    username: 'Chris'
};

And you want the UI to look something like this, with a field to add todos, and a list of their todo items.

Completed items should be checked off and struck through.

<h1>Chris's Todos</h1>

<form id="add-todo">
	<label for="todo-field">What do you want to get done today?</label>
	<input type="text" id="todo-field">
	<button>Add Todo</button>
</form>

<ul>
	<li class="completed">
		<label>
			<input type="checkbox" checked>
			Adopt a puppy
		</label>
	</li>
	<li>
		<label>
			<input type="checkbox">
			Buy dog food and a dog bed
		</label>
	</li>
</ul>

Whenever they add a new item, or complete an item, you need to update the UI.

With traditional DOM manipulation, that would require finding the list in the UI, creating a new element, and appending it. Or adding a class when an item is checked off.
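
Here’s a rough sketch of what that manual approach might look like with vanilla DOM methods. It assumes the ul above has been given an ID of todo-list, which isn’t in the original markup.

var form = document.querySelector('#add-todo');
var field = document.querySelector('#todo-field');
var list = document.querySelector('#todo-list');

// Create and append a new list item whenever the form is submitted
form.addEventListener('submit', function (event) {
	event.preventDefault();

	var item = document.createElement('li');
	var label = document.createElement('label');
	var checkbox = document.createElement('input');
	checkbox.type = 'checkbox';
	label.appendChild(checkbox);
	label.appendChild(document.createTextNode(' ' + field.value));
	item.appendChild(label);

	list.appendChild(item);
	field.value = '';
});

// Strike through items when they get checked off
list.addEventListener('change', function (event) {
	event.target.closest('li').classList.toggle('completed', event.target.checked);
});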

And if the user has no todo items, maybe you want to show a message asking them to add one, which requires you to target the UI differently when they add their first item.

<h1>Chris's Todos</h1>

<form id="add-todo">
	<label for="todo-field">What do you want to get done today?</label>
	<input type="text" id="todo-field">
	<button>Add Todo</button>
</form>

<p>You don't have any todos yet. Please add one.</p>

State-Based UI

Frameworks use an approach called state-based or data-based UI.

You create a template for a piece of UI and tell the framework, “If my data looks like this, do this. If it looks like that, do something else.”

if (data.todos.length > 0) {
    // create list items
} else {
    return 'You don\'t have any todos yet. Please add one.';
}

Instead of updating the UI directly, you update your data and the framework handles the rest. It uses a process called diffing to look for differences between the current UI and how it should look, and updates only what’s needed.
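
To make that a bit more concrete, here’s a deliberately simplified sketch of the pattern, with no diffing at all: a template function accepts the data and returns the markup, and the whole thing gets re-rendered whenever the data changes. The app element is hypothetical; the data object is the one from earlier.

var app = document.querySelector('#app');

var template = function (data) {
	if (data.todos.length < 1) {
		return '<p>You don\'t have any todos yet. Please add one.</p>';
	}
	return '<ul>' + data.todos.map(function (todo) {
		return '<li' + (todo.completed ? ' class="completed"' : '') + '>' + todo.item + '</li>';
	}).join('') + '</ul>';
};

var render = function () {
	app.innerHTML = template(data);
};

// Update the data, then re-render.
// A real framework would diff the output and only touch what changed.
data.todos.push({item: 'Walk the puppy', completed: false});
render();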

It’s really smart!

Performance at Scale

The big well-known frameworks (React, Vue, etc.) use something called the virtual DOM. This is a JS-based representation of the real UI that they use to figure out what needs to be updated when the data changes.

When working with really large data sets, it can be more performant than querying the actual DOM. For apps that need to perform “at scale,” this is an often-cited benefit of frameworks.
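
Conceptually, a virtual DOM node is just a plain JavaScript object describing an element. A very simplified sketch (not any particular framework’s actual format) might look like this.

// One virtual DOM node describing the first todo in the list
var vnode = {
	type: 'li',
	props: {class: 'completed'},
	children: [
		{type: 'label', props: {}, children: ['Adopt a puppy']}
	]
};

When the data changes, the framework builds a new tree of these objects, compares it to the old one, and only touches the real DOM where the two differ.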

Fewer Bugs

And because established frameworks are used by thousands of developers at hundreds or thousands of companies, you have more people testing the code, finding bugs, and pushing fixes.

The code is more resilient.

So… what’s the problem?

You may be thinking: that all makes sense and sounds good. What’s wrong with this?

One of the main arguments for using frameworks is performance. But frameworks also cause a lot of performance problems.

We talk about performance at scale a lot.

But these tools were designed by companies that deal with a level of scale most of us never will. Most of the sites we build don’t have Facebook- or Twitter-sized data. Most never get there, either.

We like that frameworks have tons of bug fixes baked in, but many of them are fixes for weird edge cases we’ll never encounter and features we don’t need.

We’re inheriting solutions to other people’s problems, not ours.

And all of that JS that we load for the scale issues and edge case bugs we don’t actually have has a huge impact on performance—particularly first page load.



We’re sacrificing initial page load performance (our users’ first impression of our site or app) in the hope of faster page loads later.

I’m not so sure that tradeoff is worth it.

JavaScript is the most expensive part of the stack

Because of how browsers work, JavaScript is much worse for performance than HTML and CSS are.

Addy Osmani wrote a great article on why this is the case.1 He noted:

Byte-for-byte, JavaScript is still the most expensive resource we send to mobile phones, because it can delay interactivity in large ways.

A few years back, there was a movement called “1 Less JPG.”

The argument was, rather than worrying about 200-300kb of JS, just use one less JPG—about the same file size—and get on with it.

The problem is, it’s not just about download size.

JS is a lot more demanding to parse and execute. It blocks the page from rendering. It blocks other files from downloading. And it can’t simply run the moment the browser receives it.

In that same article, Addy Osmani wrote:

[It needs to be] parsed, compiled, executed —and there are a number of other steps that an engine needs to complete.

All of this JavaScript is devastating for front end performance.

In September of 2019, Zach Leatherman tweeted:2

Which has a better First Meaningful Paint time?

  1. a raw 8.5MB HTML file with the full text of every single one of my 27,506 tweets
  2. a client rendered React site with exactly one tweet on it

(Spoiler: @____lighthouse reports 8.5MB of HTML wins by about 200ms)

Take a moment to wrap your head around that. It’s perceivably faster to load 8.5 megabytes of HTML than it is to load a single tweet with a client-side React app.

Best Practice: Package Managers & Module Bundlers

Because these large JS bundles create a performance problem, we started using package managers like Bower and Yarn, and module bundlers like Webpack, Parcel, and Rollup.

They handle dependency management, figuring out what JS files to include on any given page based on what else is there (rather than loading all the things on every page).

Newer module bundlers also offer a feature called “tree shaking,” in which code that isn’t actually used gets dropped from the final bundle.
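
As a rough illustration, imagine a hypothetical helpers.js module that exports two functions, while your app only imports one of them. A bundler with tree shaking can leave the unused one out of the final bundle.

// helpers.js (hypothetical module)
export function double (num) {
	return num * 2;
}

export function triple (num) {
	return num * 3;
}

// app.js: only double() is imported, so triple() can be dropped from the bundle
import {double} from './helpers.js';
console.log(double(21));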

Dependency Hell

This isn’t a terrible idea, but the tradeoff for this approach is high setup and maintenance costs.

Because of all of the moving parts and deep dependency trees in these modern approaches, getting set up in the first place and maintaining that setup can be expensive and time consuming.

Getting new developers set up with everything they need to work with your code base becomes a complicated, delicate balance. You sometimes spend your first week on a new job fighting with the terminal and debugging setup processes.

And if one of the dependencies somewhere in your chain gets out-of-date, your whole build can come crashing down on you.

Complexity has cost

The tools that are supposed to help us and save us time often end up costing us more time because of tiny “gotchas”.

Kyle Shevlin tweeted:3

My workflow today: 15 minutes of writing code that works and does what I want. 2 hours of trying to appease the static type gods.

Here’s another example from the insanely talented Brad Frost:4

That depressing realization that the thing you’re slogging through in a modern JS framework could have taken you 10 minutes in jQuery.

Nicole Dominguez shared similar thoughts about Webpack:5

ah webpack, javascript –– turning a 1 hour project setup into an 8 hour affair

JR Cook learned the hard way:6

I made the mistake last year on my team to push one of the apps we were developing from just using JQuery/Vanilla JS to use Angular because I assumed it would make our lives easier. It did not, it just added complexity.

Best Practice: CSS-in-JS

CSS-in-JS is supposed to address some of the challenges of working with CSS in a component-based design system on a team of developers (who are sometimes more comfortable with JS than CSS).

Let’s say you had a component styled like this.

.callout {
    background-color: slategray;
    font-size: 2em;
}

<div class="callout">
    👋 Hi there!
</div>

There’s a chance that other components in your UI might also have a background color of slategray or a font-size of 2em. Over time, you end up with a lot of redundancy in your stylesheet.

There’s also a chance you might create a component with some styles that later gets dropped out of the project, but forget to remove the styles. This adds even more bloat.

In CSS-in-JS, you might do something like this.

var callout = {
    backgroundColor: 'slategray',
    fontSize: '2em'
};

<div class={callout}>
    👋 Hi there!
</div>

It looks pretty similar, but uses JS conventions instead of CSS ones.

That gets rendered into something like this.

<div class="jzrps nibae lupp mihax">
    👋 Hi there!
</div>

The idea is that each property gets broken up into its own class and added to the element, so you can reduce that redundancy.

Only what’s needed for the page is loaded, and you avoid issues with “the cascade” trickling unwanted styles onto other elements.

Better for performance

Nicolas Gallagher, formerly of Twitter engineering, shared this data7 on Twitter’s move away from their legacy codebase to a CSS-in-JS solution.

Legacy site downloads ~630 KB CSS per theme and writing direction…

PWA incrementally generates ~30 KB CSS that handles all themes and writing directions.

The results are impressive, but… 630 KB feels like an unnecessarily large amount of CSS for what the Twitter UI was.

As we’ll see in a little bit, the resulting code from their PWA is effectively the same as some other purely CSS-based techniques, but with a ton of extra tooling and fragility around it.

You end up with classes that are completely unreadable to humans, which makes debugging UI issues harder, too.

Gatekeeping

This also has another consequence: it excludes people without JS expertise from the process.

And in my experience, people with deep, specialized CSS expertise don’t always have the same level of comfort or proficiency with JavaScript (nor should they be expected to).

These tools exclude people who have important, essential contributions to make from participating in the process.

Alex Russell is a developer on the Chrome team. In his 2018 article, “The ‘Developer Experience’ Bait-and-Switch,”8 he talks about the straw-man argument people make around using frameworks.

Here’s a straw-man composite from several recent conversations: These tools let us move faster. Because we can iterate faster we’re delivering better experiences.

It probably improves the experience for some developers—specifically people who are more comfortable in JS than other parts of the stack.

But for people who specialize in CSS and semantic HTML, or web accessibility, or user interaction patterns, it can leave them shut out of the development process in ways they were not before.

Gatekeeping has business consequences

In 2018, A11Y consultant Rian Rietveld resigned from her position as the WordPress accessibility team lead, and documented why in a detailed article.9

The tl;dr: Gutenberg is the new WP editor, and it’s built on React.

Because of that, and because no one on the team had React experience (nor could they find volunteers in the a11y community who did), they couldn’t effectively work on improvements themselves.

This made it very difficult for Rian and her team to do the work they were tasked with doing.

In May of 2019, a detailed A11Y audit of the new Gutenberg editor was conducted.10 It was a 329-page report detailing various accessibility issues. The executive summary alone was 34 pages, and it documented 91 accessibility-related bugs in quite a bit of detail.

So much of this could have been avoided if Rian and her team hadn’t been locked out of the process because of technology choices.

These tools may let some developers on your team work faster, but I’m not convinced they result in better experiences for your users.

Best Practice: Single Page Apps

There’s another way we’ve attempted to get at this performance issue: single page apps.

With a single page app, the whole site or app exists in a single HTML file. JS renders the content, handles URL routing, and so on.



Only the content refreshes, which is theoretically better for performance than redownloading all of the JS and CSS needed for each page in your app.

It also lets you create fancy page transitions.

But this also breaks a bunch of stuff the browser gives you for free, which you then need to recreate with JS.



You need to…

  • Intercept clicks on links and suppress them,
  • Figure out which HTML to show based on the URL,
  • Update the URL in the address bar,
  • Handle forward/back button clicks,
  • Update the document title,
  • And shift focus back to the document.
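
A rough sketch of just the routing pieces might look something like this. Everything here is illustrative: the routes object, the app element, and the markup are all hypothetical, and a real implementation has to handle far more edge cases.

var app = document.querySelector('#app');

// Hypothetical map of URL paths to content
var routes = {
	'/': {title: 'Home', html: '<h1 tabindex="-1">Home</h1>'},
	'/about': {title: 'About', html: '<h1 tabindex="-1">About</h1>'},
	'/404': {title: 'Not Found', html: '<h1 tabindex="-1">Not Found</h1>'}
};

// Figure out which HTML to show based on the URL,
// update the document title, and shift focus
var render = function (path) {
	var route = routes[path] || routes['/404'];
	app.innerHTML = route.html;
	document.title = route.title;
	app.querySelector('h1').focus();
};

// Intercept clicks on same-site links and suppress them
document.addEventListener('click', function (event) {
	var link = event.target.closest('a');
	if (!link || link.host !== window.location.host) return;
	event.preventDefault();
	history.pushState(null, '', link.href);
	render(link.pathname);
});

// Handle the forward/back buttons
window.addEventListener('popstate', function () {
	render(window.location.pathname);
});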

This is all stuff the browser just does for you by default. It feels like a vicious circle.

We’re literally breaking features the web gives us out of the box with JavaScript, to fix the performance issues we created with JavaScript, and then reimplementing those features with even more JavaScript… all in the name of performance (which, again, we ruined in the first place with all of that JavaScript).

Like I said earlier… bonkers!

Fragility

Our over-reliance on JavaScript has created a front end that’s incredibly fragile. The smallest mistake can cause the whole thing to come crashing down.

We run into things like the white screen of death.

This happens because the HTML the server sends is nothing but an empty div, and the “real markup” is rendered entirely with JavaScript. When that JavaScript file fails for some reason (or just hasn’t loaded yet), you get nothing.

And yes, it’s 2019. JavaScript is an integral and important part of the web. Most people don’t disable it.

But CDNs fail. In July 2019 a bad deploy took down Cloudflare,11 a CDN provider used by 10% of Fortune 1000 companies. No CDN, no JS.

Firewalls and ad blockers get overly aggressive with what they block. The absurdly large JS files that we send time out on slow connections.

Ian Feather, an engineer at Buzzfeed, shared that about 1% of requests for JS on their site fail.12 That’s 13 million requests a month!

People browsing on mobile devices while commuting go through tunnels and lose the internet.

JavaScript is the most fragile part of the stack

Obviously no one sets out to write code with bugs, but it happens. And when it does, JavaScript is the most fragile part of the stack.

If you fat-thumb the keyboard and type dvi instead of div, the browser ignores what you wrote, treats it like a div, and moves on.

<!-- Browsers render this... -->
<dvi></dvi>

<!-- Like this... -->
<div></div>

If you mistype a CSS property—for example, writing bg-color instead of background-color—the browser just ignores the property and moves on.

.hero {
    bg-color: #f7f7f7; /* The browser ignores this */
    width: 100%;
}

But let’s say you misspelled a variable name in your JS—writing num as nmu in the example below.

As soon as that function runs, JavaScript would throw an error and just… stop. The whole thing would stop working.

var doubleIt = function (num) {
	// Throws "ReferenceError: nmu is not defined"
	// and breaks all the things
	return nmu * 2;
};

HTML and CSS fail gracefully. JavaScript does not.
