The subject of minifying CSS—by which I mean the removal of all whitespace and comments, not optimizations like removing duplicate rules and combining selectors where possible—comes up occasionally on the Eleventy Discord. I always advise against it. For me, a large part of learning HTML and CSS back in the late ’90s and early 2000s was peeking at the source code of the webpages I saw, and I’ve never understood the desire to add an extra step to your pipeline that does nothing but prevent that.

A few people have suggested that you should minify your stylesheets because the extra whitespace and comments worsen performance. I agree that there is non-zero overhead in downloading and parsing the extra data. I strongly disagree that it can have any perceptible effect whatsoever.

Let’s tackle the size first. Per CSS-Tricks, just gzipping Bootstrap takes it from 147 to 22 KB, while minifying it before gzipping brings it to 20 KB. Given that this is a difference of 2 KB on even a large library—contrast with the 38 KB of compressed HTML on that CSS-Tricks page, to say nothing of its 1.80 MB compressed total—I believe we can dispense with any concerns about size.
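To see why gzip makes the size argument moot: whitespace and comments are exactly the kind of repetitive, low-entropy data that DEFLATE compresses away. This sketch uses a synthetic repeated rule standing in for a real stylesheet (the Bootstrap figures above were measured by CSS-Tricks on the real file) and a deliberately naive minifier, and compares the gzipped sizes of the pretty-printed and minified versions:

```python
import gzip
import re

# A small rule repeated many times to simulate a stylesheet (an assumption;
# real stylesheets are less repetitive, but the principle is the same).
rule = """
/* card component */
.card {
    display: flex;
    flex-direction: column;
    border: 1px solid #dee2e6;
    border-radius: 0.25rem;
}
"""
pretty = rule * 500

# Naive minification: drop comments, collapse runs of whitespace to a single
# space, then remove whitespace around punctuation. Not production-grade.
no_comments = re.sub(r"/\*.*?\*/", "", pretty, flags=re.DOTALL)
minified = re.sub(r"\s*([{};:,])\s*", r"\1", re.sub(r"\s+", " ", no_comments)).strip()

pretty_gz = len(gzip.compress(pretty.encode()))
minified_gz = len(gzip.compress(minified.encode()))

print(f"pretty:   {len(pretty):>6} B raw -> {pretty_gz:>4} B gzipped")
print(f"minified: {len(minified):>6} B raw -> {minified_gz:>4} B gzipped")
```

The raw sizes differ enormously; the gzipped sizes do not, which is the same pattern the CSS-Tricks measurement shows.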

Next, we come to parsing. I didn’t believe it was possible for the cost of skipping whitespace to have any visible impact on the performance of the page: I can’t imagine even skipping the whitespace around a hundred declaration blocks could be as time-consuming as parsing a particularly complex selector, let alone applying a single rule from a single block. However, this was hard to quantify.

I therefore built a tool to test different scenarios. The URL specifies a seed for a random number generator and the number of declaration blocks to generate; the page then requests that randomly generated stylesheet, with or without random whitespace, and measures only the time from a link element being inserted into the document to its load event firing.
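The tool itself runs in the browser, but the generation side can be sketched like this. The property pool, class-naming scheme, and whitespace rules below are my assumptions for illustration, not the tool's actual implementation; the key idea is that the same seed always yields the same stylesheet, so the padded and unpadded variants are comparable:

```python
import random

def generate_stylesheet(seed: int, blocks: int, whitespace: bool = True) -> str:
    """Deterministically generate `blocks` declaration blocks from `seed`.

    A sketch of the generator idea: one single-declaration block per class,
    optionally padded with random whitespace.
    """
    rng = random.Random(seed)
    props = ["color", "background-color", "border-color", "outline-color"]
    rules = []
    for i in range(blocks):
        prop = rng.choice(props)
        value = f"#{rng.randrange(0x1000000):06x}"  # random hex color
        if whitespace:
            pad = " " * rng.randrange(1, 5)  # random extra indentation
            rules.append(f".c{i} {{\n{pad}{prop}: {value};\n}}\n")
        else:
            rules.append(f".c{i}{{{prop}:{value}}}")
    return "".join(rules)
```

With the seed fixed, the only variable between two runs is the whitespace, which is what the timing is meant to isolate.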

I tried it on my desktop and phone with large block counts. A 10,000-block stylesheet was parsed in 9–10 milliseconds depending on the absence or presence of whitespace. A 100,000-block stylesheet was parsed in 95–105 milliseconds. A more realistic 1,000-block stylesheet was parsed too quickly to measure any difference.

I was ready to draw my conclusions, but dwkns on the server suggested a different approach using WebPageTest:

Here is how I would go about it:

  • Large CSS - with comments — raw and minified versions
  • The same Large CSS - without comments — raw and minified versions
  • Small CSS - with comments — raw and minified versions
  • The same Small CSS - without comments — raw and minified versions

8 identical HTML files to host the above which apply a subset of your CSS rules. These should all be served from the same server.

Then run against them multiple times (probably want to use the API) simulating a number of devices, connections and browsers.

You'll be able to get the start and end times for the CSS load and measure the time from CSS load to Start Render. Assuming you have exactly the same HTML for each of the 8 pages this would be a good proxy for how quickly the browser can parse and render the css.
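The matrix of stylesheet variants dwkns describes can be produced with a small script along these lines. The whitespace-only minifier (which deliberately leaves comments alone, so the with-comments/minified variant still exists) and the file-naming scheme are my assumptions for illustration:

```python
import itertools
import re
from pathlib import Path

def strip_comments(css: str) -> str:
    """Remove /* ... */ comments."""
    return re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)

def minify(css: str) -> str:
    """Naive whitespace-only minifier: collapse runs of whitespace and drop
    it around punctuation. Comments are preserved on purpose, so the
    comments/no-comments axis stays independent of the raw/minified axis."""
    css = re.sub(r"\s+", " ", css)
    return re.sub(r"\s*([{};:,])\s*", r"\1", css).strip()

def write_variants(name: str, css: str, out_dir: Path) -> list[Path]:
    """Write the four variants of one stylesheet (with/without comments,
    each raw and minified); two stylesheets give the eight files."""
    out_dir.mkdir(parents=True, exist_ok=True)
    paths = []
    for comments, minified in itertools.product((True, False), (False, True)):
        body = css if comments else strip_comments(css)
        body = minify(body) if minified else body
        suffix = f"{'comments' if comments else 'nocomments'}-{'min' if minified else 'raw'}"
        path = out_dir / f"{name}-{suffix}.css"
        path.write_text(body)
        paths.append(path)
    return paths
```

Running `write_variants` once for the large stylesheet and once for the small one yields the eight CSS files, each of which then gets an otherwise identical HTML page.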

I took a stab at this and made a spreadsheet of the raw data (mirrored here in CSV format). To quote my summary on Discord:

With these two scenarios (1,000 blocks and 100,000 blocks), the worst difference I’m seeing on this underpowered device on a slow connection is 54ms for the smaller stylesheet. It’s significantly less with the larger one. The pages can be seen and tested for yourself (id doesn’t do anything in this scenario, but it’s mandatory):

Regardless of the approach, the figures show that minifying CSS merely obfuscates it with next to no benefits. I know we have tools in modern browsers to render such code more comprehensible; I just don’t see the need for all the extra steps. Unless you’re a massive company earning millions for every millisecond you shave off your load time and your stylesheet has an egregiously low signal-to-noise ratio, gzip (or better yet, Brotli) is more than enough.