tl;dr: I recently updated my portfolio and applied a lot of performance-related techniques. This article is about the technologies I used to achieve better loading and compression behaviour. Tools used: TS, Shell, terser, ImageOptim, SQIP, filewatcher, SASS, Network Throttling, DevTools. The average performance of my page is rated 99/100 on Google PageSpeed. I’m still working on the last bits to get to the full 100/100.

First of all I’d like to point out that in the last 16 years of coding I’ve become a fanboy of only one thing: don’t be a fanatic and don’t talk down any technology. There is no such thing as fits-all-use-cases. These two articles are a metaphor for what I am saying: this and this.

Good, now that we’re past that one we can probably skip the technology-fanatic discussions as well as justifications like “Why didn’t you [put whatever you want in here]…”.

My requirements were simple:

My portfolio should be prepared to be viewed on a mobile phone with a GPRS connection within an expected waiting time (EWT).

WTF is expected waiting time?

If you go to IKEA on a Saturday at 1pm you might be pissed off about waiting 30 minutes to reach the cashier, but at the same time that is exactly what you could expect and plan for. This timespan is what I call the EWT.

Take any website nowadays. I’d say that, aside from the Google start page, you probably won’t get anywhere in a reasonable amount of time. You expect the site to load but it doesn’t even show up, so the EWT was not fulfilled (you might have expected it to take 30s, but not to wait a full minute only to see a blank screen. Yeah, thanks for nothing).

I don’t have the expectation that anything should work with GPRS and 2G. However, I do expect to at least be told “Experiencing bad connection. Please try again later”. Totally fine.

The GPRS EWT for my portfolio was around 20s total, as I felt that this is a timespan I would actually wait when I am aware of the fact that I am currently on the worst connection possible and I can still see things starting to load.

Now, that was about setting the baseline.

Let’s drill down into the technologies:

1. The package.json:

You can skip sections 1 and 2 if you only care about the performance part.

I installed 2 @types packages so that my editor and the compiler help with proper type hinting.

browser-sync was used to run an instant live-reload server with throttle capabilities (see further below).

With filewatcher I was able to recompile whenever files changed. With html-minifier and terser I created a minimal --production build, and I used include-tag to be able to include HTML in HTML with simple variables, without a full-fledged templating system.

{
  "devDependencies": {
    "@types/es6-promise": "^3.3.0",
    "@types/node": "^10.0.3",
    "browser-sync": "^2.24.4",
    "browser-sync-spa": "^1.0.3",
    "filewatcher": "^3.0.1",
    "html-minifier": "^3.5.21",
    "include-tag": "^1.1.0",
    "node-sass": "^4.9.0",
    "sass": "^1.3.1",
    "terser": "^3.14.1",
    "typescript": "^2.8.3"
  },
  "scripts": {
    "build": "./_dev/watch.sh --build-only --prod",
    "watch": "./_dev/watch.sh --watch"
  }
}

2. watch.sh:

The shell script isn’t really surprising. It runs filewatcher, typescript etc. based on the flags passed to it. For example:

# $WATCH_PARAM holds --watch or stays empty, depending on the flags
echo 'Starting TS watcher'
./node_modules/.bin/tsc $WATCH_PARAM -p ./tsconfig.json
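For completeness, the flag handling could look like this (a sketch, not the actual script):

WATCH_PARAM=''
PROD=''
for arg in "$@"; do
  case $arg in
    --watch) WATCH_PARAM='--watch' ;; # keep the compilers running
    --prod)  PROD='true' ;;           # minify via html-minifier/terser
  esac
done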

3. JS Architecture and Build:

It’s a small portfolio so I wanted one JS bundle and I did not want to overcomplicate things with magic. That is one of the two reasons why I used TypeScript. The TS compiler allows you to build bundles, and the tsconfig.json let me choose the build target, so I didn’t even have to think about a bundling system whilst still being able to structure my files. Abusing TypeScript as a build system is awesome ❤️.

/// <reference path='history.ts' />
/// <reference path='work.controller.ts' />
/// <reference path='routes.ts' />
/// <reference path='breakpointSetups.ts' />
/// <reference path='scrollwatcher.ts' />

/**
here goes code from my index.ts file
*/
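The bundling itself is driven by tsconfig.json via the outFile option. A minimal sketch (the target and paths here are assumptions, not my exact config):

{
  "compilerOptions": {
    "target": "es5",        // the chosen build target
    "module": "none",       // outFile concatenation works without a module system
    "outFile": "./index.js" // all /// <reference>d files end up in one bundle
  }
}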

In each of the files I defined global functions and global variables. Why? It saves the additional closure function for scoping the variables. If you are scared of doing that, just wrap the resulting code in a function after compilation. TypeScript helped me not to lose the overview.

Example:

// source version index.ts:
// converts the NodeList returned by querySelectorAll into a real array
const query = (selector: string, rootElem?: HTMLElement) => {
  return Array.prototype.map.call(
    (rootElem || document).querySelectorAll(selector),
    (d: Element) => d
  );
};

To get the best overall minification it is necessary to abstract everything into function calls when it is used more than once (see the query example above; don’t call .querySelector manually, as it’s an object property which cannot be minified).
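To illustrate (a sketch; the exact output depends on terser and assumes its top-level name mangling is enabled):

// Property names survive minification, so this stays verbose:
document.querySelectorAll('.foo');
document.querySelectorAll('.bar');

// A wrapper function, however, can be mangled to a single letter,
// so repeated calls shrink to something like: q('.foo');q('.bar');
query('.foo');
query('.bar');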

4. Critical and non-blocking JS:

A critical part of the JS is inlined at the bottom of the page (bottom = ensure that the DOM is available first); the rest is loaded like this:

var s = document.createElement('script');
s.async = true;
s.defer = true;
s.src = '/index.min.js';
s.onload = function() {
  init_resolve();
};
document.head.appendChild(s);
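The init_resolve() call is the other half of a promise-based handshake; a minimal sketch of what the inlined critical part could look like (an assumption for illustration):

// Sketch: expose a resolver so the loader above can signal readiness.
var init_resolve;
var initialized = new Promise(function (resolve) {
  init_resolve = resolve;
});
initialized.then(function () {
  // anything defined in index.min.js is safe to use from here on
});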

5. Instant images; Lazy loading and LQIP/SQIP:

LQIP (Low Quality Image Placeholder) is a way to preload an image in extremely low quality to act as a placeholder for the actual image and provide basic image information such as the average color. SQIP is its SVG-based successor.

sqip inputImage will output an <img> with a data:image/svg+xml;base64 definition which is ~1–2kb and can easily be inlined, as compared to 100kb+ raw images.

You then lazy load only the actually visible areas. E.g. when the user navigates through my previews there is a good chance that they will see the low quality image first while the JS requests the raw image in the background and then fades it in. 🔥
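A minimal sketch of that swap (the data-src attribute and the loaded class are assumptions for illustration):

// Swap the inlined SQIP placeholder for the full image once it has loaded.
const lazyLoad = (img: HTMLImageElement) => {
  const full = new Image();
  full.src = img.dataset.src!;   // full-resolution URL kept in data-src
  full.onload = () => {
    img.src = full.src;          // replace the SVG placeholder
    img.classList.add('loaded'); // let CSS handle the fade-in
  };
};

query('img[data-src]').forEach(lazyLoad);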

6. Inlining CSS vs HTTP2:

Lately I’m hearing this too often: “HTTP/2 solves it.” That’s true to the point where you are requesting multiple resources at the same time. But the browser cannot request what it doesn’t even know about. I mean, how is your partner going to cook your favorite dish if you’ve never talked about it?

Long story short: HTML loads first, then comes CSS. This is bad if you want to improve the initial rendering behaviour. Inline critical CSS if possible.

Do not inline all CSS for bigger pages. It’s a misconception that it will definitely improve performance. I tried that and the actual performance went down, and so did my Lighthouse score. The additional parsing effort on the HTML slowed down rendering, and you pay that parsing cost again on every page. So it’s important to properly balance things out and really only inline the necessary CSS.
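The non-critical rest can then be appended from JS, the same way as the script in section 4 (a sketch; the file name is a placeholder):

// Dynamically added stylesheets don't block the first paint.
var l = document.createElement('link');
l.rel = 'stylesheet';
l.href = '/index.min.css'; // placeholder name
document.head.appendChild(l);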

7. Optimizing CSS usage via Chrome Coverage Tools:

After your page has made some progress you will definitely not remember which CSS is actually used and which is not.

Use the Coverage Tools from Chromium:

1. Press ⌘ + Shift + P in Chrome DevTools, type “Coverage” and press Enter to open the Coverage drawer.

2. Press the reload icon inside of the drawer (make sure your cache is deleted before reloading!).

(Screenshot: the reload icon in the Coverage tools.)

3. Click on the CSS file you want to check the coverage for. The green lines are the ones that indicate usage, up to this point in time. Everything else is red. However: the red ones are not yet confirmed to be unused!

(Screenshot: a first glimpse at the coverage right after the reload.)

4. You now need to resize your page and click through all the routes so that the coverage detector can do its work. You will see that many things that were red suddenly become green. (They turn green when the HTML is rendered and the according CSS is therefore used. Rendered != in DOM: if it sits in the DOM but is hidden, it is not rendered.)

5. After having done Step 4 thoroughly you might want to check the remaining red ones.

It might be that Chrome tells you that there are still a lot of unused bytes, but you have to understand that the current implementation shows everything as red that is not actively parsed and part of the painting. So e.g. the inside of a @media query can show green while the query itself, the @media line, shows red. This doesn’t mean it’s unused! The same, btw, happens if you have vendor prefixes in it. Obviously a Firefox vendor prefix will not be parsed in Chrome but might still be necessary. So be careful 🤓

(Screenshot: the inside of the media query is parsed and shows green, but the @media line itself shows red. This is the current behaviour of the coverage tools.)
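A made-up snippet showing the pattern:

/* The @media line itself can show red in the Coverage tools ... */
@media (max-width: 600px) {
  /* ... while the rule inside shows green once this breakpoint was rendered */
  .nav { display: none; }
}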

6. Go ahead and remove the red ones from your CSS that proved to be unused.

8. Mobile-first? Slow connection first!

Cool, so we go mobile-first, but the initial load takes something like 10s at optimal connection speed because we are loading assets as big as the chronicles of Lord of the Rings.

Calling it a mobile-first approach is therefore sometimes misleading. Call it Slow-Mobile-First and we are fine.

In fact, the perceived performance is the one you should care about first. If that is fine, then you can focus on improving the performance from a statistical point of view as well: beautifying the numbers.