Most web applications require URL parsing, whether it’s to extract the domain name, implement a REST API, or find an image path. A typical URL structure is described by the image below:

You can break a URL string into constituent parts using regular expressions, but it’s complicated and unnecessary.

Server-side URL Parsing

Node.js (and forks such as io.js) provides a URL API:

var urlapi = require('url'),
    url = urlapi.parse('http://site.com:81/path/page?a=1&b=2#hash');

console.log(
    url.href + '\n' +
    url.protocol + '\n' +
    url.hostname + '\n' +
    url.port + '\n' +
    url.pathname + '\n' +
    url.search + '\n' +
    url.hash
);

As you can see in the snippet above, the parse() method returns an object containing the data you need such as the protocol, the hostname, the port, and so on.

Client-side URL Parsing

There’s no equivalent API in the browser. But if there’s one thing browsers do well, it’s URL parsing, and all links in the DOM implement a similar Location interface, e.g.:

var url = document.getElementsByTagName('a')[0];

console.log(
    url.href + '\n' +
    url.protocol + '\n' +
    url.hostname + '\n' +
    url.port + '\n' +
    url.pathname + '\n' +
    url.search + '\n' +
    url.hash
);

If we have a URL string, we can assign it to an in-memory anchor element (a) so it can be parsed without regular expressions, e.g.:

var url = document.createElement('a');
url.href = 'http://site.com:81/path/page?a=1&b=2#hash';
console.log(url.hostname); // site.com

Isomorphic URL Parsing

Aurelio recently discussed isomorphic JavaScript applications. In essence, it’s progressive enhancement taken to an extreme level where an application will happily run on either the client or server. A user with a modern browser would use a single-page application. Older browsers and search engine bots would see a server-rendered alternative. In theory, an application could implement varying levels of client/server processing depending on the speed and bandwidth capabilities of the device.

Isomorphic JavaScript has been discussed for many years, but it’s complex. Few projects go further than implementing sharable views, and there aren’t many situations where standard progressive enhancement wouldn’t work just as well (if not better, given most “isomorphic” frameworks appear to fail without client-side JavaScript). That said, it’s possible to create environment-agnostic micro libraries which offer a tentative first step into isomorphic concepts.

Let’s consider how we could write a URL parsing library in a lib.js file. First we’ll detect where the code is running:

var isNode = (typeof module === 'object' && module.exports);

This isn’t particularly robust since you could have a module.exports function defined client-side but I don’t know of a better way (suggestions welcome). A similar approach used by other developers is to test for the presence of the window object:

var isNode = (typeof window === 'undefined');

Let’s now complete our lib.js code with a URLparse function:

var isNode = (typeof module === 'object' && module.exports);

(function(lib) {

  "use strict";

  // Node.js url module (server only)
  var url = (isNode ? require('url') : null);

  lib.URLparse = function(str) {

    if (isNode) {
      // server: use Node's url API
      return url.parse(str);
    }
    else {
      // browser: parse via an in-memory anchor element
      url = document.createElement('a');
      url.href = str;
      return url;
    }

  };

})(isNode ? module.exports : this.lib = {});

In this code I’ve used an isNode variable for clarity. However, you can avoid it by placing the test directly inside the last parenthesis of the snippet.

Server-side, URLparse is exported as a CommonJS module. To use it:

var lib = require('./lib.js'),
    url = lib.URLparse('http://site.com:81/path/page?a=1&b=2#hash');

console.log(
    url.href + '\n' +
    url.protocol + '\n' +
    url.hostname + '\n' +
    url.port + '\n' +
    url.pathname + '\n' +
    url.search + '\n' +
    url.hash
);

Client-side, URLparse is added as a method to the global lib object:

< script src = " ./lib.js " > </ script > < script > var url = lib . URLparse ( 'http://site.com:81/path/page?a=1&b=2#hash' ) ; console . log ( url . href + '

' + url . protocol + '

' + url . hostname + '

' + url . port + '

' + url . pathname + '

' + url . search + '

' + url . hash ) ; </ script >

Other than the library inclusion method, the client and server API is identical.

Admittedly, this is a simple example and URLparse runs (mostly) separate code on the client and server. But we have implemented a consistent API and it illustrates how JavaScript code can be written to run anywhere. We could extend the library to offer further client/server utility functions such as field validation, cookie parsing, date handling, currency formatting etc.

I’m not convinced full isomorphic applications are practical or possible given the differing types of logic required on the client and server. However, environment-agnostic libraries could ease the pain of having to write two sets of code to do the same thing.