There’s just been some pretty startling research published by a university in the UK which could herald the biggest change to imaging since the switch from film to digital.

Indeed, before it has even begun, 4K may become obsolete along with resolution itself – killed not by 8K or Super Hi-Vision but by a completely different kind of technology: a vector-based video codec developed at the University of Bath.

Until now, vectors have been good at wireframe objects but not photorealism. The team at Bath have developed a new photorealistic fill method to ‘paint’ in the areas defined by the vectors.

Unlike bitmap pixel images, vector shapes can be scaled up with no loss of quality, since the mathematics behind vectors simply defines the point-to-point coordinates of an object. One example of existing vector-based imaging is the typeface on a computer – when a font scales up it doesn’t become pixelated.
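To see why that scaling is lossless, here is a minimal sketch (purely illustrative – this is not the Bath codec): a vector shape is just a list of coordinates, so scaling it means multiplying every point by a factor, with no resampling or interpolation involved.

```python
def scale_path(points, factor):
    """Scale a vector path (a list of (x, y) control points) by a factor.

    Because the shape is stored as coordinates rather than pixels,
    scaling is exact at any factor -- nothing is resampled or lost.
    """
    return [(x * factor, y * factor) for (x, y) in points]

# A triangle defined by three points. Its "resolution" is whatever we
# choose to render it at; the description itself never degrades.
triangle = [(0, 0), (4, 0), (2, 3)]

print(scale_path(triangle, 10))   # same shape, ten times larger
print(scale_path(triangle, 0.5))  # scaling down is equally exact
```

A bitmap, by contrast, would need to invent new pixel values when enlarged, which is where the familiar blur and blockiness come from.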

Video games also use vector-based 3D engines, but fill their shapes with bitmap textures.

The team at Bath have found a vector-based way of filling and texturing. According to the Bath press release:

“Until now there has not been a way to choose and fill between the [vector] contours at professional quality. The Bath team has finally solved these problems. A codec is a computer programme capable of encoding or decoding a digital video stream. The researchers at Bath have developed a new, highly sophisticated codec which is able to create and fill between contours, overcoming the problems preventing their widespread use. The result is a resolution-independent form of movie or image, capable of the highest visual quality but without a pixel in sight.”

Technical research: 2012 whitepaper at Bath University

The codec is resolution independent and can be scaled with minimal loss of quality. It may herald a new way of measuring image fidelity in vector complexity (polygon count, for example) rather than in megapixels.

The codec uses a lot of complex mathematics and is likely very CPU intensive compared to bitmap pixel-based codecs like H.264, but I can see it coming to video cameras in the future. Imagine having an image where resolution is basically a non-issue, and a codec which takes up the same amount of space whether delivering a VGA image or an 8K image.
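The storage argument is easy to demonstrate with a toy comparison (a hedged sketch, not the actual Bath codec): the stored description of a vector scene stays the same size no matter what resolution it is later rendered at, whereas an uncompressed bitmap's size grows with every pixel.

```python
import json

# A toy vector "scene": one circle described in normalised coordinates.
# Its serialised size is fixed, regardless of eventual render resolution.
scene = {"shapes": [{"type": "circle", "cx": 0.5, "cy": 0.5, "r": 0.25}]}
encoded = json.dumps(scene).encode()

def raster_size(width, height):
    """Bytes needed for one uncompressed 24-bit bitmap frame."""
    return width * height * 3

print(len(encoded))             # vector description: a few dozen bytes
print(raster_size(640, 480))    # VGA frame: ~900 KB
print(raster_size(7680, 4320))  # 8K frame: ~95 MB
```

Real codecs compress heavily, of course, but the principle holds: the cost of a vector stream scales with scene complexity, not with output resolution.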

The question is – could a new type of image sensor technology be developed to ‘see’ in vectors, to generate vectors from light onboard the sensor itself? That could be the future. My guess is that it is still very early days for vector imaging technology. I expect the commercial demand for this early codec to be based around web video and streaming TV on demand – I doubt that the first version will deliver the quality of H.264, but it may be able to deliver video at very low data rates over the internet.