That’s because plants ‘consume’ most visible light for photosynthesis, so there isn’t a whole lot left to see. The graph below shows how a potato plant changes as it gets sicker. All the interesting stuff happens in the >700 nm region, just beyond what the human eye can see.

How the ‘colour’ of a potato plant changes as it gets infected. Image from a research paper; the colour coding is arbitrary.

In the image above, the bump between 500 nm and 600 nm is what makes plants look green. That bump reflects back only 15% of the light at its peak.

If we could see the infrared light bouncing off a plant, it would look incredibly bright, like snow on a sunny day.

The market failure

We wanted a camera that could capture this information, but the market for infrared cameras is an absolute failure. The ones that exist cost a fortune, and they aren’t even good: most use old 1–2 megapixel sensors, meaning you would have to fly the drone really low and spend a lot of time just to cover one field.

There is absolutely no reason for it to be this way. The normal camera sensors that we produce by the billion and stick in every mobile phone can detect near infrared, along with red, green, blue and UV, all at the same time. The only reason cameras produce colour pictures instead of a strange mess is that we put filters in front of these sensors.

One, called a hot mirror, is a piece of glass placed between the sensor and the lens to block infrared light. Another, called the Bayer filter, separates the remaining light into three main colours. It is applied directly to the sensor and cannot be removed.

On the graph, the grey line shows what kind of light an image sensor detects ‘naturally’, and the other lines show what the filters do.

The Bayer filter forms a pattern that allows ‘normal’ cameras to produce colour images. Each square is a pixel. Illustration from Wikipedia.
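As an aside for the curious: the raw frame off a Bayer sensor is a single-channel mosaic, and ‘demosaicing’ is what turns it into a colour picture. A minimal sketch using OpenCV (the resolution and the RGGB pattern here are assumptions, not tied to any particular camera):

```python
import cv2
import numpy as np

# Hypothetical raw mosaic straight off the sensor: one value per pixel,
# each seen through a single red, green or blue square of the Bayer filter.
raw = np.fromfile("frame.raw", dtype=np.uint8).reshape(1024, 1280)

# Demosaicing interpolates the two missing colours at every pixel.
# The conversion code must match the sensor's actual filter layout.
colour = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
cv2.imwrite("frame_colour.png", colour)
```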

All you have to do to image infrared is change the filters. Because no major company is doing it, a cottage industry has sprung up: people take apart semi-professional photo cameras, remove the hot mirror and use them on drones. You can’t remove the Bayer filter, and it still gets in the way, but the quality and resolution of image sensors in ‘prosumer’ cameras have gotten so good that the results aren’t bad at all. I’ve written about this previously.

The idea behind Anywave

Sony and other sensor manufacturers actually sell monochrome versions of their sensors: the same chips, just without the Bayer filter. You could buy one of those sensors and use it to build a proper infrared camera.

One of the smallest machine vision cameras

If you have a few hundred thousand pounds and hard-core hardware developers, you would develop custom electronics that control the image sensor, read the image, save it to an SD card, and so on. That’s how all the ‘normal’ cameras are made, and of course we could not afford that.

Fortunately, there is also a market for something called machine vision cameras. You could think of them as webcams on steroids: companies like Point Grey, IDS Imaging or Ximea buy sensors, add some relatively simple control electronics and an interface to plug them into a computer.

They also provide an SDK that lets you control all the possible settings. These cameras are usually used for factory automation or robots.
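To give a sense of how little code it takes to drive one of these cameras, here is a minimal capture sketch using Ximea’s Python binding, xiapi (the exposure value is a placeholder, not a setting we actually used):

```python
from ximea import xiapi

cam = xiapi.Camera()
cam.open_device()            # opens the first Ximea camera found
cam.set_exposure(10000)      # exposure time in microseconds (placeholder)

img = xiapi.Image()
cam.start_acquisition()
cam.get_image(img)           # blocks until a frame arrives
frame = img.get_image_data_numpy()  # the frame as a numpy array

cam.stop_acquisition()
cam.close_device()
```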

Previously these cameras required a special Camera Link interface, but recently the industry adopted USB 3 thanks to its great performance, so now you can connect a machine vision camera to any computer without any specialised adaptors. At the same time, in the wake of the Raspberry Pi there was an explosion of miniature computers. We realised that for the first time you could build a viable imaging system just by writing software, making it cheaper and simpler than ever before. Because software would be doing all the work, you could change the camera model, resolution, number of cameras, data format, etc. at any time. We went for it, and called the project Anywave.

Proving the concept

Unlike Sony, we can’t place a different filter onto each pixel to get colours (more appropriately called channels). Instead, we have to get several cameras and place a different filter onto each camera. We got our cameras from Ximea (they seem to have the smallest ones) and filters from Vision Light Tech, who were kind enough to make us a custom filter of any shape we asked for.

We started with two channels, infrared and red, because they are commonly used to estimate leaf cover. Cameras and lenses arrived in the post, and I got to work.
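The usual way to combine those two channels is the Normalised Difference Vegetation Index (NDVI). A minimal sketch, assuming the two frames arrive as aligned, same-sized numpy arrays:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index.

    Healthy leaves reflect near-infrared strongly and absorb red,
    so NDVI climbs towards 1 over dense canopy and sits near 0
    over bare soil.
    """
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)  # epsilon avoids division by zero
```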

Even though I had little experience, the coding went easily and mostly worked. It was also important to align the two cameras accurately; you couldn’t just tape them to a block of wood. I tried, and the images were badly out of alignment.

Initially I taped them to this block of wood. It didn’t work.

I took to 3D printing. Autodesk is kind enough to provide students with their entire CAD suite for free, and I was fortunate to have registered back when I was still at university. After spending considerable time figuring out what the hell was what in their catalogue, I tried Inventor and got the hang of it.

I struggled with the mechanical side: I got lens threads and sizing wrong, and things that I thought would fit together didn’t. Much money and duct tape was wasted.

There are plenty of 3D printing companies out there, and innovative Birmingham was home to zero of them at the time. That added delay to my process of education by trial and error. Nevertheless, a few weeks later and a few hundred quid poorer, we had what we needed.

After a brief foray into 3D printing we had this masterpiece!

The two cameras were about two dozen pixels out of alignment, and it turns out that’s close enough to be corrected digitally. Below is a false-colour image: infrared light is saved into what’s normally the red channel, so you see it as red; red light is saved into the green channel; the blue channel is all zeroes.

One of the first images we produced
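A minimal sketch of that correction and channel mapping, assuming the misalignment is a constant (dx, dy) pixel offset measured by eye (in reality, lens differences make it messier):

```python
import cv2
import numpy as np

def false_colour(nir, red, dx, dy):
    """Shift the red frame by a fixed offset to line it up with the
    NIR frame, then build the false-colour composite described above."""
    h, w = nir.shape
    shift = np.float32([[1, 0, dx], [0, 1, dy]])   # pure translation
    red_aligned = cv2.warpAffine(red, shift, (w, h))

    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = nir          # infrared shown as red
    rgb[..., 1] = red_aligned  # red shown as green
    return rgb                 # blue channel stays all zeroes
```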

The separation of channels on the window frame is caused by parallax; it becomes negligibly small on distant objects.
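To see why, here is a back-of-the-envelope check (the focal length and camera spacing below are made-up but plausible numbers, not our actual build):

```python
# disparity in pixels ~ focal length (pixels) * camera spacing / distance
f_px = 1400      # assumed focal length, in pixels
baseline = 0.03  # assumed 3 cm between the two lenses, in metres

for z in (1.5, 50.0):  # a window frame indoors vs a crop 50 m below a drone
    print(f"{z:5.1f} m -> {f_px * baseline / z:5.1f} px of disparity")
# ~28 px up close, under 1 px from flight altitude
```

So now we had a camera that worked in principle; it was time to take it to the ‘real world’.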

Proving it in flight

We wanted to get infrared images of an actual farm. The trouble was, we had never seen a drone, knew no farmers, and still needed to put the cameras and a computer into a package that could be attached to a drone.

How maps are made

A flying camera takes a series of pictures

You can use a “flying camera” to take a series of images and produce a 3D map. The technique is known as Structure from Motion and is widely used to produce 3D models for Google Maps from aerial photography.

Software packages such as Agisoft PhotoScan process the photographs and produce a 3D model and a 2D map. These can then be exported to ordinary mapping software like QGIS.

To produce a good 3D map you need a lot of images taken regularly. Yellow lines show the flight path.

An example of a 3D map. You can find many more impressive ones online

Design of the camera system

I came up with the following system that you could put on a drone:

A few 18650 batteries and a voltage regulator would provide power to a tiny single-board computer

I wrote an imaging program that would run as soon as the embedded computer starts. It could control the cameras, take images and save them.

The same program would broadcast a Wi-Fi network

A laptop would connect to the network

I wrote a second program that acts as a ‘Control Panel’. You could use it to issue commands to the embedded computer, start imaging, etc. (a toy sketch of this command loop follows the list).

I made a point of finding a computer where the flash drive was protected against data corruption due to power loss. That was annoyingly hard, because few manufacturers specify this.
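Here is that toy sketch of the command loop on the embedded side (the port, the commands and the `camera_rig` interface are all hypothetical; the real program did rather more):

```python
import socket

HOST, PORT = "0.0.0.0", 5000  # hypothetical port on the embedded computer

def serve(camera_rig):
    """Listen on the broadcast Wi-Fi network and react to one-word
    commands sent from the 'Control Panel' on the laptop."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                cmd = conn.recv(64).decode().strip()
                if cmd == "start":
                    camera_rig.start_imaging()  # hypothetical rig interface
                elif cmd == "stop":
                    camera_rig.stop_imaging()
                conn.sendall(b"ok\n")
```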

Getting the right people

The first order of business was to get an aircraft and a pilot. We would not get by with some kind of DJI Phantom; we needed a big industrial drone to lift all the gear. If we bought one, that would be all the money we had, and most likely we would get it stuck in the nearest tree.

Before graduating, we took part in a ‘business idea’ competition amongst students. One of the judges told us about a developer hangout in Birmingham. There, in an incredible stroke of luck, we met a guy who knew a guy who knew Manuel. At the time, Manuel was burning through his savings creating a drone startup. He had piloting skills, high-end gear and experience. Previously he had worked with a charity in Brazil, using drones to map the Atlantic Forest to detect illegal logging. We hit it off right away.