Okay: work with me. Suppose there was a digital camera that had two separate CMOS chips, or whatever the cool kids are using these days (image sensors; a.k.a. digital film). So there are two separate, independent image sensors, and behind the lens there's a prism that splits the incoming light (the picture information) in two and sends it to the two sensors. Both sensors get the same amount of light/picture information; the difference is that one of them has an infrared (IR) filter and the other doesn't.

SO: when you press the shutter button to take a picture, instead of a regular, bright, obnoxious flash going off, there's an invisible burst of IR light provided by a camera-embedded IR LED bank. The light information (what constitutes a picture) is split by the prism and sent to the two image sensors. The one without the IR filter picks up the reflected light from the IR flash and uses that information as a luminance guide, while the filtered sensor captures whatever visible-spectrum (color) information is available. Then a simple photoshop-esque algorithm takes the luminance information provided by the IR-filter-free sensor and overlays it on the color information provided by the IR-filtered sensor. The result would be a picture that should, for all intents and purposes, look like a picture taken with a flash on, but without the bright, makes-you-blink, annoying flash found on every camera today.
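For the curious, here's a minimal sketch of what that photoshop-esque fusion step could look like. It assumes the two sensor images are already aligned and normalized to [0, 1] (the prism should take care of alignment for free), and it uses a BT.601 luma/chroma split, which is just one reasonable choice; the function name and everything else here is made up for illustration.

```python
import numpy as np

def fuse_ir_luminance(color_rgb, ir_lum):
    """Overlay IR-flash luminance onto the visible-light color image.

    color_rgb: HxWx3 float array in [0, 1] from the IR-filtered sensor.
    ir_lum:    HxW   float array in [0, 1] from the filter-free sensor.
    """
    # Split the color image into luma (Y) and chroma (Cb, Cr) -- BT.601 weights.
    r, g, b = color_rgb[..., 0], color_rgb[..., 1], color_rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    # Swap the dim ambient luma for the IR-flash luma, keep the chroma...
    y = ir_lum
    # ...and convert back to RGB.
    r = y + 1.403 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.773 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

A real camera would do this in hardware on the raw sensor data, and would probably blend the two luminance signals rather than replacing one outright (IR reflectance isn't identical to visible reflectance, so skin and fabric can look a bit off), but the basic idea is just this channel swap.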

Yeah.