I haven't worked with camera sensors much professionally (though I've messed with them for fun), so you might be right.



But generally speaking, with other sensors, the edge cases are a pain when you're trying to run classification algorithms. I'm aware of two approaches to deal with them: stitch the sensor data together, or create an ignore section at the edges, which requires overlap from the next sensor. In our case we were stitching partly for visualization for the operator.
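As a rough sketch of what I mean by the "ignore section" approach: detections that touch a sensor's edge strip get dropped, on the assumption that an overlapping neighbor sensor sees the full object. All the names and numbers here are made up for illustration, not from any real system.

```python
# Hypothetical "ignore section" filter for edge detections.
# bbox = (x_min, x_max) in one sensor's pixel coordinates.
def keep_detection(bbox, sensor_width, margin):
    """Keep only detections fully inside the interior region,
    i.e. not intruding into the ignored edge strips."""
    x_min, x_max = bbox
    return x_min >= margin and x_max <= sensor_width - margin

# A 1000-px-wide sensor with a 50-px ignored strip on each edge.
detections = [(10, 120), (300, 420), (920, 990)]
kept = [d for d in detections if keep_detection(d, sensor_width=1000, margin=50)]
print(kept)  # only the fully interior detection survives: [(300, 420)]
```

Note this only works if the neighbor's overlap covers the ignored strip *plus* the full extent of the largest object you care about, which is exactly why the overlap ends up non-trivial.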



So you're right, you don't have to do it. But you will need non-trivial overlap, even in a moving-car scenario. In the above example, the objects it detects are stationary [parked cars]. Now imagine a moving car at the boundary between two cameras, driving toward you. With no overlap, you may not be able to classify it as a car; and if you keep all the edge data instead, you significantly increase your false-alarm rate.





So while I do agree with you, I somewhat stand by my original assessment.
