The 'rolling shutter' is a problem I had often heard about when it came to video on DSLRs: CMOS sensors capture their data row by row, from top to bottom, so if the camera operator pans from left to right too quickly (especially if there are vertical lines in the shot), the bottom of the frame can't keep up with the top, and vertical lines end up skewed into diagonals. But here's the thing: now that electronic shutters are being implemented on CMOS sensors, we're starting to see this problem infiltrate photography in addition to videography.
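If you want to see why a pan turns vertical lines into diagonals, here's a toy sketch (every number is made up for illustration, not pulled from any real camera): each row is read a little later than the row above it, and the subject has moved by the time the lower rows are sampled.

```python
# Toy illustration of rolling-shutter skew: a vertical line photographed
# while panning. All values below are invented for demonstration.

ROWS = 8                  # rows in our tiny pretend sensor
READOUT_TIME = 1 / 250    # time to scan the whole sensor, top to bottom (s)
PAN_SPEED = 2000          # apparent subject motion, in pixels per second
LINE_X = 10               # where the vertical line starts in the frame

# Each row is sampled at a slightly later time, so the line has
# shifted a little further by the time that row gets read out.
offsets = []
for row in range(ROWS):
    t = row * READOUT_TIME / ROWS        # when this row is sampled
    offsets.append(LINE_X + int(PAN_SPEED * t))

for x in offsets:
    print(" " * x + "|")                 # the "vertical" line leans over
```

Run it and the line that should print straight down the frame steps sideways one row at a time, which is exactly the skew you see in rolling-shutter video.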

Back to the Flash:

So how does the rolling shutter affect the flash? As it was explained to me (link), and further explained in about a billion other posts online, the flash's duration is incredibly short. It can vary from a full-power duration of about 1/250 sec down to about 1/23,000 sec at 1/128th power. You read that right: 1/23,000. Since a CMOS sensor with an electronic shutter can't capture the whole frame at once (unlike a CCD sensor - I'll get to that in a minute), it's still scanning rows when the flash fires. In most cases, the top part of the frame would be illuminated by the flash's light, but the bottom would never get a chance to see it, because by the time it's being exposed, the flash has already completed its firing sequence. As some might know, one of the most popular DSLRs with an electronic shutter option that DID work with flashes was the Nikon D70s - one could sync flashes in Auto FP mode and achieve flash sync speeds above 1/500 sec with ease. But that's because the image sensor in the D70s was a CCD, not a CMOS. CCD sensors store the captured image directly on the sensor all at once, so every pixel is exposed to the same light at the same time.
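You can sketch that timing problem in a few lines of code. Here's a toy model (the sensor size, readout time, shutter speed, and flash timing are all invented for illustration; the ~1/23,000 sec flash duration matches the 1/128th-power figure above): only the rows whose exposure window happens to overlap the brief flash pulse get any of its light.

```python
# Toy model of a rolling-shutter readout meeting a short flash pulse.
# All of these numbers are illustrative assumptions, not real camera specs.

ROWS = 24                   # rows in our toy sensor
READOUT_TIME = 1 / 250      # time to scan the sensor top to bottom (s)
EXPOSURE = 1 / 4000         # how long each row exposes (shutter speed)
FLASH_START = 0.0005        # when the flash fires, after row 0 opens (s)
FLASH_DURATION = 1 / 23000  # roughly the 1/128th-power duration above

def row_sees_flash(row):
    """A row catches the flash only if its exposure window overlaps the pulse."""
    open_t = row * READOUT_TIME / ROWS   # when this row starts exposing
    close_t = open_t + EXPOSURE          # when this row stops exposing
    flash_end = FLASH_START + FLASH_DURATION
    return open_t < flash_end and close_t > FLASH_START

for r in range(ROWS):
    print(f"row {r:2d}: {'LIT' if row_sees_flash(r) else 'dark'}")
```

With these numbers, only a narrow band of rows near the top prints LIT and everything else stays dark - the code-sketch version of a frame that's flash-lit in one stripe and black everywhere else.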

So What Do We Do Now?

We wait. There's really not much to do other than wait for technology to catch up to our needs. The main hurdle is giving the CMOS sensor the ability to store the captured light all at once (a global shutter) instead of via a rolling scan. That can be done, but it takes up precious physical space on the sensor and would reduce the sensor's light sensitivity, all to accomplish this one very specific task. I'd argue that light sensitivity is more important than being able to capture the entire frame all at once - we only notice the tradeoff at very high shutter speeds and when we can't sync a flash, and that's about it. I'd rather give up those two abilities than have all of my images take a hit in image quality. I can be patient and wait for technology to catch up, but I can promise you I'll be first in line to try it out once it reaches the market.