“Google’s self-driving car gathers 750 megabytes of sensor data per SECOND! That is just mind-boggling to me. Here is a picture of what the car ‘sees’ while it is driving and about to make a left turn. It is capturing every single thing that it sees moving — cars, trucks, birds, rolling balls, dropped cigarette butts — and fusing all that together to make its decisions while driving. If it sees a cigarette butt, it knows a person might be creeping out from between cars. If it sees a rolling ball, it knows a child might run out from a driveway. I am truly stunned by how impressive an achievement this is.” — Idealab founder/CEO Bill Gross
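To get a feel for the scale of that 750 MB/s figure, here is a quick back-of-envelope calculation (assuming the quoted rate holds continuously, which is my simplification, not a claim from Google):

```python
# Rough scale of a 750 MB/s sensor stream (rate as quoted by Bill Gross).
MB_PER_SECOND = 750
SECONDS_PER_HOUR = 3600

mb_per_hour = MB_PER_SECOND * SECONDS_PER_HOUR  # megabytes per hour of driving
tb_per_hour = mb_per_hour / 1_000_000           # decimal terabytes per hour

print(f"{mb_per_hour:,} MB/hour = {tb_per_hour} TB/hour")
```

That works out to about 2.7 terabytes for every hour of driving, which is why nearly all of it has to be processed and discarded on board rather than stored or uploaded.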

Add to that real-time data from Street View, GPS, and Google Maps (as shown below, from a recent Google patent award) and you’ve got one humongous graphics-processing system on board.

Now what if some elements of all this data could also be projected onto a special windshield display, or onto Google Glass, for driver override when needed? Add to that: weather and traffic reports for the road ahead, police-scanner data (to avoid a road chase in progress, let’s say), news reports mentioning local events, oh, and Yelp reviews, and Find My Friends or Latitude popup pics. Then throw in a live HD action cam (I’m experimenting with a Contour+2 with live HDMI streaming; I need one more for rear-view shots, more on that later), and what about a panorama cam, and … OK, you get the idea. (Would adding an Oculus Rift be over the top?)