We’re starting to get somewhere. At a very small scale, this is beginning to look like the magic that Disney’s animators envisioned.

IV. Introducing Smart Dust

Now that we’ve got the building blocks out of the way, and shown how evolution and biology are already using similar structures for one of Earth’s most important animals, we can move on to the fun stuff.

The concept of Smart Dust began in the early-to-mid ’90s as, you guessed it, a DARPA project exploring its potential military applications. Note that the military implications show up in Big Hero 6 as well.

The core concept of Smart Dust is very small, millimeter-scale sensors or, further into the future, computers that, when combined, can form more complex machines. A fractal computer, if you will.

In mid-2016, the University of Stuttgart released a paper showing how very small sensors can be combined to create ultra-high-resolution imagery: essentially Smart Dust as a combination of very tiny cameras. Imagine that for the Snap Spectacles of the future.

These micro-lenses, about the size of a grain of sand, can be 3D printed. That means no matter what kind of lens system or design you can imagine, you can make it with a simple CAD drawing.

Single lens (left) and multi-lens (right) showing the tiny scale of these new micro-vision sensors.

Of course, you could imagine all sorts of use cases for these things: recording the reality around you like Snap’s Spectacles, security and monitoring, imaging inside the body for future medical devices, all the way to self-driving cars with a layer of cameras embedded in the paint itself to help them find their way around the world.

You could also imagine that if you wrapped these smart lenses around a 3D object and paired them with a high-resolution display, you could essentially make the object appear invisible. By showing viewers in front exactly what the cameras see behind the object, the display would make the object seem to disappear.

Transparent glass iPhone 7, anyone?
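The projection trick above can be sketched in a few lines. This is a toy illustration only; the names and the dictionary-based "camera" and "display" are stand-ins, not any real hardware API:

```python
# Toy sketch of the "invisibility" idea: each pixel on the front-facing
# display shows whatever a rear-facing micro-camera captures along the
# same line of sight. All names here are illustrative placeholders.

def render_invisible(rear_view, display_width, display_height):
    """Map the rear camera mosaic 1:1 onto the front display.

    rear_view: dict mapping (x, y) pixel coordinates to an (r, g, b)
    color captured by the micro-camera behind that point.
    Returns the frame to draw on the front-facing display.
    """
    frame = {}
    for x in range(display_width):
        for y in range(display_height):
            # For a viewer looking straight on, the object "disappears"
            # because the display repeats the scene behind it.
            frame[(x, y)] = rear_view.get((x, y), (0, 0, 0))
    return frame

# A 2x2 example: the front display simply repeats the rear scene.
scene_behind = {(0, 0): (10, 20, 30), (1, 0): (40, 50, 60),
                (0, 1): (70, 80, 90), (1, 1): (100, 110, 120)}
frame = render_invisible(scene_behind, 2, 2)
```

A real system would also have to correct for the viewer’s angle and parallax, which is why this only works cleanly for a head-on viewpoint in the sketch.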

V. 3D and 4D Printing

What you’ve seen above can be made using a comparatively low-cost 3D printer: a Nanoscribe laser lithography 3D printer, to use the fancy term. Here’s a video of that in action (check out that detail!).

As you can see, we’re talking about 3D printing close to the molecular scale, down to a precision of about a single nanometer. Recall the 1-nanometer transistor described above: you could imagine a very real leap where we have embedded SoCs as part of these imaging sensors, combined in a graphene honeycomb pattern, to give us something like we’ve seen in Big Hero 6.

Of course, your next question is going to be: how do we make something that can change shape, combine, uncombine, and move after we’ve already built it?

For that we need to extend 3D printing to 4D. The fourth dimension is time: the object moves into new shapes after it’s already been printed. Two videos from our friends at the MIT Media Lab show this interaction taking place.

But first, back to our graphene-honeycomb shape. You will see this shape a lot in the coming decades, so you should probably start getting used to it now.

And the second video shows the fractal nature of this: print one honeycomb, then let it combine with other honeycombs automatically.

Eventually, you want to program exactly how these things come together so it happens much more quickly and for a specific purpose. The “job to be done”, in product parlance.
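The difference between random combination and programmed assembly can be sketched as a toy assignment problem. Everything here is illustrative (there is no real microbot API implied); the "program" is just a rule sending each free unit to the nearest open slot in a target shape:

```python
# Toy sketch of "programmed" self-assembly: instead of letting units
# combine at random, each free unit is assigned the nearest unfilled
# slot in a target shape. Purely illustrative names and coordinates.

def assemble(units, target_slots):
    """Greedily match each unit (x, y) to its closest open slot."""
    remaining = list(target_slots)
    plan = {}
    for unit in units:
        # Pick the nearest open slot by squared distance.
        slot = min(remaining,
                   key=lambda s: (s[0] - unit[0]) ** 2 + (s[1] - unit[1]) ** 2)
        remaining.remove(slot)
        plan[unit] = slot
    return plan

# Three scattered units assembling into a short line along the x-axis.
plan = assemble(units=[(5, 0), (0, 4), (9, 9)],
                target_slots=[(0, 0), (1, 0), (2, 0)])
```

The greedy rule is the simplest possible "job to be done": a real system would need collision avoidance and coordination, but the idea of compiling a target shape into per-unit instructions is the same.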

For that, you’re going to need an operating system to control all the subsystems: something lightweight and low-power, yet parallel enough to run the artificial intelligence software that makes sense of all the information coming in from the sensors. Essentially, a very scaled-down sensory-input-to-motor-output robot.
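That sensory-input-to-motor-output loop can be sketched in miniature. The function names, the sensor reading, and the threshold are all illustrative assumptions, not part of any real mote operating system:

```python
# A minimal sketch of the sense -> decide -> act loop such an operating
# system would run on each tiny mote. All names here are illustrative.

def sense():
    """Stand-in for reading the mote's sensors (e.g. a light level)."""
    return {"light": 0.8}

def decide(reading, threshold=0.5):
    """Tiny 'AI' stand-in: turn raw sensor data into a motor command."""
    return "move_toward_light" if reading["light"] > threshold else "hold"

def act(command):
    """Stand-in for driving the actuators; here we just report it."""
    return f"actuator: {command}"

def tick():
    """One cycle of the sensory-input-to-motor-output loop."""
    return act(decide(sense()))

print(tick())  # -> actuator: move_toward_light
```

On real hardware each stage would be an event handler rather than a blocking loop, which is exactly the kind of scheduling an OS like TinyOS is built around.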

Luckily, one such project has been under active development for years, called simply TinyOS. It’s hosted on GitHub and still maintained. Sadly, activity has slowed over the last few years, but its time will come.

Put all these pieces together and you’re talking about building a Robot Explorer that can change shape to handle any situation or environment. The robots of the future will look nothing like the one Robin Williams portrayed in Bicentennial Man. Rather, they will look more like Big Hero 6’s Microbots that shift into humanoid form, fly along the wind like Michael Crichton’s Prey, or transform into 3D letters or shapes for communication with alien species, as in the more recent Arrival.