

I often chat with a great friend of mine about how, when discussing the “Internet of Things”, we don’t talk much about creating digital tools that adapt themselves to an “analog” world, so to speak. Let me explain:

When talking about a “smart home”, what usually comes to mind is a house full of sensors, motors, etc. A house with robotic vacuum cleaners, like the one on the right side of the picture above. A house that can open and close its curtains automatically, with sensors in the fridge that detect which foods are missing, and so on…

That would be the classic approach: you create relatively simple digital tools, such as an RFID chip placed in a product’s packaging so that the reader at the supermarket checkout can recognize how many items you put in the cart, calculate the total of your purchase, and finally debit the amount from your account.

However, a supermarket full of high-resolution cameras connected to a computer running an image-recognition algorithm with simple artificial intelligence could achieve a similar result. It would recognize you in the video, recognize the different products you picked up, look up the prices of these products (which would carry no chips at all) in a database, calculate the total, etc… All in an “analog” way, in the sense used here.
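A minimal sketch of the “chip-less checkout” idea, assuming some external vision model has already turned the camera footage into product labels. All names and prices below are made up for illustration; the point is only the database lookup and totaling step the text describes.

```python
# Hypothetical price database (product label -> price in cents).
# In the scenario above, these labels would come from an image-recognition
# system watching the cart; the products themselves carry no chips.
PRICE_DB = {
    "soda_can": 150,
    "bread": 320,
    "milk_1l": 210,
}

def total_purchase(detected_labels):
    """Sum the prices of every product the cameras recognized in the cart."""
    missing = [label for label in detected_labels if label not in PRICE_DB]
    if missing:
        raise KeyError(f"No price on record for: {missing}")
    return sum(PRICE_DB[label] for label in detected_labels)

cart = ["soda_can", "soda_can", "bread"]
print(total_purchase(cart))  # 150 + 150 + 320 = 620 cents
```

The hard part, of course, is the recognition itself; once the cameras produce labels, the “debit the amount” step is as trivial as with RFID.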

I think the main point is that there are other ways of collecting and analyzing data, even large amounts of it (“Big Data”), through audiovisual input from situations that would traditionally have been handled by other methods, such as putting disposable chips in the packaging of every product.

Well, let’s return to the example of the intelligent house mentioned at the beginning of the text. Imagine a house with robot vacuum cleaners, fans and air conditioners capable of turning themselves on when the temperature is high, etc, etc… The typical futuristic scenario of the “house of tomorrow”. In this scenario you have clearly “adapted the analog world to digital devices”.

However, we could also do the opposite and adapt “the digital devices to an analog world”. Imagine that instead of a home full of smart objects, you have a humanoid robot servant who sweeps the house, does the dishes, and cleans the floors using all the simple analog tools we humans have always used for housework. In a practical sense, the effect of both technologies is very similar:

“All of a sudden these objects are going to be endowed with agency, they’re going to be endowed with the ability to give us feedback. The world is going to become intelligent and responsive and going to anticipate our needs…”



A quote from the video “What Is The Internet Of Things?”, in which Jason Silva addresses the Internet of Things. Although the video approaches the idea through its traditional concept, it is curious that the end result of both approaches is often similar.

Imagine that, instead of a smart fridge stocked with chipped products, you have a smart contact lens that records your whole day (I recommend the episode “The Entire History of You”, from the fantastic series Black Mirror, which tackles this idea). That contact lens would be connected to a computer running an artificial intelligence which, again relying on audiovisual input, could quite effectively track which products in your fridge have run out or are about to run out.

If you lived alone, the reliability of its deductions would increase even further, since in theory no one else would open your fridge while you were away.

And if you were at home and a visitor opened your fridge to grab a can of soda, for example, your virtual assistant could deduce the action indirectly, even if you hadn’t seen the exact moment the fridge was opened: say, when you later glanced at the trash can and a soda can was there. And if you lived with someone, the audiovisual records from the different smart contact lenses of the people living in your house could be combined to create a more efficient and complete narrative of items entering and leaving the fridge (privacy issues aside).
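The multi-observer fridge idea above can be sketched very simply, assuming each person’s lens has already been reduced to a timestamped log of items seen entering (+1) or leaving (-1) the fridge. Everything here (the event format, the initial stock) is hypothetical illustration, not a real system.

```python
from collections import Counter

def merge_fridge_logs(*logs):
    """Combine per-person event logs into one timeline, sorted by time.

    Each event is a tuple: (timestamp, item, delta), delta being +1 or -1.
    """
    events = [event for log in logs for event in log]
    events.sort(key=lambda event: event[0])
    return events

def current_stock(events, initial=None):
    """Replay the merged +1/-1 events on top of an initial inventory."""
    stock = Counter(initial or {})
    for _ts, item, delta in events:
        stock[item] += delta
    return {item: count for item, count in stock.items() if count > 0}

your_log    = [(10, "soda", -1)]  # your lens saw a soda leave
partner_log = [(12, "soda", -1)]  # your partner's lens saw another one go
merged = merge_fridge_logs(your_log, partner_log)
print(current_stock(merged, initial={"soda": 3, "milk": 1}))
# {'soda': 1, 'milk': 1}
```

Each observer alone would overcount the remaining sodas; merging the logs is what makes the combined narrative more complete, exactly as described above.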

And of course, if we had a robot with artificial intelligence, it could simply open the fridge and check the missing products itself; in that case it wouldn’t depend on the audiovisual input from smart contact lenses at all.

If I had to define what I am proposing in a simple way, it is that this type of technology would let us catalog, indirectly and relatively efficiently, everything we see, without necessarily needing sensors in things. And without necessarily needing to pay attention to the details, because a virtual assistant would analyze all the audiovisual data that our eyes capture at some point (but that we do not analyze in depth).

Imagine that you picked up a product at the supermarket, and your eyes saw, for a fraction of a second, its expiration date. You weren’t focused on that part of the package, however, so you simply aren’t aware that the product had expired. An artificial intelligence monitoring everything you saw could simply warn you of that fact. Notice that all the information in this situation was obtained indirectly.
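Once the assistant has (somehow) read the date your eyes glimpsed, the warning itself is a trivial comparison you never had to make consciously. A small sketch, with the product name and dates invented for illustration:

```python
from datetime import date

def expiry_warning(product, expires_on, today=None):
    """Warn if a glimpsed expiration date is already in the past."""
    today = today or date.today()
    if expires_on < today:
        return f"Warning: {product} expired on {expires_on.isoformat()}."
    return None

print(expiry_warning("yogurt", date(2017, 1, 5), today=date(2017, 2, 1)))
# Warning: yogurt expired on 2017-01-05.
```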

Plus, we could, for example, use this technology to remind us where we left things. Imagine that you don’t remember where you left your phone. You could just ask,

“Cortana, check out my audiovisual database. See the last time my smartphone showed up.”
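That kind of query could be sketched as a search over timestamped object detections, assuming the lens footage has already been reduced to records of what was seen, when, and where (all the records below are made up):

```python
# Hypothetical detection log: (timestamp, object_label, place_seen).
SIGHTINGS = [
    (1001, "smartphone", "kitchen table"),
    (1042, "keys", "hallway"),
    (1077, "smartphone", "couch"),
]

def last_seen(sightings, label):
    """Return where `label` was most recently sighted, or None if never."""
    matches = [(ts, place) for ts, obj, place in sightings if obj == label]
    if not matches:
        return None
    return max(matches)[1]  # the latest timestamp wins

print(last_seen(SIGHTINGS, "smartphone"))  # couch
```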

It would be as if we all had an eternal assistant following and advising us 24 hours a day. By the way, I recommend this other “Shots of Awe” video, which talks about anticipatory design and ends up touching on what I just said.



Some final considerations that I found interesting to comment on:

It is obvious that these digital technologies, which “adapt themselves to an analog world”, have certain limitations. Because they depend on analog factors, those factors do not quantify themselves, and you don’t always have control over them or knowledge about them. You are creating “Big Data” indirectly.

We can also say that a humanoid robot, for instance, is a general-purpose technology that uses existing tools. I mean, one has to take into account that there are tasks which have so far proved very hard to conveniently adapt to digital technologies. You get this feeling whenever you see some contraption sold as a “cooking robot” that is basically a giant cabinet with two arms coming out of it.

Cooking in the real world demands an agility of movement and a range of objects and utensils far more extensive than what such a cabinet with two arms could handle.

(A closer fit to the “digital approach” to automating cooking, I believe, would be things like replicators and 3D printers capable of printing food, by the way.)

There is also the question that the two approaches have different scopes in many cases, I would say. Using computer vision and artificial intelligence to replace the need for sensors in some circumstances is just one use of a much deeper technology, although it is not as efficient as dedicated sensors would in fact be.

So we’ll probably end up with a combination of these two philosophies of the “Internet of Things”, I believe…