Domesticating Intelligence

An attempt to rethink how to design future intelligent products at the Copenhagen Institute of Interaction Design

As part of the Visiting Faculty at CIID, together with Joshua Noble, I had the chance this year to rethink our previous course, “The secret life of connected products”, and push it a bit further into the near future. We asked James Auger to join us for the start of the course to bring his experience around futures, smartness and the domestication of robots, and Churu Yun to step up the prototyping and industrial design craft.

In June we ran a three-week class to explore, discuss, research and design how smartness and (artificial) intelligence are making the transition from Kickstarters, visions and laboratories into our everyday life.

Honda Asimo and the family that he never had

When labelling a product “smart”, we charge it with assumptions that change the way we interact with it, and with expectations that influence the way we experience its flaws. Experiences with “smart” products seem to converge into a passive taking over of tasks that hides all the complexity and control behind “simple” and hidden interfaces.

With growing awareness of the implications of algorithmic decision making (ehm, where to start… Tesla? Nest?) and the huge number of tools that allow AI-like functionality to leak into even very mundane objects, it felt like the right time to reflect on some of these trends and challenge the notions of ‘smartness’ and ‘intelligence’.

While there is a big buzz around chatbots and conversational UIs, with personality touted as one of the next main materials of interaction design, in this course we pushed the students to go beyond the existing metaphors of talkative butlers, the goal of a quantified and efficient life, and the fear of robotic takeover.

We also wanted them to get deeper into some of the “black boxes”: to understand the processes of computer vision and machine learning, to play with them, and to get familiar with these new tools that will have to become part of the interaction design lingo and materials.
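To give a flavour of that hands-on work, here is a minimal sketch of the kind of first computer-vision experiment this implies: counting faces in a webcam feed with the Haar cascade detector that ships with OpenCV. This is our own illustrative example, not the actual course code.

```python
# A minimal computer-vision probe: how many "people" does the object
# currently sense? Assumes OpenCV is installed (pip install opencv-python);
# the Haar cascade file is bundled with it.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("what the product sees", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A few lines like these are enough to open the “black box” a crack: the detector is just a tool, and the interesting design questions start once you decide what the product should do with those rectangles.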

We put them in a real and mundane future, where things don’t work and these intelligences are inserted into situations that they may not understand and that may not understand them; a future where both user and object will need to adapt to one another and make one another familiar and comfortable, a process we can equate to domestication.

On Dogs, bots and domestication

Domestication is an interesting lens for understanding the successes and failures of technologies introduced into our lives; you can read more about this topic in James’s PhD thesis at the RCA.

In brief, domestication means a shift in habitat, where an organism adapts to a new environment via human agency, changing its function, its form and its interaction with us.

The dog is perhaps the best example of domestication of a natural organism: it evolved from a dangerous hunting animal that could only be handled by a few into something… completely different. Its function evolved beyond utilitarian needs, its form and interaction shaped and mediated by living together with humans and by understanding our language, signs and needs.

‘If you could read the genome of the dog like a book, you would learn a great deal about who we are and what makes us tick.’ — Michael Pollan

Looking at technologies that have tried to become part of our daily life, some also had to evolve in form, function and interaction to be fully ‘domesticated’, while others are still failing at this process.

Computers at first failed to be accepted into our homes when sold as tools for making tasks like planning a dinner menu or printing invitations more efficient. They were welcomed later, when their environment changed: with the digitization of media, their main role and function shifted and they became a central hub of our homes.

Robots are an example of something that was never truly domesticated, a recurrent ‘technological dream’ living mostly in movies, conferences and ads. They were never really accepted into homes in their anthropomorphic form to automate our daily life, but rather as “robotic” cleaners and other kinds of appliances. Instead they became extremely successful in their ‘arm’ evolution in the industrial context, where repetition and automation are of great value.

In a similar way, when we look at some of the incarnations of “smart” in today’s products, we can see the same pattern of recurring dreams and pushback from people (e.g. the smart fridge…). Most of these products represent a view of the world where more automated, efficient and optimized tasks promise a life that not everyone is necessarily looking for; they try to sell a future of the “generic user in his perfect glass cage” where everything works smoothly. But what would actually be smart for a more “real” and imperfect future?

The early examples of “intelligence” that we can now see in some of our homes (Nest, Echo and Google Home) have abandoned the anthropomorphic shape, but they are still based on the metaphor of a talkative butler. What if we could explore different interaction metaphors, like horses, centaurs, puppies, shepherds and teachers?

To push beyond what smartness and intelligence mean today and find new and even weird meanings and incarnations, we started the class with this pretty broad set of questions.

What would be a new notion of smartness and intelligence that goes beyond the automated dream? What new roles and motives would it serve beyond making our life more streamlined and efficient? What new forms of intelligence and ecosystems can we take inspiration from? What new interactions, forms, metaphors can we explore to design more domesticated intelligent products?

The process of domestication of intelligence

In the first week of the class we focused on challenging the meaning of “smart” and “intelligent”. As a first step, though, we had to agree among ourselves on what smart meant, or at least on a working version of it. This is the one that James, Josh, Churu and I arrived at (a rough code sketch of it follows the list):

It can sense its environment (through time and space)

It can compute that sensory information (with specific goals)

It can act in some way on the world (with personality or behaviour)

It’s part of an ecosystem (of people, products, processes and companies)
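To make the framework concrete, the four properties could be translated into a minimal interface. This is our own illustrative sketch; the SmartProduct name and its methods are hypothetical, not course material.

```python
# An illustrative translation of our working definition of "smart" into code.
from abc import ABC, abstractmethod

class SmartProduct(ABC):
    def __init__(self):
        # 4. It's part of an ecosystem: other people, products, processes
        #    and companies it exchanges information with.
        self.peers = []

    @abstractmethod
    def sense(self) -> dict:
        """1. Sample the environment (through time and space)."""

    @abstractmethod
    def compute(self, readings: dict):
        """2. Process sensory information (with specific goals)."""

    @abstractmethod
    def act(self, decision) -> None:
        """3. Act in some way on the world (with personality or behaviour)."""
```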

An example we used to break the ice came out of an earlier discussion with James. Imagine a ‘smart’ lift in a company building with too many people in it, having to mediate who should step off. What information does it have about them? What logic and motive will it use to decide? And how will it communicate? Will it choose the fit, the unfit, the important, the hurried or the premium passengers?
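To make the dilemma tangible, here is a deliberately crude sketch of such a lift’s logic. Every attribute and weight below is invented, which is exactly the point: each one is a value judgement in disguise.

```python
# A toy model of the overloaded 'smart' lift: whom does it ask to step off?
passengers = [
    {"name": "A", "weight_kg": 82, "meeting_in_min": 5,  "premium": False},
    {"name": "B", "weight_kg": 60, "meeting_in_min": 60, "premium": True},
    {"name": "C", "weight_kg": 95, "meeting_in_min": 30, "premium": False},
]
CAPACITY_KG = 200

def eviction_priority(p):
    # Higher score = asked to step off first.
    slack = p["meeting_in_min"]            # people with time to spare go first
    penalty = -100 if p["premium"] else 0  # premium riders are protected
    return slack + penalty

while sum(p["weight_kg"] for p in passengers) > CAPACITY_KG:
    evicted = max(passengers, key=eviction_priority)
    passengers.remove(evicted)
    print(f"Lift: '{evicted['name']}, would you mind taking the next one?'")
```

Swap the premium penalty for a humanitarian rule (protect whoever is most hurried, say) and you get a completely different lift: the motive lives in the scoring function.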

We used this as a very loose framework to get the students to explore potential new functions and interactions of a smart product or system, and to come up with their own definitions that go beyond smart = automated and intelligence = human.

Each team looked into what information could be part of the “environment” of a product, thinking of senses even beyond the human ones, and started to map the complexity of sources that could influence its computing with some simple Bayesian network visualizations.
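As an illustration of that mapping exercise, a few lines of networkx are enough to draw such an influence graph; the nodes below are made up for a hypothetical ‘smart kettle’, and this shows only the graph structure, without the probabilities a full Bayesian network would carry.

```python
# A toy influence map: which sources feed a product's "computing"?
# Nodes and edges are invented for illustration.
# Assumes networkx and matplotlib are installed.
import networkx as nx
import matplotlib.pyplot as plt

g = nx.DiGraph()
g.add_edges_from([
    ("time of day", "kettle decision"),
    ("kitchen noise", "occupancy estimate"),
    ("motion sensor", "occupancy estimate"),
    ("occupancy estimate", "kettle decision"),
    ("energy price feed", "kettle decision"),
    ("owner's calendar", "kettle decision"),
])

nx.draw_networkx(g, pos=nx.spring_layout(g, seed=3), node_color="lightgrey",
                 node_size=2200, font_size=8, arrows=True)
plt.axis("off")
plt.show()
```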

We talked a lot about whose goals products might serve, and about the biases in algorithmic decision making. Thinking of profit-driven or humanitarian self-driving cars that have to deal with crashes, or of products whose main goal is their own survival, helped the students find new scenarios.

We looked at different forms of intelligence that live around us (dogs, cats, birds, …) and at how, by treating the home/supermarket/farm as a complex ecosystem of people and products, we could design new rules, relationships and even completely new ‘services’.