You trust Siri. Siri does improve your life as she gets to know you. But no, you are no free spirit. Being a mammal means you have built-in programming for social cohesion; without it you would not survive (and would have been weeded out long ago, dodo!). The tradeoff is certain highly exploitable cognitive backdoors (“hacks”) into your decision-making processes and belief systems.

Personal assistant A may have a business alliance with Ford but not Chevy, with Pizza Hut but not Domino’s: A AND NOT B. This addresses two of the three viewpoints.

The third is society’s viewpoint: the collective outcome of all these decisions, which is, once again, the definition of TIS. In the 0D case:

[Technology + Message → Neurobiology → Society A → Society B], where Technology = media vehicle (0D personal assistant) + finely tuned language (i.e., the dialogue content between you and the assistant).

Google Assistant → Samantha is a long way off.

Google Assistant → TIS is not far off if organized correctly.

TIS (Samantha): TIS 1.0 ≈ Samantha 0.1

The technological hurdles to building Samantha 1.0 are enormous, but how about TIS 1.0 = Samantha 0.1?

Google’s DeepMind is the world leader in AI deep learning, and I consider Demis Hassabis (Twitter: @demishassabis) the most important person in AI. Their AlphaGo defeated the world’s best human Go player. TIS would draw from the deep learning repertoire in the service of embodied cognition and other disciplines.

There is only one major technological hurdle, besides TIS-specific work, to building the world’s first TIS, TIS beta: the systems-engineering process of integrating the components into a deep learning model for a game with different rules than chess, Go, Pong, bond trading, etc. Rules from the propaganda playbook (and other sources) are good to go. The biggest technical hurdle is an ongoing one: a better natural language interface, so that communication between personal assistant and user becomes as frictionless as you and I having a chat, without the current frustration:

Natural language capability: [useless → crappy → bad → ok → very good → human-human competent]

Right now, the best of the lot, Google Assistant (or Google Home), has progressed from crappy to bad+ and is well on its way to ok-, but it is not there yet.

The Apple iPhone 7 went wireless not to sell more headsets and piss you off, but because the future is the Her interface: AirPods ↔ Siri is the interface, and the smartphone is the mothership you eventually forget about, collecting dust in your purse. That will open up a big opportunity for the iWatch to go from useless to essential, both for access to immediate (relevant) visual information and as a multi-functional physiological sensor device.

By cross-pollinating the fields of neuroscience (specifically: the neuroplasticity of learning and sleep research) with chronobiology, and linking that to your chronotype (lark, owl), you can better synchronize communication with neurological susceptibility to messages. Don’t laugh. Just a small % change in sales conversion rate equates to billions at scale. In other words, Tim Cook could decide to give away the iWatch to reward (only) heavy Siri users just to get physiological data for Siri’s cloud to chew on; then refine the language and better target the timing of the semantic payloads via those elegant AirPods. But isn’t that evil? Not necessarily. Like all powerful technologies, it is neither intrinsically good nor bad.
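The “small % change equates to billions at scale” claim is just arithmetic. Here is a minimal back-of-the-envelope sketch; every figure below is an invented assumption, not a real Apple number, and at larger scale or with a bigger lift the same arithmetic reaches into the billions:

```python
# Toy illustration: value of a small conversion-rate lift at scale.
# Every number below is a hypothetical assumption for the sake of arithmetic.
offers_per_year = 2_000_000_000   # assistant-mediated purchase prompts/year (assumed)
average_order_value = 40.0        # dollars per converted prompt (assumed)
baseline_rate = 0.020             # 2.0% baseline conversion (assumed)
improved_rate = 0.021             # +0.1 percentage point from better-timed messaging

# Incremental revenue comes only from the lift in conversion rate.
incremental_revenue = offers_per_year * (improved_rate - baseline_rate) * average_order_value
print(f"Incremental revenue from a 0.1-point lift: ${incremental_revenue:,.0f}")
```

Under these assumptions a tenth of a percentage point is worth roughly $80 million a year, which is why timing the semantic payload is worth paying for with free hardware.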

TIS is morals-blind: it can be Hitler on steroids or the anti-Hitler on steroids. But rest assured: both are coming, they are inevitable, a bipolar singularity will engulf us — like it or not.

So, as for Apple, if done with the right intentions it is a win-win. Tim Cook should do it. Steve Jobs would have, methinks. I like the chances of technology developed by the major companies serving the greater good of society simply on the merits of sustainability; it is the smaller ones lurking in the shadows on the margins that warrant the most concern as predators. Apple has always been true to a wholesome moral compass, and there is no reason to expect otherwise going forward.

In any case, at the end of the day, TIS beta is just another complex game, like learning how to control a driverless car, which is a very hard problem. With TIS, you need a giant simulation fed lots of live data from real people in an unsupervised format, starting from scratch, where it learns how to get people to choose A instead of B better than 50/50 (one application for TIS = A AND NOT B); passing the Turing Test is irrelevant. You must crawl before you walk and walk before you run: start crawling now. The TIS lab prototype figures it out by making lots of errors (those little cars crashed a lot in some rubber-bumpered parking lot for a while before they could parallel park!) and then correcting them until it moves the needle to 55/45, which = mammoth pay-dirt.
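The “make lots of errors, then correct them” loop above can be sketched as a toy multi-armed-bandit simulation. The message variants, simulated users, and hidden persuasion rates below are all invented assumptions, not a real TIS design; the point is only that trial and error discovers which variant moves the needle past 50/50:

```python
import random

random.seed(42)

# Hypothetical message variants; each has a hidden probability of nudging a
# simulated user to pick A over B. The simulator must discover these by trial.
true_persuasion_rates = {"neutral": 0.50, "variant_a": 0.52, "well_timed": 0.55}

counts = {m: 0 for m in true_persuasion_rates}     # times each variant was tried
successes = {m: 0 for m in true_persuasion_rates}  # times the user chose A
epsilon = 0.1  # exploration rate: how often to try a random variant anyway

def estimated_rate(m):
    """Observed A-choice rate for variant m (prior of 0.5 before any data)."""
    return successes[m] / counts[m] if counts[m] else 0.5

for _ in range(50_000):
    # Explore occasionally; otherwise exploit the best-looking variant so far.
    if random.random() < epsilon:
        m = random.choice(list(true_persuasion_rates))
    else:
        m = max(true_persuasion_rates, key=estimated_rate)
    counts[m] += 1
    # Simulated user chooses A with the variant's true (hidden) rate.
    if random.random() < true_persuasion_rates[m]:
        successes[m] += 1

best = max(true_persuasion_rates, key=estimated_rate)
print(best, round(estimated_rate(best), 3))
```

This is a deliberately crude epsilon-greedy bandit, not deep learning, but it shows the shape of the problem: crash a lot, keep score, and the 55/45 arm surfaces on its own.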

Benedict Evans is a highly respected smart cookie who is kind of like Varys in Game of Thrones, with “little birds” throughout the tech kingdom. That’s how he stays ahead of the curve. It is wise to follow him. I do.

60/40. 70/30…90/10? 98/2? This happened with Pong, Go, autonomous cars, lip reading, and recognizing a mostly hidden cat in crappy lighting conditions better than a human can. Now it is about influencing a human’s decision-making process. This will happen for one trillion reasons, right off the top of my head: follow the money.

Considerations

magnitude of opportunity is vast and uncharted

“stickiness” = extreme barrier to exit, because you lose the relationship (almost like a death or divorce, so you stick with Google Assistant because Siri would be a “first date”)

vulnerability of the human organism to sophisticated propaganda is extensive, constant, and refined, yet still in the blue-sky phase (epigenetics and other sub-disciplines, etc. ↑↑)

rate of change of efficacy to exploit human vulnerability is rapid given current trend-lines in background machine intelligence improvement (Moore’s law, architectural efficiencies, algorithms)

severe conflict of interest between the objectives of the personal assistant’s creator and those of the user; it needs reconciliation in both ethical and legal dimensions

Question: Instead of displaying goofy ads that are maybe relevant, why not create what is relevant directly and communicate it through simple conversation that even a twelve-year-old would get excited about?

Image source: Gnosis Online

{computer science dominant (old) → TIS:(synergy of computer + many biological sciences)}

{programming computers (obsolete) → TIS:(programming computers to program biology)}

{displaying advertisements (obsolete) → TIS:(building relationships [“gopher” bot → trusted “friend”])}

{compute relevance (obsolete) → TIS:[influence beliefs → create desire]}

{predict what somebody wants with search autocomplete (obsolete) → TIS:(determine wants)}

Note: Much harder than accelerated learning, but they overlap. Doing a TIS beta for learning will scaffold (“baby step”) toward A AND NOT B: [What would you like to: learn → believe → believe emphatically]. Embodied cognition = “the brain is an organ for adaptation to the unknown.” What if my program/algorithm is a brain cheat sheet where answers to “the unknown” are given obliquely[2a] and then reinforced directly at (what I will call) a high “semantic-perception signal:noise ratio” with other evidence or soft data?