I received this thoughtful, and may I say highly accurate, email from reader EN, which I include in full below. I removed the age and name to protect EN’s identity. I also added links where relevant, and corrected a couple of typos my enemies managed to slip into the email as it made its way from EN to me. You will agree by email’s end that EN will go far in life.

It gives me great pleasure, as well as excitement, to write to you. I am a huge fan of your work, and of your philosophy in general. I especially admired your lecture at the DDP 33rd annual meeting, where you talked about why probability and statistics cannot discover cause, and how the p-value refutes itself. This is my philosophy too, and I was amazed when I learned that you had articles and lectures explaining this philosophy in detail. Firstly…I am a researcher in Artificial General Intelligence. I read your article which was in response to the Quanta Magazine interview with Dr. Judea Pearl, and I agree with all your points regarding how you don’t think machines can have a brain-like system. While it may seem like the nature of my work must disagree with what you’ve stated, it actually doesn’t; I believe that the best we can achieve in that field is a human-like intelligence simulation. Nothing more than that. I have already commenced building certain blocks of the system. I would definitely love to talk to you about that one day! I was wondering if you’d have any free time to talk about over-hyped ML and DL systems that are called “AI” these days, and how they are merely a glorified failure when it comes to talking about intelligence. I mean, how can people call a model that studies the relation between the frequency of a word appearing in a large dataset and another word an intelligent system, and even label it “Natural Language Processing”? And the same goes for computer vision: since when does studying the label of an image in relation to a vector of features or pixels count as intelligence? Even when they do say it is AI, do they not understand statistical distributions? Do they not understand that they need huge labeled datasets to study one single distribution, and fail all other predictions on the same domain but a different distribution?
It is very ironic: they test their models on the same distribution as that of the training, and call that learning! It is like me teaching a kid to add up single and double digit numbers, and saying that I’m teaching him addition. The dataset would include something like 22+11=33, and then 22+9=31, and then asking what is 22+10? And the model has to approximate an answer between 31 and 33. This is exactly what curve fitting addicts are doing, but calling it teaching a machine how to add numbers, when in fact asking the machine what’s 300+200 will yield a garbage answer, because it is outside its data distribution. Dr. Yann LeCun argues that neural nets can reason. How is that, exactly? He also argues that NLP models do understand what they’re doing in “some sense”, to quote him. He says that neural nets are capable of capturing causes. My work is at the intersection of philosophy, psychology, cognitive science, computational neuroscience, knowledge representation, knowledge-based systems, etc. I am a firm believer in connectionism. Although I am very confident in my own philosophy, and in what is AI and what is not, I sometimes get really discouraged by comments or podcasts like these from people like Dr. LeCun, when I can see clearly the limitations of what they’re doing, and yet people still take their word over mine. At the end of the day, when it comes to prominent figures like Dr. LeCun, or Dr. Andrew Ng, or others for that matter, it is their word against mine, and no matter how talented or good I am at what I do, I am a nobody. These are just some little random thoughts; I would definitely love to talk to you more about my views on their work, and my own work, and listen to your views and feedback. I am writing this without expecting an answer or feedback, but a guy can hope. A friend of mine actually calls me Mini-Briggs, because of the same philosophy we adopt!
Thanks again for your time, and for reading this! Your biggest fan, EN

Everybody wants in on the AI hotness, so even linear regressions are being called “AI”. And, in truth, they are. They are just as much AI in spirit as any “deep learning” algorithm. They are fitting curves; that, and nothing more. Machine “learning”, AI, neural nets, all the same thing, albeit with more and less clever computer processing. See Statistics Vs. Artificial Intelligence.
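EN’s addition example is easy to reproduce. Here is a minimal sketch (pure Python, all names and the training range my own invention) of a “learner” that memorizes sums of small numbers and answers by lookup of the nearest memorized input. Inside the training distribution it looks like it learned addition; one step outside, it returns garbage:

```python
# A curve fitter in miniature: 1-nearest-neighbor "regression" trained on
# sums of small numbers. It interpolates; it never learns the rule.

def train(limit=20):
    # Memorize every pair (a, b) with 0 <= a, b <= limit, labeled with its sum.
    return [((a, b), a + b) for a in range(limit + 1) for b in range(limit + 1)]

def predict(model, a, b):
    # Return the label of the closest memorized input -- nothing more.
    (_, answer) = min(
        model,
        key=lambda item: (item[0][0] - a) ** 2 + (item[0][1] - b) ** 2,
    )
    return answer

model = train()
print(predict(model, 7, 5))      # 12: inside the training distribution, looks smart
print(predict(model, 300, 200))  # 40: nearest memorized pair is (20, 20), not 500
```

The point generalizes: fancier fitters smooth between memorized points more cleverly, but the failure off-distribution is of the same kind.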

It’s been several years since I’ve updated this, but it’s still in reasonable shape: Machine Learning, Big Data, Deep Learning, Data Mining, Statistics, Decision & Risk Analysis, Probability, Fuzzy Logic FAQ.

Now LeCun. His favorite movie, it’s claimed in this podcast, is 2001: A Space Odyssey, which I regard as a well-photographed butt-numbing bore, with clever-for-its-time elements. HAL was interesting, but incoherent. The thirty- or forty-minute kaleidoscope shot at the end—does everybody somehow forget this?—is the best anti-drug advertisement ever invented.

This is relevant because LeCun thinks HAL freaked because it was insufficiently guided. I claim any computer, barring an electrical short and the like, can only do what it’s told to do. And nothing more. Ever.

LeCun says we put in place laws “to prevent people from doing bad things because [otherwise? fun to new shoe?] they would do these bad things. So we have to shape their cost function, their objective function, if you want, through laws to correct…for those.”

False. Laws don’t stop people from doing anything. Respect for authority and fear of punishment do, to name two. Laws are merely the reference point for both. People do not have cost and objective functions in the utterly and necessarily simplistic way computers do. Most human desires and motivations are, as regular readers know, unquantifiable, and therefore unprogrammable. There is no way to account for the non-material intellect, the appetite of the intellect, i.e. the will. You cannot program a computer to ignore its programming—no matter what word games you might play trying to get around this unhappy fact.

Nevertheless, LeCun says “designing objective functions for people is something we know how to do.” No it isn’t. This sounds like that Cass Sunstein nudging nonsense set in binary.

Of course, much behavior can be controlled, guided, directed in broad ways, through all the classical means we know of. But this is not the same as writing code which explicitly says “When this happens, do this.”

That is all that computers can do: when this, do that. There are two considerations.
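The “when this, do that” picture can be written down literally. A toy sketch (all names hypothetical, mine):

```python
# A computer program is, at bottom, a finite table of conditions and actions.
# Nothing outside the table can ever happen.

RULES = {
    "door_open": "sound_alarm",
    "door_closed": "stay_quiet",
}

def machine(state):
    # The machine can only look up the state it was handed. An input the
    # programmer never anticipated does not produce creativity; it produces
    # whatever default (or crash) the programmer also wrote down.
    return RULES.get(state, "undefined_behavior")

print(machine("door_open"))        # sound_alarm
print(machine("cat_on_keyboard"))  # undefined_behavior
```

Every program, however large, is this table with more rows and rules for composing rows.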

First, the list of “when this” can grow long and unmanageable, such that the extent of the “do this” can become unpredictable. To the human mind tracking the operations, that is. Think of finding every possible chess move. Chess is trivial to code with only a tiny number of allowable moves, but the number of possible positions (says one source) is about 10^27586. Dat’s a lotta zeros!
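Whatever the exact figure (estimates of chess’s size vary wildly depending on what is counted), the order of magnitude is easy to reproduce. Shannon’s classic back-of-envelope estimate assumes roughly 35 legal moves per position and a typical game of about 80 plies, which gives 35^80, on the order of 10^123 possible games:

```python
import math

# Shannon's back-of-envelope estimate of the chess game tree:
# ~35 legal moves per position, ~80 plies (half-moves) per game.
branching, plies = 35, 80

digits = plies * math.log10(branching)  # exponent: log10(35^80)
print(f"about 10^{int(digits)} possible games")  # about 10^123
```

Simple rules, astronomically many paths: which is the point — tracking the rules is trivial, tracking every consequence of the rules is not.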

The problem with an “AI” system that allows more moves than chess and has more complex rules is that nobody will ever know with certainty what the computer is capable of doing. Which is to say, we will always know the list of allowable “do this”, but it will be hard to predict the path to any particular “do this” from the swelling inputs of “when this”. Which makes the idea some scientists have of hooking up all the nukes to an AI system allowed to “push the button” insane. Ours is an age which specializes in insanity, however.

Second consideration. The computer will always be a dumb beast deterministically carrying out its instructions. This is so even for quantum computers. It is still “when this, do that”, though with quantum computers you get lots of “when this”s at a time. Piling up the lists, or making the whole shebang go faster, does not turn dumb into intelligent. The links (and links within links) above about Pearl make this argument.

The hope some AI researchers have is that we are dumb beasts ourselves. That we, like computers, operate wholly deterministically, albeit with much more complex code. That we are naught but meat machines. That this can’t be so has been proven time and again. But the proofs don’t stick. The desire that the proofs be in error is too strong.

Funny, that. That the proofs can be cast aside shows, i.e. proves, that we can cast aside our programming; and if we can cast aside our programming, we are not meat machines.

The other hilarity, also proving we are not meat machines, is on full display with LeCun, who, like many before him, in effect says, If we can convince people they can’t make free choices, they will make better choices. And LeCun will be there to show us what those better choices are. How he alone is free to jump beyond his own meat-machineness to offer this salvation to mankind he never explains.

To support this site and its wholly independent host using credit card or PayPal (in any amount) click here
