Neural networks are programmatic paradigms, inspired by biological models, that allow programs to learn from observing what has gone before and what is happening now. To put it simply, they are ways that software can learn from observing data. We do exactly the same thing in day-to-day life. It is not scary, creepy or even mildly threatening. The terms it uses are emotive because the paradigms they derive from come not just from biology, but from the biology of the mind and of thought. They allude to the messages and connections that constitute intelligence. That image of electric signals popping off around the inside of our brain, and a program doing something similar, is what rattles us. It is the stuff of good horror and science fiction from the gothic period onwards; Shelley comes to mind. Or is it really just a bit like giving the car (or any other inanimate machine) a human characteristic, or worse still a personal name?

Computer systems are quite similar to living organisms; as I have noted before, maybe not such advanced organisms, but organisms nonetheless. The part of the system that is living is not so much the hardware or even the software; it is the data. All those who have worked alongside me will know I am prone to statements such as 'data takes on a life of its own', 'when the data chooses to eat itself', 'this data is dumb', 'here we have an example of data that has run away with itself', and so on. It is not simply the closeness of a computational system to a living system that motivates connecting the two at a conceptual level, nor is it just the need for a vocabulary. There is, and always has been, a need to model something that does not yet exist on something that we know does exist. We seek to build computer systems based on ourselves (or our dogs), and we look to these as models.

They may think I am mad and given to anthropomorphising at the drop of a hat, but consider the microbiology of the neurotropic herpes simplex virus: it becomes latent and hides from the immune system in the cell bodies of neurones. This hiding is a very data-like trait.

I am quite comfortable with the concept of data being alive, as opposed to merely live, and equally comfortable with a biological element such as the neurone being used to describe how a computational system works. Looking at behaviour patterns and structures from outside computer science provides at least a language for discussing the patterns, behaviours and structures of systems. So let us imagine a small collection of data elements, a little like a virus in that they live inside something else; something analogous to a cell. Before we pop off into a scene from a good SF movie, let us define the cell in question. Let's view a transaction as our cell; the transaction comprises a number of processes and elements. The body of information we are looking at is made up of these cells: not a sea of atomic data, but a collection of groups (cells) of connected data. Small amounts of non-conforming process can reside in each cell; take, as an example, a rounding issue on a financial calculation. The rounding calculations done at the line level do not match the calculation done at the aggregate level. If the line contains a series of cells that each have a rounding calculation, one of these could be non-conforming and could successfully hide amongst the others.
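The rounding mismatch described above can be sketched in a few lines of Python. The figures and the 17.5% tax rate are hypothetical, chosen only to make the discrepancy visible:

```python
# A minimal sketch of the rounding issue described above, using hypothetical
# figures: tax rounded per line does not equal tax rounded once on the
# aggregate, so a non-conforming line can hide in a total that looks plausible.
from decimal import Decimal, ROUND_HALF_UP

def round2(value):
    """Round to 2 decimal places, half-up, as a financial system might."""
    return value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

TAX_RATE = Decimal("0.175")        # hypothetical 17.5% tax rate
lines = [Decimal("1.01")] * 3      # three identical order lines

# Tax calculated and rounded at the line level, then summed.
line_level = sum(round2(amount * TAX_RATE) for amount in lines)

# Tax calculated once on the aggregate, then rounded.
aggregate_level = round2(sum(lines) * TAX_RATE)

print(line_level)       # 0.54
print(aggregate_level)  # 0.53 -- a one-penny discrepancy
```

Each line's tax of 0.17675 rounds up to 0.18, while the aggregate tax of 0.53025 rounds down; neither calculation is wrong in isolation, which is exactly why the mismatch can sit unnoticed.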

As another simple example of data hiding in other data, consider a performance monitoring system that grades three generalised scores, failing, actionable and pass, defined by grade results against KPIs, with 80% as the pass mark. A branch consistently scores well, regularly attaining 90 to 95%. All looks good. At a certain point, performance in one of the KPIs drops to remedial (scores of 90 falling to 70), measured as a 25% drop against the average for the period, while another two key points measured show a massive improvement, with scores 10 to 15 points up. This branch is to all accounts doing well, as the overall results are still well above the pass mark, and have indeed increased during a period when an abject failure has been occurring. The consistent high performance masks the drop, and this is compounded by two outstanding areas of improvement. Anyone looking in detail at the results would see this, but the whole point of setting up the analytics in the first place was to give a simple view of performance indicators and a quick opportunity to identify when remedial actions would be of benefit. In any enterprise with a very large number of transactions, good performance may mask very bad performance for a long time; the badly performing elements remain latent within the overall generalised view, only to break out when the overall health is a bit run down. A bit like cold sores, really.
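The masking effect is easy to demonstrate. The sketch below uses invented KPI names and scores, and assumes the 'simple view' is a plain average of five KPIs against the 80% pass mark; one KPI collapses from 90 to 70 while two others improve, and the overall score actually goes up:

```python
# A hypothetical sketch of good performance masking bad: one KPI fails
# badly, two improve, and the generalised view looks better than before.
PASS_MARK = 80

def overall(scores):
    """The 'simple view': a plain average of the individual KPI scores."""
    return sum(scores.values()) / len(scores)

before = {"kpi_1": 85, "kpi_2": 90, "kpi_3": 85, "kpi_4": 88, "kpi_5": 90}
after  = {"kpi_1": 85, "kpi_2": 70, "kpi_3": 85, "kpi_4": 100, "kpi_5": 100}

print(overall(before))  # 87.6 -> pass
print(overall(after))   # 88.0 -> pass, and higher than before

# The failure is only visible if you look beneath the aggregate.
print([k for k, v in after.items() if v < PASS_MARK])  # ['kpi_2']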

As information scientists or software engineers or analysts or data architects we consistently look to human states such as language, intelligence, learning, the brain, etc as a way of describing the structures we build and deploy. We describe the technical world in terms of the natural world, we borrow terminology from natural sciences and more often than not it provides a very good fit.

Neurons are another system like trait, so much so that certain systems are named after them, as in Artificial Neural Network (ANN). An artificial neural network is ‘an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.

First Published 23rd February 2016