My latest project, which has taken up most of my “hobby time” for the last six months, is about pushing Machine Learning and AI to their theoretical limits, where creativity and radical innovation come into play. During this time I’ve read about and experimented with many novel ideas, but most of them are closely related to each other. So I started looking for smart people who are doing cool stuff in biology. And that’s when I stumbled on something new.


Let me introduce you to the flatworm

Image from Laura Sanders

The flatworm is this little guy that scientists cut into pieces to figure out how it regenerates itself. The cool thing about flatworms is that we can train them to remember stuff (mostly places).

It remembers where to go because it actually has a brain (you can see its eyespots in the photo). It is not a complex brain, and it is something we can easily model using Machine Learning algorithms: create some neurons and some layers, train them, and bam! It’s easy to do because the dominant Machine Learning architecture, the artificial neural network, is loosely inspired by how our brains work.
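As a toy illustration of how small that modeling task can be, here is a single artificial neuron, in plain Python, trained to turn toward whichever side senses more light. The task, the data, and the code are invented for this post; they are not part of any flatworm research.

```python
# A toy "flatworm brain": one artificial neuron learning a
# stimulus-response association with the classic perceptron rule.

def step(x):
    # Threshold activation: fire (1) or don't (0)
    return 1 if x > 0 else 0

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per "eyespot"
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Nudge weights toward the correct response
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# (left light, right light) -> 1 means "turn left", 0 means "turn right"
data = [((1, 0), 1), ((0, 1), 0), ((1, 1), 1), ((0, 0), 0)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # -> [1, 0, 1, 0], matching the targets
```

That really is the whole “brain”: a few weights and a threshold, learned from examples.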

Now prepare to be amazed. What if we decide to cut off its head? There are three amazing things that will happen:

1. We’ll have two worms (ok, that’s not that amazing).
2. The headless one will grow a new head, and the other one will grow a new body.

The third amazing thing is that both flatworms will retain their training and memories. Both. Even the one without a head that just grew a new one. Which raises the question:

where are memories really stored?

Memories survive brain regeneration — Michael Levin

Now that I have your attention (I should, otherwise you wouldn’t still be reading this), the epicness doesn’t stop there. Somehow, the flatworm will grow back to the same size as the original one. How? I thought it might have to do with instructions stored in the DNA, and if you thought the same, you are in for another surprise.

Hacking signals, not DNA

Every organism has bioelectric signals — “signals carried by the voltage gradients, ion flows, and electric fields that all cells receive and emit”. For our purpose, let’s imagine that it is an electric current that makes cells communicate.

Surprisingly, there are patterns in those currents. What the Levin Lab at Tufts University found after experimenting with flatworms and frogs is that instructions about the cells’ structure are stored in the bioelectricity of organisms.

This means that the instructions (memory?) for what a cell should do are dictated by its bioelectric patterns. Change the pattern and you can instruct a group of cells to do something else. Let’s see an example.

There is a bioelectric pattern in the head of the flatworm. What if we cut the flatworm in two and apply the “head pattern” to the tail piece? Can we create a flatworm with two heads? Sure we can!

Two heads — Michael Levin

But what about applying these patterns to creatures that don’t have regenerative properties, like frogs?

We can cut its leg off, apply the pattern for 24 hours, and bam! A new leg will regrow. And if we cut it off again (poor frog), a leg will grow again, which means the organism retains the bioelectric pattern.

It will grow up to a full functional leg — Michael Levin

Bioelectric circuit editing > CRISPR

If CRISPR is hacking our hardware, bioelectric circuit editing is hacking our software: faster, more scalable, and easier to test.

The genome doesn’t dictate the long-term anatomy; the bioelectric pattern does, and it persists across generations. The pattern is the code for how a group of cells can (and should) perform an action.

Hack the code and you can instruct the cells to do whatever you want. Forever.

Non-neural Machine Learning Architectures

For me, bioelectricity means that ideas, memories, and thoughts might not be stored in one place (the brain) but might be shared across the whole organism. No cell has one and only one function; each obeys a pattern and works with other cells towards a goal. It is a truly holistic approach.

Read the resources below and you will see that neural architectures are not the only ones that can learn and remember; non-neural methods like KNN classifiers and SVMs can too.
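As a quick sketch of that point, here is a learner with no neurons at all: a k-nearest-neighbours classifier in plain Python that “remembers” by storing its training examples and recalling the majority label of the closest ones. The toy data is invented for illustration.

```python
# Learning without neurons: a k-nearest-neighbours classifier.
# Memory here is literally the stored examples themselves.

def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs; query: an (x, y) point
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    # Find the k stored examples closest to the query
    nearest = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    # Majority vote among the neighbours
    return max(set(labels), key=labels.count)

train = [((0, 0), "dark"), ((0, 1), "dark"), ((1, 0), "dark"),
         ((5, 5), "light"), ((5, 6), "light"), ((6, 5), "light")]
print(knn_predict(train, (0.5, 0.5)))  # -> dark
print(knn_predict(train, (5.5, 5.5)))  # -> light
```

No weights, no layers, no training loop: recall is just a lookup over what the system has experienced.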

That means we can start exploring Machine Learning architectures that, instead of trying to copy how our brain works using neurons, rely on dynamical systems theory.
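One classic illustration of memory as a dynamical system is the Hopfield-style attractor network: the stored pattern lives in the couplings between all units rather than in any single one (the units are still neuron-like, but the point is that recall is the system’s dynamics settling into an attractor, echoing the holistic picture above). A minimal sketch in plain Python, with a made-up pattern for illustration:

```python
# Memory as an attractor: a tiny Hopfield-style network stores a
# pattern in the couplings between ALL units, and its dynamics
# restore that pattern from a corrupted starting state.

def sign(x):
    return 1 if x >= 0 else -1

def store(patterns, n):
    # Hebbian rule: each coupling accumulates pairwise correlations
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    # Run the dynamics: repeatedly align each unit with its input
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):  # asynchronous updates
            s[i] = sign(sum(w[i][j] * s[j] for j in range(n)))
    return s

memory = [1, 1, -1, -1, 1, -1, 1, -1]
w = store([memory], len(memory))
corrupted = [-1, 1, -1, -1, 1, -1, 1, -1]  # first unit flipped
print(recall(w, corrupted) == memory)  # -> True
```

Damage part of the state and the rest of the system pulls it back, which is at least a loose software analogy for memories surviving regeneration.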

In the next posts, we’ll explore how these architectures can be applied in practical problems and compare them with existing (neural) structures.

Oh, by the way, flatworms are essentially immortal. You can cut them as many times as you want (275 pieces is the record) and each piece will grow back to the exact size and shape of the original, retaining the same training/memories.

Resources:

A huge thanks to Michael Levin for providing information, for the core research that triggered this obsession of mine, and for answering emails on a Saturday evening.