What's to stop intelligent algorithms, programmed to make a profit, from learning to collude with one another in ways which bend market rules? Such a scenario would require regulatory oversight from the very cutting edge of computer science.

The idea of artificial intelligence manipulating outcomes in the real world and then exploiting these on the markets is bestseller material. But there's fascinating scope for this to actually happen as computing power increases and algorithms get smarter.

Someone who has thought about this a lot is Anthony Amicangioli, CEO and founder of Hyannis Port Research. His company developed Riskbot, which has been described as "a supercomputer that watches supercomputers": a black box that sits between a trading firm and the exchange to stop erroneous trades from reaching the market.

Amicangioli explains that using artificial intelligence (AI) to detect excessive spoofing (placing orders to create a false appearance of interest, as if the market were about to move in one direction or another) could have some bewildering consequences.

He told IBTimes UK: "There are a few behaviours that are really interesting to consider. With machine learning (ML), you write code that acts on data. I distinguish advanced ML (from things like big data) where one begins to treat code in a manner similar to data; creating code that can change itself through intelligent morphing.

"Detecting spoofing is relatively easy. I'm looking for someone who sends a bunch of deceptive orders on one side of a given stock's book, abruptly pulls back, and benefits from trading the opposite side of the book shortly thereafter.
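The pattern Amicangioli describes, a burst of orders pulled from one side of the book followed shortly by a trade on the opposite side, can be sketched as a simple check over an order event log. The event fields, time windows, and size threshold below are illustrative assumptions, not his firm's actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class OrderEvent:
    trader: str   # account identifier (illustrative)
    side: str     # "buy" or "sell"
    action: str   # "place", "cancel", or "trade"
    qty: int
    ts: float     # event time in seconds

def looks_like_spoofing(events, trade_window=5.0, min_spoof_qty=10_000):
    """Flag the pattern: a large cancelled quantity on one side of the
    book, then a trade on the opposite side shortly afterwards."""
    cancelled = {"buy": 0, "sell": 0}
    last_cancel_ts = {"buy": None, "sell": None}
    for ev in events:
        if ev.action == "cancel":
            cancelled[ev.side] += ev.qty
            last_cancel_ts[ev.side] = ev.ts
        elif ev.action == "trade":
            # A fill soon after pulling a large resting book on the
            # other side matches the spoofing pattern.
            opp = "sell" if ev.side == "buy" else "buy"
            ts = last_cancel_ts[opp]
            if (ts is not None and ev.ts - ts <= trade_window
                    and cancelled[opp] >= min_spoof_qty):
                return True
    return False
```

A real surveillance system would also weigh order sizes against typical book depth and repeat occurrences over time; this sketch only captures the basic shape of the behaviour.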

"But what about the case when an AI algorithm modifies itself so that, rather than benefiting itself, it gets even smarter than that and begins to collude with another trading entity running a similar algorithm?

"So I spoof but you benefit - we sort of have this unholy implicit contract to do the same in reverse. If that sort of hypothetical scenario is possible, then a lot of fascinating regulatory questions arise."

The regulator would have to adopt a scientifically rigorous approach to spot this kind of activity, notes Amicangioli. And then how would they deal with it?

"Is it possible that AI could be written where the author of said code is not intending to spoof or layer, but an algorithm somehow inadvertently does? Or, maybe even more complexly, begins to collude with other algorithms?"

Amicangioli said the probability of unintended AI outcomes going forward is high. From a regulatory perspective, the net behaviour, as analysed, could be deemed bad, while the author could be deemed completely blameless.

Much was made of the fact that market regulators were far behind the complexity that precipitated the 2010 flash crash. So what can regulators do to evolve in a machine-learning markets environment?

"I don't believe the government or regulators could ever just hire teams of scientists and compete. These hypothetical problems I have outlined may be around the corner; I don't think they exist today; I could be wrong.

"I think it will be an absolutely Herculean challenge for regulators to meet this on their own, and even measuring the performance of companies like Hyannis Port Research will be a very interesting challenge," he said.

His company has evolved into a surveillance role using ML techniques, but the initial product had a formulaic approach, searching vast numbers of liquidity destinations and asking if traders have used too much of their buying power, or hold a concentration of risk in one area.
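The initial, formulaic checks he describes, buying power used and concentration of risk, amount to threshold tests over an account's positions. The limits and field names below are illustrative assumptions for the sake of the sketch.

```python
def check_limits(positions, buying_power_used, buying_power_limit,
                 concentration_limit=0.25):
    """Flag breaches of two formulaic pre-trade limits: total buying
    power used, and concentration of exposure in any one symbol.
    Thresholds are illustrative, not real risk parameters."""
    alerts = []
    if buying_power_used > buying_power_limit:
        alerts.append("buying power exceeded")
    gross = sum(abs(v) for v in positions.values())
    if gross > 0:
        for symbol, exposure in positions.items():
            # Exposure in one symbol as a share of gross exposure.
            if abs(exposure) / gross > concentration_limit:
                alerts.append(f"concentration in {symbol}")
    return alerts
```

The appeal of this formulaic style is that it is fast and auditable; the trade-off, as the article goes on to discuss, is that it cannot anticipate behaviours that were never encoded as a rule.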

The current generation of technology developed by Hyannis Port Research focuses on activities such as "layering". This is interesting because it occupies a sort of regulatory grey area. Layering happens when there is a line of buyers and sellers for a given stock and the market sits in the middle: say you are 20th or 30th in the queue and there are a million shares in front of you, then it's unlikely you are going to have a relevant place in that market. Excessive layering can be a tactic to keep others out of a market or to exercise unfair control over a given security.

Amicangioli said: "A lot of algorithms will send orders. Sending orders is like a brain stem function, it's like breathing, you don't think about it. You are just getting in line; the buy or sell decision doesn't really happen at the time you sent the order.

"This is where it gets a little grey from a regulatory perspective. You still have to intend to have that order filled at the time you sent it. But the real decision to buy the stock or sell the stock is actually whether or not you take yourself out of the queue before you arrive at the front of the line.

"I define it as excessive padding or layering of the queue without the intention of actually executing the order, a tactic where you are blocking other people from participating actively in the market."
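That definition, padding the queue with no intention to execute, suggests a crude measurable proxy: the fraction of an account's resting orders that are pulled before ever being filled. The order fields and the threshold below are illustrative assumptions, not a regulatory standard.

```python
def layering_score(orders):
    """Fraction of resting orders an account cancelled without any
    fill: a rough proxy for queue padding with no intent to execute."""
    resting = [o for o in orders if o["resting"]]
    if not resting:
        return 0.0
    cancelled_unfilled = sum(1 for o in resting
                             if o["cancelled"] and o["filled_qty"] == 0)
    return cancelled_unfilled / len(resting)

def flags_layering(orders, threshold=0.95):
    # Near-total cancel-before-fill behaviour is suspicious; the
    # 0.95 cutoff is an arbitrary illustrative choice.
    return layering_score(orders) >= threshold
```

The grey area Amicangioli describes shows up immediately: legitimate market makers also cancel most of their resting orders, so any real system would need to weigh queue position and timing, not just the raw ratio.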

Riskbot can also spot runaway algorithms, where order placement rates look erroneous or exceed a given frequency, and shut them down: a kind of "market-aware firewall".
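The frequency check behind such a firewall can be sketched as a sliding-window rate limit that permanently trips when breached. The cap, window, and interface here are illustrative assumptions, not Riskbot's actual design.

```python
from collections import deque

class RateFirewall:
    """Market-aware firewall sketch: disable a session whose order
    placement rate exceeds a frequency cap."""
    def __init__(self, max_orders=100, window_s=1.0):
        self.max_orders = max_orders
        self.window_s = window_s
        self.times = deque()     # timestamps of recent orders
        self.tripped = False

    def allow(self, now):
        """Return True if an order at time `now` may go to market."""
        if self.tripped:
            return False
        # Drop timestamps that have aged out of the window.
        while self.times and now - self.times[0] > self.window_s:
            self.times.popleft()
        self.times.append(now)
        if len(self.times) > self.max_orders:
            self.tripped = True  # runaway: cut the session off
            return False
        return True
```

Tripping permanently, rather than throttling, mirrors the shut-them-down behaviour described above: a runaway algorithm is stopped until a human intervenes.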

"Through Riskbot, we can say, turn off 'buy Microsoft' for any account, at any exchange, and subsequent attempts to send such trades will be disallowed while all other traffic is allowed. This allows us to police behaviour on a very granular basis."
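Granular policing of the kind described, turning off "buy Microsoft" for any account at any exchange while letting everything else through, can be modelled as wildcard block rules matched against each outgoing order. The class and its fields are illustrative assumptions, not Riskbot's actual API.

```python
class TradeGate:
    """Sketch of granular order blocking: a rule with None in a field
    acts as a wildcard matching any value for that field."""
    def __init__(self):
        self.blocks = []   # tuples of (account, exchange, symbol, side)

    def block(self, account=None, exchange=None, symbol=None, side=None):
        self.blocks.append((account, exchange, symbol, side))

    def allowed(self, account, exchange, symbol, side):
        for a, e, s, d in self.blocks:
            if ((a is None or a == account) and
                    (e is None or e == exchange) and
                    (s is None or s == symbol) and
                    (d is None or d == side)):
                return False   # order matches a block rule
        return True            # all other traffic passes
```

With this shape, `gate.block(symbol="MSFT", side="buy")` expresses "turn off buy Microsoft for any account, at any exchange", while sells of the same stock and trades in other names are unaffected.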

Markets may reflect the humanity that operates within them, but can they be predicted with any accuracy?

"It is very difficult to predict with precision what will happen tomorrow. On the other hand, regarding what might happen in the next 30 seconds, I think the market is closer to predictable, which gives rise to market making and high-frequency trading (HFT).

"But you could ask, is the market rigged - are humans controlling the market? I believe very strongly right now that the answer is 'no'."

Amicangioli added that the competitive aspect of ML algorithms could provide powerful prediction capabilities, in the same way that human prediction yields remarkable results in the futures markets. "Just like the wisdom of the crowd, if you get a lot of people to wager their opinion on a given subject, they tend to give you remarkably good predictions of the future.

"I think very similarly, the wisdom of crowds of intelligent machines does tend to yield a fair result. If someone decides to game things, to spoof, other machines can also intelligently pick up on that behaviour and counter it," he said.