Biochemists have had some success at designing drugs to match specific targets. But much of drug development remains an arduous grind, screening hundreds to thousands of chemicals for a "hit" that has the effect you're looking for. There have been several attempts to perform this grind in silico, using computers to analyze chemicals, but they've had mixed results. Now, a US-Canadian team is reporting that it's modified a neural network to handle chemistry and used it to identify a potential new antibiotic.

Artificial neurons meet chemicals

Two factors have a major influence on the success of neural networks: the structure of the network itself, and the training it undergoes. In this case, the training was pretty minimalist. The research team trained the network on a group of 1,760 drugs previously approved by the US FDA, along with another 800 or so natural products. Most of these aren't antibiotics; they target a variety of conditions and are made up of largely unrelated molecules. The researchers simply tested whether each of these slowed the growth of E. coli. Even though many of them were partially effective, the researchers set a cutoff and used it to provide a yes-or-no answer.

This approach does have an advantage in that it shouldn't bias the resulting neural network toward any particular chemical structure. But with a dataset that small, it's likely that some functional chemical groups were left out of the training set entirely. Success was also very rare, with only 120 molecules coming in above the cutoff. And, since the cutoff was a binary "works" or "doesn't work," the network had no way of identifying trends that could help it predict which chemicals might be more active.
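
The binary labeling described above can be sketched in a few lines. Everything here is hypothetical—the molecule names, growth values, and cutoff are made up for illustration; the point is how a continuous growth measurement collapses into a yes/no label, discarding information about partial effectiveness.

```python
# Hypothetical sketch of the binary labeling step. Each molecule gets a
# relative E. coli growth value (1.0 = growth unaffected); the actual
# cutoff and measurements in the study differ from these made-up numbers.
GROWTH_CUTOFF = 0.2  # assumed threshold: below it, a molecule "works"

screen_results = {
    "molecule_a": 0.05,  # strong growth inhibition
    "molecule_b": 0.85,  # little effect
    "molecule_c": 0.15,  # partial inhibition, but still under the cutoff
}

# Collapse the continuous measurements into yes/no labels. Note that
# molecule_a and molecule_c get the same label despite very different
# potency, which is exactly the lost-trend problem described above.
labels = {name: growth <= GROWTH_CUTOFF for name, growth in screen_results.items()}
```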

If that part of the experiments seems a bit underdone, it stands in sharp contrast to the work put into structuring the neural network. Normally, the individual functional units of a neural network perform a set of simple tasks: taking input from other "neurons," performing their own calculations, and communicating the results to the next neurons down the line. In this case, the neurons were set up to match a representation of a molecule, and each passed messages representing its chemistry to any neurons it was linked to via a chemical bond.

With sufficient message passing, the network's final output messages represented the entire molecule, and they were combined to create a vector representation of the molecule's chemistry. This representation was augmented with the output of a simpler algorithm that evaluated the chemistry of the molecule in question. The neural network then used these values to compare the molecule to what it had learned from its training.
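
The message-passing idea can be sketched without any machine-learning machinery. This is a bare-bones toy, not the authors' actual architecture: the bond graph, atom features, and update rule (plain summation over bonded neighbors, no learned weights) are all assumptions chosen to show the flow of information along chemical bonds and the final pooling into one vector per molecule.

```python
# Toy message passing over a molecular graph. Atoms are nodes, bonds are
# edges; each atom repeatedly sums the feature vectors of its bonded
# neighbors into its own state, then all atom states are pooled into a
# single vector describing the whole molecule.

# Hypothetical 3-atom molecule: atom 0 bonded to 1, atom 1 bonded to 2.
bonds = {0: [1], 1: [0, 2], 2: [1]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}  # made-up atom features

def message_pass(features, bonds, rounds=2):
    for _ in range(rounds):
        updated = {}
        for atom, feats in features.items():
            # Collect the messages arriving over each chemical bond.
            msgs = [features[neighbor] for neighbor in bonds[atom]]
            # Update this atom's state with the sum of its neighbors'.
            updated[atom] = [f + sum(m[i] for m in msgs)
                             for i, f in enumerate(feats)]
        features = updated
    return features

final = message_pass(features, bonds)
# Pool the per-atom states into one vector for the entire molecule.
molecule_vector = [sum(f[i] for f in final.values()) for i in range(2)]
```

After two rounds, every atom's state reflects chemistry two bonds away, which is why "sufficient" message passing lets the final vector stand in for the whole molecule.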

Just to be sure it was working, the authors compared its evaluations to those produced by a variety of other algorithms, including other neural networks trained using the same training data. All the promising-looking chemicals were also evaluated using an algorithm that predicts their likely toxicity in humans.

But does it work?

Apparently! When the researchers ran the network on a small library of chemicals, it identified 99 molecules that looked promising. Testing of these revealed that over half inhibited the growth of bacteria. And, perhaps more significantly, there was a nice correlation between the network's score for a molecule and its performance when tested against actual bacteria.

After a few more tests, the researchers tackled a big one: a selection from a giant database with over 100 million molecules (107,349,233 to be exact). Going through them took the system four days, which is a lot faster than the "probably never" that it would take to screen that number of molecules in real life. Not surprisingly, a number of molecules came out of that screen, and the authors describe a few tests of two of them. Both were broad spectrum, killing a large variety of bacterial species—one of them brought growing bacterial cultures to a halt in only four hours.
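
The virtual screen boils down to scoring every molecule in a library and keeping only the top-ranked candidates for lab testing. The sketch below is entirely illustrative: the molecule names, the stand-in scoring function, and the cutoff are placeholders, not anything from the study.

```python
# Hypothetical sketch of a virtual screening step: score each molecule
# with a trained model, then keep the best-ranked candidates for the lab.

def predicted_antibiotic_score(molecule):
    # Stand-in for the trained network: fixed, made-up scores.
    fake_scores = {"mol_1": 0.92, "mol_2": 0.31, "mol_3": 0.77}
    return fake_scores[molecule]

library = ["mol_1", "mol_2", "mol_3"]  # tiny stand-in for 107 million molecules
SCORE_CUTOFF = 0.5  # assumed threshold for "promising"

# Keep molecules above the cutoff, ranked best-first for follow-up testing.
candidates = sorted(
    (m for m in library if predicted_antibiotic_score(m) > SCORE_CUTOFF),
    key=predicted_antibiotic_score,
    reverse=True,
)
```

The appeal of the approach is that the expensive part (the model's forward pass) is still trivially cheap next to culturing bacteria, which is why 107 million molecules took days rather than "probably never."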

But most of the attention was given to a molecule they're calling halicin (according to a press release, in honor of 2001's AI, HAL 9000). Halicin was originally developed to target a human protein in the hopes it would help treat diabetes. Given that background, we shouldn't be surprised that halicin doesn't look anything like known antibiotics. (This was true for most of the molecules identified in the various screens.)

Halicin was effective against a wide variety of bacterial species (although not all), including known drug-resistant strains. The researchers also created wound infections that they successfully treated with halicin. It also cleared up C. diff infections, a common cause of drug-resistant digestive-tract problems. Critically, halicin also killed cells that weren't undergoing cell division—going quiet is a way that many bacteria manage to survive antibiotic treatments.

The researchers decided to find out how halicin worked by evolving a resistant strain. Amazingly, they didn't manage to do so, which is obviously a positive. So instead, they looked at the genes that were active in bacteria exposed to halicin. These provided a hint as to how halicin works: by interfering with the balance of protons within the cell. Bacteria normally expend energy to pump protons out of the cell, then harness the protons' return to drive the production of ATP and power the flagella that propel them through water. With halicin present, the protons make their way back inside the cell without doing anything useful.

Intelligent chemistry

This approach is obviously extremely promising. We're rapidly running short on antibiotics, and the methods we've used to produce new candidates haven't been coming up with anything actually new of late. Not only is this a different approach, but it contains none of the biases that would normally influence human-driven discovery. In addition, the same general approach could be taken with a huge variety of diseases, notably including cancer. And things should only improve, as researchers who are manually screening drug panels are regularly publishing new data that could be used for further training or redirecting the system to new disorders.

That said, it's important to emphasize that, even if this system continues to work, it's only a partial solution. Not all of the molecules in these databases are going to be free from toxicity or off-target effects, and some simply won't work. Then there's the issue of whether they can be produced with standard reaction techniques, and in a manner that's compatible with both industrial practices and health and safety standards.

But, to an extent, it's astonishing that such a limited dataset can produce such useful results. Hopefully, the authors have used a neural network that's auditable, so we can get some idea of what chemistry this one is keying on.

Cell, 2020. DOI: 10.1016/j.cell.2020.01.021 (About DOIs).