Here is a proposed rating of artificial-intelligence laboratories for drug discovery. It includes labs from both academia and industry, from startups to big pharma.

It’s a rough draft, very debatable (join the Telegram group for debating), and even editable (on Google docs).

The idea is to produce a simple indicator of lab performance. The goal is to give more visibility to research efforts, and to reduce the influence of brute-force marketing campaigns (like IBM Watson).

This rating only covers some parts of AI for drug discovery. It would be cool to generalize those ratings to other industries.

The rating scale is inspired by credit ratings, like those from the Standard & Poor’s or Moody’s agencies. They became widely known during the 2008 financial crisis. By the way, that crisis was partly caused by the overrating of subprime junk bonds. Let’s not repeat the same mistake when rating AI research. So here it is:

E: Empties. Labs with no visible innovation, besides empty marketing. They typically bluff their customers by pushing their secret-and-unique proprietary solution.

D: Dinosaurs. They sometimes had a glorious past, but now they lag behind the pack. Go more than 2 years without publishing, and your lab will likely decay into this category.

C: Copycats. They follow the pack from behind, and avoid taking too many risks. They have a fast-follower strategy. Their work still contributes to research reproducibility, and demonstrates their ability to keep up with fast-paced research.

CC and CC+: Creative Contributors. Their work is quite original, but remains exploratory. At the moment, it’s unclear how their contributions are useful.

CCC and CCC+: Core Community Contributors. They lead the pack. Their contributions have more promising potential than CC-rated research, but often lack rigor. Additional work is needed to demonstrate value.

B-grades: Brilliant contributions, meeting market expectations.

A-grades: Awesome contributions, exceeding market expectations.

NR: Not Rated.

In the full version of the document, there are 3 additional columns:

- evaluation reports
- works evaluated
- works remaining to be evaluated (these can raise the lab grade)

Overall, grades are not brilliant, reflecting my opinion that this industry is overrated. I explained this perception in a previous blog post.

To discuss classification categories, and specific ratings, join the Telegram group.