The Guardian wants to understand more about the types of below-the-line comments we receive from readers on the site with a view to fostering the best discussion possible and limiting abuse.

This analysis, which looks at the patterns of moderation on articles by different authors and on different topics, is a first dive into the data.

We analysed 70m comments recorded between 4 January 1999 and 2 March 2016 (only 22,000 of these were made before 2006). We worked with these comments in a Postgres database, a clone of the Guardian’s production system, running on Amazon Web Services (AWS).

Comments are typically only open for up to three days after an article has been published; once the thread is closed, comments are viewable but new comments can’t be added. We only included comments made on the Guardian’s website, not on Facebook or other social platforms.

There are two reasons why a comment made by a user may not appear on our website: it has been blocked, or it has been deleted.

If a comment is blocked, this is because it violates our community standards, a set of guidelines that aim to keep the conversation civil, constructive and legal. A small minority are blocked for legal reasons; the vast majority are blocked because the moderators regarded them as abusive or disruptive to some degree. If a comment is blocked, it is replaced with a message:

This comment was removed by a moderator because it didn’t abide by our community standards. Replies may also be deleted. For more detail see our FAQs.

Comments are deleted for one of two reasons: they are either replies to blocked comments or they are spam. Deleted comments are completely removed from the page. This FAQ goes into more detail on how the Guardian’s moderation team operates.

In our analysis we took blocked comments as an indicator of abuse and/or disruption. Although mistakes sometimes happen in decisions to block or not block, we felt the data set was large enough to give us confidence in the findings.

Our list of authors contains the approximately 12,000 individuals who have written at least two articles for the Guardian, counting only articles that are viewable online (3,000 articles before 1998, 2m after). This data was obtained by running a SQL query on our Redshift data warehouse, which contains the data in our Content API.
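As an illustration of that filter (the author names and article IDs below are invented, and the real query ran as SQL against Redshift rather than in Python), the at-least-two-articles cut amounts to a grouped count:

```python
from collections import Counter

# Invented sample of (article_id, author) rows standing in for the
# Redshift query results; the real data covers roughly 2m articles.
articles = [
    ("a1", "Alice"), ("a2", "Alice"), ("a3", "Bob"),
    ("a4", "Carol"), ("a5", "Carol"), ("a6", "Carol"),
]

# Count viewable articles per author, then keep authors with at least two.
counts = Counter(author for _, author in articles)
authors = sorted(a for a, n in counts.items() if n >= 2)
print(authors)  # ['Alice', 'Carol']
```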

To classify our journalists by gender we first used this process, which allowed us to assign genders to 11,098 names and left 1,268 uncoded. We then wrote a Perl script to pass the remaining names to this service. The few names still unclassified we went through manually. We stored these genders in a csv, and uploaded this to S3 on AWS.
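A minimal sketch of that fallback chain, assuming two lookup tables standing in for the first-pass process and the external service (the tables and names below are invented; the real pipeline used a published name list, a web service and manual checks):

```python
import csv
import io

# Invented miniature lookup tables: `primary` stands in for the first
# classification step, `service` for the fallback external service.
primary = {"alice": "female", "bob": "male"}
service = {"carol": "female"}

def classify(name):
    """Try the main list first, then the service; leave the rest for
    manual classification (marked here as 'unknown')."""
    key = name.lower()
    return primary.get(key) or service.get(key) or "unknown"

names = ["Alice", "Bob", "Carol", "Dmitri"]
rows = [(n, classify(n)) for n in names]

# Write the results out as csv, as was done before uploading to S3.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
```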

To perform our analysis we had to join together three data sources: our Postgres comments database, our article information in our Redshift database and our csv of author genders in S3. Ideally this data would all be in one place, and our data technology team are working towards this by creating a data lake using Presto, but at the time of our analysis this was not the case.

We needed a tool that would let us query very large amounts of data from multiple sources. We had been looking for a test project to try out Apache Spark for some time, and this seemed like a problem Spark should be well suited to. We wrote the code in Scala and deployed it to an Elastic MapReduce (EMR) cluster on AWS. The code reads in the data from the various sources, manipulates it and writes the results out to S3. Our source code is available here.
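The shape of the job, reduced to a toy example: the real implementation is Spark code in Scala, but the joins it performs can be sketched in plain Python (all records below are invented):

```python
# Invented miniature versions of the three sources: comments (Postgres),
# article metadata (Redshift) and author genders (the csv in S3).
comments = [
    {"comment_id": 1, "article_id": "a1", "blocked": True},
    {"comment_id": 2, "article_id": "a1", "blocked": False},
    {"comment_id": 3, "article_id": "a2", "blocked": False},
]
articles = {
    "a1": {"author": "Alice", "section": "Opinion"},
    "a2": {"author": "Bob", "section": "Sport"},
}
genders = {"Alice": "female", "Bob": "male"}

# Join each comment to its article, then the article's author to a gender,
# mirroring the two joins the Spark job performs.
joined = []
for c in comments:
    art = articles[c["article_id"]]
    joined.append({**c, "author": art["author"],
                   "gender": genders[art["author"]]})
```

In the real job the same joins run distributed across the EMR cluster rather than in memory; the sketch only shows the join keys involved.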

Our current work looks at blocked comment rates for various subsets of the data and other top line figures. In the future, we would like to explore the words used in the comments, using standard and bespoke natural language processing algorithms.
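A blocked-comment rate for a subset is simply a grouped ratio; for instance, by author gender (the figures below are invented, purely to show the calculation):

```python
from collections import defaultdict

# Invented sample: (author_gender, blocked) pairs for a handful of comments.
comments = [("female", True), ("female", False), ("female", False),
            ("male", False), ("male", False)]

# Tally total and blocked comments per group.
totals = defaultdict(int)
blocked = defaultdict(int)
for gender, is_blocked in comments:
    totals[gender] += 1
    blocked[gender] += is_blocked

# Blocked rate = blocked comments / total comments, per group.
rates = {g: blocked[g] / totals[g] for g in totals}
```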

If you have questions about the methodology used in the research, or the research itself, please do ask in the comments below, where Mahana Mansfield and Becky Gardiner will answer them.