In this penultimate post in this blog series on the resistance to the effective use of instrumentation, Oliver Grievson probes why a general lack of trust in instrumentation prevents instrumentation, control & automation from taking a step forward and treatment works from becoming more instrumentally based.

The modern instrument is a marvel in what it does and, as I have said before in this blog, it is the eyes and ears of the modern treatment works. If it is installed, maintained and calibrated properly, it is a very reliable part of any treatment process. However, in potable treatment you tend to see double or triple validation (two or three instruments running in parallel), and throughout the water industry operators waste thousands of man-hours a day on manual sampling. Add to this the external verification of water samples in laboratories, and the daily operating cost of checking the product, whether it be the quality of the water that we drink or the purity of the water we discharge, runs into the hundreds of thousands of pounds (dollars, euros, take your pick). Why?

Well, there is the argument that it depends upon the criticality of what you are doing: for drinking water it is important that the product is always safe to drink, and with wastewater it is important not to pollute the waters that we discharge to. In drinking water, that is why we have double (or triple) validation. When an operator takes a sample, he (or she) is capturing something that is correct only at the exact second the sample is taken; this is the principle of grab sampling. So the question that has to be asked is: why is there a need to continuously validate the data that an instrument gives with a chemical test that actually provides poorer-quality information, wasting hundreds of thousands of pounds globally per day in the process, all because of a lack of trust in what an instrument tells us?

Some comments that I have typically heard throughout my career in the water industry:

We (the utility) are regulated in that way. All our samples get sent to a certified laboratory and we sample manually to ensure that we get the correct results

Laboratory analysis is more accurate

The results that the instruments give us are only an indication

Our instruments are constantly breaking down, we can’t rely on them

Let us analyse these comments and see where the flaws in the arguments are.

The first comment, that the industry is regulated on laboratory analysis of water or wastewater samples, is certainly true, but again this shows a lack of trust in instrumentation. In the UK, the wastewater side of the water industry is sampled a certain number of times per year by grab sample to measure compliance with the conditions of the environmental permit, and additionally a certain number of times per year using a composite sampler located on the inlet and outlet of the wastewater treatment works, under the Urban Waste Water Treatment Directive (UWWTD). If we look at this realistically, a treatment works on a 12-sample-per-year frequency is monitored for about 3.5% of the time. The humble instrument, allowing for breakdowns, would monitor in the region of 95-100% of the time. Then there is the argument that laboratory analysis is more accurate and instrumentation is more of an indication; let us analyse this point.
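The coverage gap is easy to quantify. As a rough sketch (the 12-sample-per-year frequency and the 95-100% availability figure are from the text; treating each sample as representative of one day is my own simplifying assumption):

```python
# Rough comparison of monitoring coverage: periodic regulatory sampling
# versus a continuously running online instrument.

SAMPLES_PER_YEAR = 12             # regulatory sampling frequency
DAYS_REPRESENTED_PER_SAMPLE = 1   # assumption: one sample characterises one day
DAYS_PER_YEAR = 365

grab_coverage = SAMPLES_PER_YEAR * DAYS_REPRESENTED_PER_SAMPLE / DAYS_PER_YEAR

# An online instrument monitors continuously, minus downtime for
# breakdowns, maintenance and calibration.
instrument_coverage = 0.95

print(f"Sampling covers roughly {grab_coverage:.1%} of the year")
print(f"An online instrument covers roughly {instrument_coverage:.0%} or more")
```

On these assumptions the sampling regime sees the works for only a few percent of the year, which is the point the paragraph above is making.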

In general, laboratory analysis is more accurate; however, there are several points to consider. The first is whether or not the modern water industry really needs the accuracy of analysis that the modern-day laboratory gives. In general, the answer is no.

For operational decisions to be made, the data and information required needs to be available within a few minutes, to enable an operator to decide how to operate the treatment works. This is mostly done using field test kits or, in the case of some parameters, on-site laboratories. Water companies operate with a trigger-based system: if an operator sees that a sample he has taken is close to the trigger point, then action is taken. A sample reading 2.99 mg/L against a trigger point of 3 mg/L will be satisfactory from a regulatory point of view but not operationally. Basically, the importance lies in the speed of analysis, not particularly the accuracy.
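The trigger-based logic described above can be sketched as follows. The 3 mg/L trigger point and the 2.99 mg/L reading come from the text; the 10% operational margin is an illustrative assumption, not a stated industry figure:

```python
def assess_sample(reading_mg_l: float, trigger_mg_l: float,
                  margin_fraction: float = 0.10) -> str:
    """Classify a sample against a regulatory trigger point.

    A reading below the trigger is compliant on paper, but a reading
    within the operational margin of the trigger still warrants action.
    """
    if reading_mg_l >= trigger_mg_l:
        return "breach"
    if reading_mg_l >= trigger_mg_l * (1 - margin_fraction):
        return "take action"   # compliant for the regulator, not operationally
    return "ok"

# 2.99 mg/L against a 3 mg/L trigger: regulatorily satisfactory,
# but operationally it demands action.
print(assess_sample(2.99, 3.0))  # take action
```

The speed of getting the reading matters more than its last decimal place: either way the operator acts, which is the paragraph's point.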

However, this is not to say that the modern instrument is not accurate. It may not have the same accuracy as a test in the laboratory, but its accuracy is good enough for day-to-day operation, and arguably for regulatory purposes as well. This is usually proved when an instrument is commissioned and brought into service. In my past, when I commissioned wastewater treatment works, part of the reliability testing of an instrument was to compare it to on-site laboratory tests. The accuracy of the instruments had to be within an acceptable percentage; in practice, the instrumentation that I commissioned was all within 5% of the laboratory analysis, more than accurate enough.
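The commissioning check described above amounts to comparing paired instrument and laboratory readings against a tolerance. A minimal sketch (the 5% tolerance is from the text; the paired readings are invented for illustration):

```python
def within_tolerance(instrument: float, laboratory: float,
                     tolerance: float = 0.05) -> bool:
    """Check an instrument reading against the laboratory reference,
    expressed as a fraction of the laboratory value."""
    return abs(instrument - laboratory) <= tolerance * laboratory

# Paired (instrument, laboratory) readings taken during commissioning
# (illustrative values only):
paired = [(10.2, 10.0), (4.9, 5.0), (20.8, 20.0)]

results = [within_tolerance(inst, lab) for inst, lab in paired]
print(results)  # [True, True, True]
```

If every pair passes over the commissioning period, the instrument has demonstrated the accuracy needed for day-to-day operation.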

Laboratory analysis is not infallible either. Again, an example from my past: a laboratory analysed a sample of wastewater for mercury and reported 72 ug against a consent of 1 ug. The result was of course called into question and a spiking trial undertaken. The trial showed that the three laboratories the split samples were sent to yielded recoveries of between 50% and 200%. The point is that even laboratories can make mistakes.
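Spike recovery is a standard quality check: a known amount of analyte is added to a split sample, and the fraction the analysis recovers is computed, with a trustworthy method recovering close to 100%. A minimal sketch (the 50-200% range is from the text; the sample numbers are illustrative):

```python
def spike_recovery(measured_spiked: float, measured_unspiked: float,
                   spike_added: float) -> float:
    """Fraction of a known spike recovered by the analysis."""
    return (measured_spiked - measured_unspiked) / spike_added

# Illustrative results from three laboratories analysing split samples,
# each spiked with a known 10 ug of mercury:
labs = {"lab_a": spike_recovery(12.0, 2.0, 10.0),
        "lab_b": spike_recovery(7.0, 2.0, 10.0),
        "lab_c": spike_recovery(22.0, 2.0, 10.0)}
print({name: f"{r:.0%}" for name, r in labs.items()})
# {'lab_a': '100%', 'lab_b': '50%', 'lab_c': '200%'}
```

A spread of 50% to 200% between accredited laboratories on the same sample is exactly the kind of variability the anecdote above describes.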

The last point, about the reliability of instruments, has already been dealt with in this blog. Instruments do require maintenance, they do drift, and they do require calibration. However, if an instrument is maintained, serviced and calibrated, which should take less time than the routine site testing undertaken every day of the week (at least at larger wastewater treatment works), then the burden of sampling is removed from the operator, allowing the operator to operate rather than analyse samples.

In summary, there is a lack of trust in instrumentation mainly because of problems encountered in the past, whether due to bad installation or a lack of maintenance and calibration. This has led to the current state of the industry where the burden of sampling (at least in the UK, where the number of operators is relatively low) means the operator spends a large proportion of the day sampling rather than operating the treatment works that he (or she) runs. And the quality of this analysis is poor, simply because it provides only a snapshot of the situation.

With the correct instrumentation, properly installed, maintained and calibrated, there is no reason why it cannot be used for both operational and regulatory purposes, which will lead the industry to increase the efficiency of the way it operates.

In the last of this series of blogs I will summarise the whole series and look to the future of instrumentation and how we can make the best use of it to assist in day-to-day operations.