With Mozilla’s Telemetry system, we have a powerful way to collect measurements from clients while still complying with our rules of lean data collection and anonymization. Most measurements are collected in the form of histograms that are created on the client side and submitted to our Telemetry pipeline. However, the recent need to better understand user interaction with the browser has led to the introduction of a new measurement type, the scalar probe. This article compares the two measurement tools and provides guidance on how to submit and analyse scalars.

Why aren’t histogram probes enough for measuring?

Historically, we’ve been using histograms to collect, among other things, flags, labels and counts. At the time, this was a pragmatic decision: the existing histogram implementation was proven and a pipeline was already in place.

However, histograms are not well suited to data where one is interested in single data points, especially when serializing and sending the measurements. For example, reporting the number of unique pages that contain a CSP looks as follows:

The serialized CSP_DOCUMENTS_COUNT histogram:

```json
"CSP_DOCUMENTS_COUNT": {
  "range": [1, 2],
  "bucket_count": 3,
  "histogram_type": 4,
  "values": {
    "0": 5,
    "1": 0
  },
  "sum": 5
}
```

Having to use a histogram to report a single number makes this structure needlessly complex. With bug 1276195, we fixed this problem by landing a patch that allows Telemetry to collect scalar data without overloading our histogram mechanism. For comparison, here’s how the same data would be serialized using scalars:

The serialized csp_documents_count scalar:

```json
"csp_documents_count": 5
```

All scalar values are measured within a subsession, a chunk of the browsing session lifetime that we use as a reference for interpreting the measured data. We designed the scalar measurements to cover the basic data collection needs for a Firefox developer and we currently support different classes of scalars:

Numeric scalars, to perform simple counting.

Boolean scalars, to record the availability of a feature.

String scalars, to store values like OS version strings or graphic card device ids.

Keyed scalars, to record measurements of a single kind (numeric, boolean or string) indexed by a string key. This kind of scalar should only be used if the set of keys is not known beforehand; otherwise, prefer options like categorical histograms or splitting the measurements up into separate scalars.
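To make the semantics of these scalar classes concrete, here is a small stand-alone JavaScript sketch that models how a scalar store might accumulate values within a subsession. This is purely illustrative; it is not Firefox’s actual implementation, which is exposed through the `Services.telemetry` API.

```javascript
// Minimal in-memory model of Telemetry scalar semantics (illustrative only).
class ScalarStore {
  constructor() {
    this.scalars = new Map();      // plain scalars: name -> value
    this.keyedScalars = new Map(); // keyed scalars: name -> Map(key -> value)
  }

  // Numeric, boolean and string scalars: overwrite the stored value.
  set(name, value) {
    this.scalars.set(name, value);
  }

  // Numeric scalars: simple counting.
  add(name, count = 1) {
    this.scalars.set(name, (this.scalars.get(name) || 0) + count);
  }

  // Numeric scalars: only record the new value if it exceeds the current one.
  setMaximum(name, value) {
    const current = this.scalars.get(name);
    if (current === undefined || value > current) {
      this.scalars.set(name, value);
    }
  }

  // Keyed scalars: the same counting operation, indexed by a string key.
  keyedAdd(name, key, count = 1) {
    if (!this.keyedScalars.has(name)) {
      this.keyedScalars.set(name, new Map());
    }
    const keyed = this.keyedScalars.get(name);
    keyed.set(key, (keyed.get(key) || 0) + count);
  }
}

const store = new ScalarStore();
store.add("csp_documents_count", 5);
store.setMaximum("browser.engagement.max_concurrent_tab_count", 20);
store.setMaximum("browser.engagement.max_concurrent_tab_count", 12); // ignored: 12 < 20
store.keyedAdd("browser.engagement.navigation.urlbar", "typed");     // hypothetical keyed probe
```

The `setMaximum` semantics are what makes probes like the maximum concurrent tab count cheap to record: each call compares against the stored value instead of accumulating a full distribution.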

All the registered scalar probes, that is to say measurements that have been reviewed and approved, are described in a YAML registry file.
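For illustration, a registry entry for the tab-count scalar might look roughly like the following. The field names reflect the conventions of the scalar registry file, but the specific values below (bug number, expiry, email address) are placeholders, not the real entry:

```yaml
browser.engagement:
  max_concurrent_tab_count:
    bug_numbers:
      - 1234567          # placeholder bug number
    description: >
      The maximum number of tabs open at the same time during a subsession.
    expires: never       # or a specific Firefox version
    kind: uint           # one of: uint, boolean, string
    notification_emails:
      - example@mozilla.com
    release_channel_collection: opt-in
```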

We designed the scalars API so that the most common operations are easy and straightforward to perform in both C++ and JS. Setting a scalar to a new number if it’s greater than the currently stored one, as done here, is as easy as doing:

Recording a scalar value const scalarName = "browser.engagement.max_concurrent_tab_count"; Services.telemetry.scalarSetMaximum(scalarName, 3785); 1 2 const scalarName = "browser.engagement.max_concurrent_tab_count" ; Services . telemetry . scalarSetMaximum ( scalarName , 3785 ) ;

Once recorded, scalar measurements are piggybacked on the main ping and sent to our servers. However, having the measurements in place is of no use unless there’s a convenient way to query the reported data and perform analyses. In other words, “the simpler it is to get answers, the more questions will be asked” (Vitillo’s Blog).
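For context, this is roughly where scalar measurements end up inside a main ping payload. The fragment is illustrative; the exact layout can differ between Firefox versions:

```json
{
  "type": "main",
  "payload": {
    "processes": {
      "parent": {
        "scalars": {
          "browser.engagement.max_concurrent_tab_count": 20
        },
        "keyedScalars": {}
      }
    }
  }
}
```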

Having the data is not enough!

To enable convenient access to scalar data for analysis, we recently added support for these measurements to the Longitudinal view (more about it here), which is one of the data sources available within our installation of re:dash. This enables analysis of scalar probes with simple SQL queries. There are plans to make scalar data publicly available, in aggregated form, through telemetry.mozilla.org, as we do for histogram data.

To see how many users, within our sample, have more than 100 tabs open concurrently on Nightly, we can run a simple query on re:dash:

Querying scalar values using re:dash:

```sql
WITH samples AS
  (SELECT client_id,
          normalized_channel AS channel,
          mctc AS max_concurrent_tabs
   FROM longitudinal
   CROSS JOIN UNNEST(scalar_parent_browser_engagement_max_concurrent_tab_count) AS t (mctc)
   WHERE scalar_parent_browser_engagement_max_concurrent_tab_count IS NOT NULL
     AND settings IS NOT NULL
     AND normalized_channel = 'nightly')
SELECT approx_distinct(client_id)
FROM samples
WHERE max_concurrent_tabs.value > 100
```

More examples about querying the longitudinal dataset can be found on our wiki page, here.

For example, here’s what the distribution of maximum tabs opened concurrently looks like, on Nightly, by considering the latest ping for each client (query):

And here’s the same distribution, for the Beta population (query):

Quite different, aren’t they? Even so, please note that these plots were generated just as examples from a fraction of the available data and should not be used to make decisions. (Update 28/11/2016 – The plots were updated to include better bucket labels; added disclaimer)

Want to know more about scalar measurements, how to use them, or whether they could fit your specific use case? Get in touch with me or Georg Fritzsche, or simply ask your question in the #telemetry channel @ irc.mozilla.org.

A special thanks to Dominik Strohmeier and Georg Fritzsche for helping with this article.