This is actually a sequel to one of the very first blog posts I ever wrote for blueview. While a lot has changed since 2015, understanding the difference between Compatible Query Mode and Dynamic Query Mode is still crucial. With the addition of data sets running on the compute service (aka ‘Flint’), things are even more complicated. My goal for this article is to run back the clock, peel back the onion and give you a historical, technical and practical understanding of the three Cognos query modes. Buckle up folks, this is going to get wild.

There are many query engines hidden beneath the hood of your 2020 model Cognos

Compatible Query Mode

Compatible Query Mode is the query mode introduced in ReportNet (AKA, my junior year of college…). It is a 32-bit C++ query engine that runs on the Cognos application server as part of the BIBusTKServerMain process. CQM was the default query mode for new models created in Framework Manager up to Cognos 10.2.x, after which Dynamic Query Mode became the default. The majority of FM models I encounter were built in CQM and thus the majority of queries processed by Cognos are CQM. It remains a workhorse.

CQM resides within the Report Service

It is, however, an aging workhorse. Query speed is hampered by the limitations of a 32-bit process, particularly when it comes to RAM utilization. CQM does have a query cache, but it runs on a per-session, per-user basis and in my experience causes more problems than it’s worth. Furthermore, Cognos 11 features either don’t work with CQM at all (data modules) or must simulate DQM when using CQM-based models (dashboards). This almost always works, but of course fails whenever you need it most…

CQM works just fine and moving to DQM is not urgent; however, I strongly advise you to do all new Framework Manager modeling in DQM (or even better, build data modules) and start seriously considering what a migration might look like.

Dynamic Query Mode and the Query Service

Dynamic Query Mode is the query mode introduced in Cognos 10.1. It is a 64-bit Java query engine that runs as one or many java.exe processes on the Cognos application server and is managed by the query service. The terms ‘DQM’, ‘query service’ and ‘XQE’ all essentially refer to this Java process. All native Cognos Analytics features utilize DQM only – CQM queries execute in simulated DQM as mentioned above, and IBM documents the criteria necessary for this to work. DQM is both very powerful and very controversial among long-time Cognoids. Let’s take a look at why.

DQM features dramatically improved query performance

What’s great about DQM?

DQM has a ton going for it. As a 64-bit process it can handle vastly greater amounts of data before dumping to disk. If configured and modeled properly, it features a shared in-memory data and member cache that dramatically improves interactive query performance for all users on the Cognos platform. It even filters cached query results by applying your security rules at run time.
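As a toy illustration of that last point, a shared result cache filtered per user at request time might look something like this. The names and structure here are my own inventions for illustration – this is not the Cognos API:

```python
# Toy sketch of DQM's shared, security-filtered result cache.
# All names here are invented for illustration; this is not the Cognos API.
class SharedQueryCache:
    def __init__(self):
        self.results = {}  # query key -> full result rows, shared across users

    def get(self, query_key, run_query, user_filter):
        if query_key not in self.results:
            # Cache miss: execute once; every user shares this result set
            self.results[query_key] = run_query()
        # Apply the requesting user's security rule at run time
        return [row for row in self.results[query_key] if user_filter(row)]

cache = SharedQueryCache()
all_rows = lambda: [{"dept": "HR", "amt": 1}, {"dept": "IT", "amt": 2}]
hr_view = cache.get("q1", all_rows, lambda r: r["dept"] == "HR")  # one user's filter
it_view = cache.get("q1", all_rows, lambda r: r["dept"] == "IT")  # same cached rows, different filter
```

The point of the sketch: the expensive query runs once, the rows live in shared memory, and each user only ever sees the rows their security rules allow.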

DQM is tuned via Cognos Administration and by a large number of governors in Framework Manager to optimize join execution, aggregation and sorting. It handles extremely large data volumes, especially when combined with the basically defunct Dynamic Cubes feature. It even combines cached results with live SQL executed against a database on the fly. On its own. Like you don’t have to tell it to do that, it just does. Magic!

What’s not great about DQM?

For all the excellent attributes listed above, DQM unfortunately has some problems. It is very complex to understand, manage and tune, and it requires DMR models to fully utilize all of the caching features – consider that the DQM Redbook produced by IBM is 106 pages. A standalone tool called Query Analyzer exists solely to help you understand what the heck DQM is even doing as it plans and executes queries.

Migrating from CQM to DQM is often a complex project to evaluate and execute. I once provided a customer an LOE estimate of 8–32 weeks to complete a migration project. I have seen migrations take almost a year. I’ve seen things you people wouldn’t believe…

The purpose of this blog is not to push professional services but this is one instance where I think you really should contact PMsquare for help. But let’s say you have a ton of CQM models and don’t have the time to migrate them all. Is there a shortcut to high performance on large(ish) data volumes? Why yes, yes there is.

Data Sets and the Compute Service (aka ‘Flint’)

Data sets are an in-memory data processing engine first introduced in Cognos 11.0 and greatly enhanced in 11.1. Cognos 11.1 data sets run on the compute service, aka ‘Flint’. The compute service is a 64-bit Spark SQL process that is created and managed by the same query service that manages DQM, so it isn’t really an independent Cognos query mode. I will write a more in-depth article about data sets and Flint in the future, but let’s take a super quick look at how they work before we get into why they are amazing.

The compute service is a modern in-memory compute engine

How do data sets and the compute service work?

Data sets are not live connections to the underlying data like CQM or DQM – rather, they are a data extraction that is stored in a parquet file and loaded into the Cognos application server memory when needed for query processing. It works like this:

An end user creates a data set from an existing package, cube or data module OR uploads an Excel file (the process is the same!)

Cognos fetches the necessary data and loads it into an Apache Parquet file

The parquet file persists in the content store and is available to all application servers

When the query service on an application server requires a data set for query processing, it first checks to see if it has a local and up-to-date copy of the parquet file

If not, it fetches one

In either case, the parquet file is then loaded into the memory of the application server

Data is processed by the compute service using Spark SQL and results are returned to the query service

The query service receives results from the compute service and may perform additional processing if necessary

The results are then passed to the report service or batch report service for presentation
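The flow above can be sketched in a few lines of Python. This is a simplified simulation of steps 4 through 7, not actual Cognos code; the class names, and the plain-Python filter standing in for Spark SQL, are all assumptions made for illustration:

```python
# Simplified simulation of the data set query flow; all names are hypothetical.
class ContentStore:
    """Stands in for the content store holding published parquet files."""
    def __init__(self):
        self.parquet = {}  # data set id -> (version, rows)

    def publish(self, ds_id, version, rows):
        self.parquet[ds_id] = (version, rows)

    def fetch(self, ds_id):
        return self.parquet[ds_id]

class QueryService:
    """Checks for a local, up-to-date copy before fetching (steps 4-6)."""
    def __init__(self, store):
        self.store = store
        self.local = {}  # data set id -> (version, rows) held in app server memory

    def run(self, ds_id, predicate):
        latest_version, _ = self.store.parquet[ds_id]
        cached = self.local.get(ds_id)
        if cached is None or cached[0] != latest_version:
            self.local[ds_id] = self.store.fetch(ds_id)  # refresh the local copy
        _, rows = self.local[ds_id]
        # Step 7: the compute service would run Spark SQL here;
        # a plain-Python filter stands in for it.
        return [r for r in rows if predicate(r)]

store = ContentStore()
store.publish("sales", version=1, rows=[{"region": "east", "amt": 10},
                                        {"region": "west", "amt": 5}])
qs = QueryService(store)
east_sales = qs.run("sales", lambda r: r["region"] == "east")
```

The key design point the sketch captures is the staleness check: the query service only goes back to the content store when its in-memory copy is missing or out of date, so repeated queries against the same data set never touch the source database at all.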

What makes data sets great?

They’re easy to build, easy to join and manipulate in data modules, easy to schedule, and the performance is great. Once loaded into memory, a data set is shared between users on the same application server. I have done multiple projects where I accomplished weeks’ or even months’ worth of ETL by getting fancy with data sets and data modules. No wonder they are my favorite of the Cognos query modes.

What’s even better is how data sets provide a radically shorter path to high performance, DQM and Spark based queries for your existing CQM models without having to commit to a full conversion. You simply use a CQM FM package as the basis for a data set, then utilize that data set as a source in a data module. Once complete, you’ve unlocked the full set of incredible data module and dashboard capabilities like forecasting without having to do an 8 to 32 week project.

Which Cognos Query Mode is right for me?

Okay, that was a ton of information, some of it pretty technical. Which of the Cognos query modes should you choose, and how do you learn more?

TLDR

Immediately cease all development of new Framework Manager models using CQM

Consider migrating existing CQM Framework Manager models to DQM models or to data modules (PMsquare can help with this)

Data sets are your ‘get out of CQM free’ card; they vastly improve the performance of most CQM queries and simplify presentation for end users
