Opening Statement

Radiation therapy is a technology‐driven field. The quality, safety and, very importantly, the outcomes of radiation therapy rely heavily on the ability to deliver the radiation dose to the intended location. Over several decades, significant research and development have gone into delivery equipment (linacs, specialized devices such as multi‐leaf collimators (MLCs), specialized machines), delivery techniques (intensity modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT), stereotactic body radiation therapy (SBRT)), and targeting technologies (cone‐beam computed tomography (CBCT), planar imaging, surface imaging, radio frequency (RF)‐based technologies) to enhance the effectiveness of radiation therapy. One critical area that has not kept pace with these advancements is real‐time dose delivery monitoring. In the days of simple 3D conformal and Co‐60 treatments, the dual‐chamber transmission detector, called the monitor unit (MU) chamber, was considered sufficient because there was a direct relationship between dose and MUs.1 For IMRT/VMAT plans, however, there is no direct correspondence between dose and MUs: the delivered dose depends on the modulation pattern, and these plans use far more MUs than 3D conformal plans.

Pretreatment IMRT quality assurance (QA) was introduced because of our inability to measure the dose delivered in real time.2 Vendors developed numerous ways to verify pretreatment delivery to a phantom, primarily because the technology to verify dose delivery by direct measurement for every beam, every fraction, and every patient was not practical. We therefore began assuming that a pretreatment dose measurement, along with machine QA and weekly chart review, was a sufficient patient QA paradigm. In my mind, this unquestioned assumption has contributed to some severe mistreatments, as reported in New York Times articles published in 2010.3, 4

Real‐time dose delivery measurement with electronic portal imaging device (EPID)‐based exit dosimetry has been reported by several groups but has not been adopted, for several reasons.5 The primary reason is that the detector itself is not ideal for dose measurement, especially if one needs to determine dose discrepancies during each patient's treatment. Furthermore, identifying the source of a discrepancy, that is, distinguishing a dose delivery error from a setup error or other calculation errors, is very difficult. The common‐sense approach for real‐time dose measurement technologies should be to measure each type of discrepancy separately and then combine them to determine what actually occurred, eliminating ghost errors, false positives, and false negatives.

Currently, with the development of transmission detectors, it is possible to detect all delivery‐related errors, while setup and anatomical changes can be determined by online volumetric imaging, providing dose‐based quality assurance for every beam, every fraction, and every patient.6 Common sense should trump common practice, yet, as a field, we remain stuck in the pretreatment IMRT QA paradigm, which has been shown to be among the least effective means of mitigating errors but is still our standard practice.7

Finally, I would like to conclude with a statement from The New York Times article: “Identifying radiation errors can be difficult. Organ damage and radiation‐induced cancer might not surface for years or decades, while underdosing is difficult to detect as there is no injury. For these reasons, radiation accidents seldom result in lawsuits, a barometer of potential problems in the industry.”8 Since technologies such as transmission detectors are available and enhance the safety of our field, I have no doubt they will become a primary tool in radiation therapy departments because they are safe, effective, and make sense.