The last clinic I worked for had a big thing about putting allergy information all over a patient's chart. (Like, literally, right under the patient's name on the tag of the chart.)



But I always wondered about that. Allergies are a very specific kind of immune disorder. They're hardly the only sort of bad reaction one can have to things, including medications. Every time I'm asked by a medical provider, "do you have any allergies?", and I respond, "well, there are two medications which I should probably never take again, but I wasn't allergic to them", they get this look.



I don't know how – or rather, even whether – any given EHR supports recording adverse reactions to medications which aren't allergies. I know that our paper charting at Lagoda was terrible at supporting information like "Patient could not tolerate Wellbutrin due to precipitous decrease in WBC" or whatever.



This functionality seems so very basic to me. And yet, I really don't get the impression it's well handled.



Personally, I have a slightly weird but not all that weird situation where my chart should flag a certain class of medications, steroids, not as "do not administer" but as "there is a special issue with this; confer with patient and other treaters": I have to ration my exposure to them, and a specialist is in charge of my prescription for a steroid. For a kind of trivial medical problem which came up during my annual physical, my prior PCP prescribed me a medication that I didn't realize was a steroid; fortunately I caught it when I read the package insert, but I was kinda pissed. I prefer all my steroid rxs go through my relevant specialist – or at least me – so we can have the "is this warranted? is there a non-steroidal alternative?" conversation.



I have no idea if Epic supports this sort of flagging.
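For what it's worth, the distinction I'm wishing charts could capture is a tiny data model. Here's a sketch in Python – hypothetical names throughout, not any real EHR's schema – of a med-flag record that separates *what kind of reaction it was* from *what a prescriber should do about it*. (The HL7 FHIR standard's AllergyIntolerance resource does, for the record, distinguish an allergy from an intolerance; whether any given EHR surfaces that distinction is another question.)

```python
from dataclasses import dataclass
from enum import Enum

class ReactionType(Enum):
    ALLERGY = "allergy"          # immune-mediated
    INTOLERANCE = "intolerance"  # any other adverse reaction

class Action(Enum):
    DO_NOT_ADMINISTER = "do not administer"
    CONFER_FIRST = "special issue: confer with patient and other treaters"

@dataclass
class MedicationFlag:
    medication: str
    reaction_type: ReactionType
    action: Action
    note: str  # free text: what happened, and why the flag exists

# Illustrative chart entries, using the examples from this post
chart_flags = [
    MedicationFlag("Wellbutrin", ReactionType.INTOLERANCE,
                   Action.DO_NOT_ADMINISTER,
                   "Discontinued: precipitous decrease in WBC."),
    MedicationFlag("prednisone", ReactionType.INTOLERANCE,
                   Action.CONFER_FIRST,
                   "Patient rations steroid exposure; rx goes through specialist."),
]

def alerts_for(med: str):
    """What should pop up when a prescriber orders `med`."""
    return [f"{f.action.value.upper()}: {f.note}"
            for f in chart_flags if f.medication == med]
```

The point of the `action` field is exactly the steroid case: a flag can mean "talk to me first" rather than "never", instead of forcing everything into a single allergy list.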

At most hospitals, Wexelman continues, it falls to the lowest person on the totem pole to fill out the death certificate — she did her share as a resident in New York City. Not only had she never been taught how to complete the form, the electronic system, she says, didn’t even list every possible cause of death. In one case, Wexelman recalls, a patient’s immediate cause of death was sepsis, but the system required her to enter what had caused the sepsis. Unsure of the underlying cause, she made her best guess.



“Most doctors don’t wake up in the morning and think, ‘I want to lie on a death certificate today’ … Everyone’s trying to be as accurate as possible,” Wexelman says. “Many times we don’t know why a patient died, but the system sort of forces you to put something, and that may not be the most accurate diagnosis.”

She also says that determining the underlying cause of death for a certificate can be especially difficult if a patient dies suddenly or at a hospital where doctors are unfamiliar with his or her medical history.

Still, Wexelman’s experience spurred her to run a study on what NYC resident physicians thought of the death certification reporting system, and the result was shocking: Nearly half of the residents said they had knowingly reported an inaccurate cause of death. Wexelman says this stems either from being forced to enter a cause of death when the resident simply didn’t know the right answer or entering a cause of death that was the best of the limited choices on the form but didn’t exactly match their understanding of why the person had died.
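The forced field Wexelman describes is, at bottom, a data-model choice: the certification system's schema treats the underlying cause as required. Here is a sketch of the alternative – hypothetical names, Python purely for illustration, not any real certification system's schema – in which "not known" is an admissible value:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CauseOfDeath:
    immediate_cause: str             # e.g. "sepsis"
    underlying_cause: Optional[str]  # None = genuinely not known

def certify(immediate: str, underlying: Optional[str] = None) -> CauseOfDeath:
    # A forced-choice system would reject underlying=None here,
    # pushing the certifier toward recording a best guess as fact.
    return CauseOfDeath(immediate, underlying)

record = certify("sepsis")  # admissible: cause known only to this depth
```

The design question is one line: whether `underlying_cause` is optional. Making it required doesn't create knowledge; it just launders a guess into a record.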








0.

In the last month, two different hospitals both managed to almost kill D by medication errors.

At least, I'm insisting they were errors, though errors such as these are, I gather, not what is usually considered a "medication error". She was not administered too much; she was not administered someone else's meds; she wasn't administered something other than what was prescribed.

In both cases, she was prescribed medications that she had had prior bad reactions to.

In one case, she was prescribed a medication known to be life-threatening to her by the same hospital as handled the medical crisis precipitated by the use of the medication the first time around – the very hospital which had discovered which medication she was reacting to.

In the other case, it's possible that they didn't have on record that the medication they prescribed her was known to be dangerous to her. It's possible they were working from a badly out-of-date record.

In both cases, the only reason the error was caught was that family noticed something was wrong.

In both cases, the medical professionals caring for her had no idea that the medications were dangerous to her. In both cases, the medications were prescribed in ignorance of the information that those medications had caused past medical problems for her.

I think we probably all agree, lay people and medical professionals alike, that this is something that should never happen. If a hospital "knows" – collectively, institutionally – that a specific medicine is idiosyncratically dangerous to a patient, it should not "forget" that knowledge. And if one hospital is in possession of the knowledge that a medication is idiosyncratically a danger to a patient, it should pass that information on; and hospitals should generally request that information from patients' other, more regular, treaters.

One might ask, "How could this possibly have happened?"
But I think I know how it happened.

D, like pretty much everybody who makes it over the age of ninety, has tried a lot of different medications across her life. Some of them didn't work out so hot for her. So her doctors stopped her prescriptions for them.

But they didn't indicate in the record why they had stopped them.

Why didn't they?

Apparently, the "electronic health record" (EHR) system in use by both hospitals and her PCP's office, which happens to be Epic, gives them no good way to do so. (To be clear, I don't know if this is a problem intrinsic to Epic, or whether this is a fault in the organization-specific deployments of Epic. I suppose it's possible that all the involved healthcare professionals who affirmed this are poorly trained, ignorant of Epic's features and affordances, or just dumb. But I doubt it.) One way or another, two vast health systems are using an instantiation of an EHR system that has no effective place to record why a medication was discontinued. Oh, it can be put in the body of a visit note, of course, but that's sort of like saying it can be buried in your backyard.

So what happened, in both cases, was that while D was in the hospital for one condition, she started having symptoms of a different one she had been treated for in the past. For instance, while she was in the hospital for a psychiatric reason, her ankles started swelling up. So the treater looked in her medical record, saw that she had been on an edema medication in the past, Lasix, and assumed that that was what her PCP treats her with when she has edema. They assumed, or so I gather, that Lasix was the medication of choice for her when she had edema.

And there was nothing in the med list of the chart – and nothing that popped up when it was ordered – that said that the reason the Lasix had been discontinued the last time was that it was suspected of causing the hyponatremia that almost killed her last year.
There was nothing in there that further explained that the evidence for that suspicion included that D had become hyponatremic after being put on Lasix, and that her hyponatremia started resolving when she was taken off it.

And so a physician at the very same hospital where it was discovered just last year that Lasix precipitated a hyponatremia crisis in this patient put the patient back on Lasix.

Precipitating hyponatremia again.

When I posted here that this had happened, someone reasonably asked me, "What kind of notation should the hospital have done in records last time about the Lasix, to prevent it being possible again? (I'm curious for my own future knowledge.)"

I responded, "You know, I really don't know" – and went on at some length (the musing quoted at the top of this post) about how records systems handle, or fail to handle, adverse reactions that aren't allergies.

I decided to go find out. And I did. I found out just what the "right" way was, according to a bunch of people familiar with Epic, to record in Epic EHR that a medication causes a patient an adverse reaction.

At the first hospital, after we alerted them to the problem with the Lasix and that she was becoming hyponatremic again, the psychiatric prescriber in charge of her care volunteered that, to keep this from happening again, the Lasix would be entered in her record as an allergy. And someone else, who works in healthcare IT, reached out to me to say, "The way you do this in Epic is to list it as an allergy."

So I fired up the MyChart website for my PCP's practice – a third, huge healthcare organization – and logged into my Epic record. And sure enough, the two medications I've had bad reactions to were listed under "Allergies".

I don't have allergies to either of them. D doesn't have an allergy to Lasix or to the other medication that almost killed her.

But, since there is apparently no other way to alert prescribers using Epic that a patient has a dangerous atypical reaction to a medication, that's the only option.

That would explain why neither hospital had a meaningful record of D's adverse reactions to those medications: they weren't allergies, so nobody had recorded them as such.
Until the second time each almost killed her; then they got listed as allergies, anyways.

1.

Epic is the market leader in EHRs. Their website brags that "more than 250 million patients have a current electronic record in Epic."

Maybe your medical record is stored in an Epic EHR system.

2.

There is a class of problem in healthcare in the US – or more precisely, in healthcare technology – that lay people generally aren't aware of.

To start with, you need to know that the US federal government has been railroading healthcare providers into adopting electronic medical records. Large healthcare institutions are most vulnerable to this pressure, as they are most dependent on federal money (Medicaid, Medicare, etc.), but smaller providers are being pushed to the wall by the financial pressures the government has brought to bear. The federal government can't order healthcare providers to do anything, but it can basically commit extortion. "Nice Medicaid caseload. Be a real pity if something happened to it."

The federal government railroaded healthcare institutions into adopting technologies that are, frankly, not at all ready for prime time. For a lot of the problems with healthcare technology that make lay people go, "Why on earth is it like this?", the real underlying cause is that the technology is half-baked, kicked out of the oven way, way before it was ready.

This causes a lot of big, obvious problems, of the sorts that are unambiguously bugs: faults in coding that cause systems to crash, or die, or lose data, or fail to interoperate. But it has also been the enabling context in which another sort of software fault has proliferated. They're not bugs, in that the software is not failing to work as designed. Let's call them requirements failures. They're not failures to meet requirements. They're failures to realize or admit that something was a requirement. That is, they're failures to capture all the requirements of the system – or to promulgate them as requirements to the developers.
They're failures to capture specific requirements – "Oh, we didn't know that it would have to be able to do that" – or failures to include the actual requirements discovered in the official requirements – "Oh, we heard that the users wanted that, but we decided not to do it." Possibly because the people for whom it was a requirement weren't consulted; or, if they were consulted, maybe their concerns weren't considered particularly important.

And this is how we have wound up with EHRs and other systems that medical personnel are supposed to use which force them to enter information that is wrong – through forced choices or otherwise – or to leave information out entirely, because there's quite simply no way to enter the correct information.

3.

Some years ago, I got an earful from a psychiatrist I worked with at a clinic. He had just started a part-time gig at another clinic which had an EHR. That EHR required that when a psychiatrist met with a patient, each time he had to log patient suicidality, if any, in the chart – which in and of itself is fine. Unfortunately, the user interface was something like "Suicidal: yes / no", with a forced choice between "yes" and "no".

That's fine if the answer is "yes". The problem is that "no" is not a legitimate answer. Not if you're a responsible behavioral health professional who prefers not to get sued for malpractice. We're trained to not ever claim to know a patient isn't suicidal. How could we know such a thing? Contrary to what some people believe, and what administrators and insurers and opposing counsel would like to hold us responsible for, mental health professionals can't actually see into people's heads and know what they're thinking. All we know is what the patient tells us and what we observe. So at best we can truthfully record that "patient denies suicidal ideation". That's the technically correct way to express it.

So, really, that pair of radio buttons needed to read "yes" and "denies", which is what the psychiatrist telling me about this was pleading and raging for it to be changed to.
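In data-modeling terms, the fix he was begging for is refusing to use a boolean where the domain has no knowable "false". A sketch (illustrative names, Python purely as a modeling notation, not any actual EHR's interface):

```python
from enum import Enum

class SuicidalIdeation(Enum):
    # There is deliberately no FALSE / "not suicidal" member:
    # that state is not knowable by the clinician.
    ENDORSED = "patient endorses suicidal ideation"
    DENIED = "patient denies suicidal ideation"

def chart_entry(finding: SuicidalIdeation) -> str:
    # The clinician signs this sentence; the enum guarantees the chart
    # never asserts knowledge the clinician cannot have.
    return f"SI: {finding.value}"
```

The whole fix is the type: replace `bool` with a two-member enum whose values are claims a clinician can truthfully make.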
Or, if a stickler for parallelism, "endorses" and "denies". Or it could be phrased, "SI endorsed: yes / no". But anything that says, "Suicidal ideation: no", is right out.

This may sound like a pretty trivial thing to you; I am guessing it seemed like a pretty trivial thing to the people who designed it. It's not. Forcing a BH clinician to (virtually) sign their name and professional reputation to a (digital) chart that says "No suicidal ideation" sends a thrill of existential terror through us, and frustrated rage. We are held legally responsible for our patient records, even when EHRs – adopted by the people who employ us! – force us to express things in them that we would never have voluntarily written. To my knowledge, you can't actually successfully explain to a judge or a jury, "That isn't what I meant, that's just the only way the EHR lets me enter that information. I know that's not what it says, but I know what it means." There is no "the EHR made me" get-out-of-civil-suit-free card.

I don't have any hands-on experience of EHRs as a clinician, myself, but I've run into the same problem interacting with insurance company computer systems. For instance, there was this one insurance company website with a system for healthcare providers – such as myself – to file "Requests for Prior Authorization". This is the health insurance thing where, say, a psychotherapist has to ask permission from an insurance company every 12 sessions to keep treating the patient. (Yes, this is a thing. It is industry standard.) (I don't remember which company any more, but it was a Medicaid vendor, one of Tufts, Neighborhood (Beacon), or MassHealth (MBHP), because those were the only three I remember ever doing prior auths (PAs) for.)

The behavioral health prior auth form asked me what other medical conditions the patient had, besides their psychiatric conditions. Problem is, I'm not a doctor, and frankly have no business handling non-BH diagnoses.
So, I might know, because the patient told me, that they have "diabetes", but I have no idea which diabetes, and I certainly don't know which of the several (dozen?) autocompletes for "diabetes" in their system applies.

And, while I'm supposed to reach out to the patient's PCP, maybe the patient has one, and maybe the PCP replied with a report about the patient's current health status, and maybe the report uses ICD-10 codes or otherwise has enough specificity that I can find the right condition on that drop-down list. And maybe the person who is responsible for treating – and diagnosing – whatever the specific condition is, is the PCP, or has troubled to update the PCP. And maybe the diagnosing or treating physician knows what the condition's code is, or maybe they have a coding specialist do that for them and have no clue.

Diabetes is the easy case. A hard one is "cancer". "Lung cancer? Which lung cancer?"

The odds of my getting that information – especially just specifically what the treating physician actually coded it as – are actually really low. So I'm stuck with the awareness that the patient has diabetes, but I can't enter just "diabetes" on the form, so I leave it blank. I literally represent to the insurance company that, as far as I know, the patient doesn't have any general medical conditions that might impact their mental health care.

Which is just hilarious in light of the existence of diabetes-induced depression.

Boy, was I simultaneously nonplussed and validated to find my own PCP frowning at the available diagnostic options in a drop-down list on his Epic system, trying to figure out how to characterize one of my presenting problems. "It's not just me!" I may have chortled.

Another example has surfaced recently in death certificates, in the Oct 18, 2018, article at Ozy.com, "A Whopping 1 in 3 Death Certificates List Wrong Cause of Death", by Olivia Miltner (emphasis mine); the relevant passages are excerpted at the top of this post.

But that's anecdote.
Wexelman decided to get some data; here's the study she did. "Of respondents who indicated they reported an inaccurate cause, 76.8% said the system would not accept the correct cause".

4.

I would propose that this sort of fault is rife in all digital systems that medical professionals are supposed to use. Obviously, I'm not in a position to check for myself – it's not like I've made a systematic study of the problem, or have had a lot of experience using different EHRs and other systems for handling medical data. I can't speak authoritatively here.

But I've heard such stories about every single system I've heard of.

So I'm figuring it's pretty much universal. The fact that there are a bunch of reasons it might well be true is further suggestive.

For instance, it's a known problem in the design of digital systems – in all sectors – that the end users of systems are often low-status within the organizations that commission such systems, and as such (in the West, at least; this is arguably something Japan does differently) their contributions are scorned and their input not solicited. I know it's strange to think of doctors as being low-status in any social system, but plenty of hospitals and third-party payers do their damnedest.

For another, there's the fact that in healthcare in the US, the feds railroaded healthcare providers into adopting systems without much concern as to whether those systems were up to what we ask of them; the question of whether they functioned well, or at all, wasn't much of a consideration for anybody prior to adoption.

And then there's a third thing that I know as a medical professional.

This type of fault? It's not just a computer thing.

It's rife in the paper record-keeping formats that medical professionals have been using from before the computers showed up.

Let me tell you about therapy notes. The occupational culture of psychotherapists has, as part of its psychotherapist self-concept, the understanding that therapists hate doing therapy notes.
Therapists, it's widely understood by therapists, hate "doing notes". (That's the idiom; I've literally never heard a therapist refer to "writing notes".) Therapists generally understand that therapists find writing up the documentation of their treatment sessions the worst.

When you interview for a job as a therapist, pretty much the one thing they want to know most is whether you will actually do your notes in a timely way. Notes, see, are predicates for insurance company payment. No note, no money.

This loathing of "doing notes" is not attributable to computerized record systems, because, I promise you, therapists have been hating writing notes for longer than they've been using computers to do so. The therapists at the first clinic I worked for hated doing notes, and when I started in 2009, we were doing them in pen on paper forms.

I've not heard a lot of cogent explanation of why therapists hate doing notes (nor of why it should be so understood that they do). Indeed, I've not heard anyone else attempt to explain it. It is understood – or presented to be understood – as self-evident. I think a lot of therapists have a lot of shame about their difficulties with and antipathy to doing notes. I think a lot of therapists blame themselves for being "bad at" doing notes. And that's their explanation of why they hate it: I'm just bad at it.

I hate notes too, but I have another explanation.

At this point in time, the set of things psychotherapists are required (by insurers and other third-party payers, by legal statute, by the exigencies of protecting from malpractice) to document in their notes and the set of things psychotherapists want to be able to use their notes to document have approximately no intersection.
This reality has been implicitly acknowledged by HIPAA, which provides for psychotherapists optionally keeping two different sets of notes: "progress notes", which contain all the stuff other parties want, and to which HIPAA applies (HIPAA is about not having confidentiality), and "psychotherapy notes", which contain all the stuff that therapists want, and which are safe from HIPAA and actually confidential.

The default note format – enforced by some BH EHRs – for psychotherapists is the "SOAP" note. The SOAP note format bears basically no resemblance to how most psychotherapists think about their cases. So writing a SOAP note is, at the very least, an exercise in conceptual translation. You have to take your understanding of what happened in the session with the patient, and then translate it into this entirely other – and utterly therapeutically useless – conceptual model. To call it a translation is perhaps misleading, because the therapist has to set aside their clinical understanding of the case, and instead note a bunch of trivia. It is hugely effortful, and infuriatingly stupid.

No, wait, it's worse than that. The SOAP format doesn't even capture the sort of information that insurers and other third-party payers really want from you if you're a therapist (though they still demand those things), and doesn't capture a bunch of the things that your malpractice lawyer is really going to have wanted you to capture, in the unfortunate case that there is ever a malpractice lawyer who is in any sense yours. The SOAP format is almost entirely about the state of the patient, and has no place where what the therapist did for the patient belongs.
I assure you, health insurance companies are passionately interested in the answer to the question, "Say, what exactly have you done for the patient that we're paying you to treat?"; that's the sort of thing you want to capture in case your notes ever become at issue in a court, along with answers to questions like, "And what did you do about it when your patient told you they were suicidal?" There is no place for that information in a SOAP note, so every exercise in filling out a SOAP note for psychotherapy entails fighting the format itself, to get the critical information into it.

So why are psychotherapists using SOAP notes? I gather that psychotherapy adopted the SOAP note format because that's what real doctors use. But the SOAP note wasn't even appropriate to the work of the medical professionals therapists stole it from. The SOAP format is really a format for continuous assessment, not treatment. SOAP is terrible as a treatment record. It isn't a treatment record. It's like a stack of little brief short-term treatment plans. There is literally no place on it to record treatment that was done – was it completed, did it follow the plan that was previously made, how much did the patient get, etc. The SOAP format has always been terrible for ongoing therapies. It's just that it's less terrible than having no format at all, or so we're told by those who were there then. The SOAP format was developed back in the days of only paper charts to be – to use the technical term from computer science – a hack for handling some of the frustrating exigencies of keeping medical records on paper.

There's a joke about a guy who asked his wife why she always cut the corner off the roast. "I don't know, that's just how mom did it." He asks his mother-in-law. "I don't know why, that's how my mother did it."
At grandmother-in-law's, he observes her putting the raw roast into a pot that's a little too small, so she whacks off the corner so it will fit.

This joke is usually taken to mean that people should evaluate whether what was done in the past actually is appropriate to their present circumstances – ha ha, wife and MiL didn't actually need to cut the corners off the roast, their pots were big enough – but, okay, also? Grandma's pot was never big enough. Not only shouldn't they have been cutting off the corners of their roasts, somebody should have bought Grandma a bigger pot! WHY DOES ANYONE STILL USE SOAP NOTES?! IT WAS BAD ENOUGH THAT GRANDMA HAD TO PUT HER MEDICAL RECORDS INTO SOAP FORMAT ON PAPER!

I moved into the healthcare sector from technology, and I have a special place in my heart for data modeling, and I have to tell you: it's been something like fifteen years now of continuous inward screaming at the data models I find implicitly lurking at every turn in healthcare.

5.

We need a name for this antipattern, and I haven't found one anybody else has come up with, so I propose: the procrustean epistemology.

(Procrustes, you will recall from Greek myth, was the guy who invited passers-by to spend the night at his house, and then "fit" them to his iron bed by stretching them or chopping off excessive bits, like their feet. This was a fatal process.)

A procrustean epistemology – or procrusteanEpistemology, if you're a GoF cultist who likes your antiPatterns camelCase – is a data model latent in an implementation of a technology (which can be a paper record-keeping format!) which is at odds with users' data models, and which forces users to either enter wrong data or leave data out entirely, rendering the data captured by the system a misrepresentation of the user's understanding.
In particular, it's when a latent data model is at odds with expert practitioners' data models, and the enforcement by an implementation of the naïve data model on the expert practitioners discards or misrepresents the experts' information.

Why am I calling these "epistemologies"? Because they're these whole little theories of knowledge, and of who should have it, and how something should be known, and what truths are admissible as knowledge. When a death reporting system will not let a doctor enter simply "sepsis", and requires, if one is to report sepsis, a cause for the sepsis, the system instantiates the principle that there is no such thing as knowing solely that a patient died of sepsis; that a doctor either knows the cause of the sepsis or doesn't know that the cause was sepsis; that sepsis, by itself, is not a valid cause. These are epistemological ideas: ideas about what constitutes a cause, and what we can know, and what is valid knowledge.

Perhaps I am particularly sensitive to this because I am a psychotherapist with an interest in the history of psychiatry. When a psychotherapist uses a system (whether digital or social) that requires they identify the patient's presenting problem as something codeable in the DSM, and has no affordance for recording other presenting problems, that system is instantiating the epistemological contention that the disease entities in the DSM are more valid, real, and/or knowable ways of conceptualizing psychopathology than alternatives. This may seem a trivial fact, until you recall how DSM-III was an epistemological coup d'état, whereby the psychoanalytic (neo-Freudian) formulations of mental illness that populated DSMs I and II were purged in favor of a new, neo-Kraepelinian system of nosology, in a clinical-political move to wrest control of the profession of psychiatry away from the Freudians. And it worked, too.
In doing this, they didn't just change what disorders were listed in the DSM; they expunged etiologies – thus expunging etiology as a characteristic of mental disorder and as something one needed to know to diagnose a mental disorder – and added Feighner criteria, establishing a different standard of how one was to know that a patient had a disorder. It was epistemological warfare.

Consequently, the enforcing of the DSM (any DSM) is the enforcing of an epistemology of psychiatric nosology: a set of beliefs about how mental disorder is to be known, thought about, understood, described, and detected. A set of beliefs that, obviously, leaves Freudians out in the cold – along with everyone else who doesn't agree with the epistemology latent in the DSM. My branch of the psychotherapeutic family believes that diagnosis is of (at the most generous) limited value, and not very helpful to treatment, even under the best of circumstances – "We treat patients, not diseases" – and I think I'm not at all alone in thinking that the DSM is not any sort of best nosological circumstance for us to be in. There are two other famous major branches of the family that fly without, apparently, any nosological paradigm whatsoever: family therapy and couples counseling, neither of which has any tradition of classifying disease states of families or of couples, respectively, much less reifying them as entities subject to eradication.

So every time I sit down at any digital – or written – system that requires I describe my cases in terms of pathologies, and not just any pathologies, but in terms of the particular set of pathologies in the DSM – that being a set that reflects a particular paradigm (and yes, I mean that in the strict Kuhnian sense) of mental illness, reflecting a particular school of thought with particular philosophical pre-commitments at a particular historical moment; one which, by the way, is generally understood to have severe scientific problems, i.e.
is deeply encrusted in Kuhnian epicycles, and which was fully intended as a political weapon to marginalize other competing paradigms without having to tediously disprove them with science – I can't help but be deeply, keenly aware that that system is enforcing a theory of knowledge on – or against – me that is at odds with my professional expertise.

And that's just the bit about diagnosis. Often much of the rest of the system, whatever it is, is just as deeply epistemologically alien, though for less notorious reasons.

It is, I think, easier for me to see the political, epistemological nature of these systems because I do know this history and its politics, and it is thus easier for me to recognize the epistemological conflict latent in the technologies we use in mental health care – easier than it is for most clinicians, who aren't familiar with this history.

And this history is not generally taught in clinical programs.

To be continued....

Please leave comments on the Comment Catcher comment, instead of the main body of the post – unless you are commenting to get a copy of the post sent to you in email through the notification system; then go ahead and comment on it directly. Thanks!