A critical step in reducing racial disparity in the child welfare system is having a way of measuring it. The standard methods, including the one used by DCF in its public dashboards, all have documented weaknesses that either under- or over-estimate disproportionality and provide numbers that cannot be directly compared between different subgroups like regions or CBCs. This post uses a standardized measure proposed by Rolock 2011 to show where in Florida’s system we need to focus. There will be numbers and lots of graphs, so here are the main points.

In fiscal year 2018-19, non-white kids in Florida had 1.80 times the risk of experiencing a child abuse investigation as white kids. White kids under investigation were then slightly more likely to have their allegations verified (1.07x) and white kids who were verified were slightly more likely to be removed (1.04x), but not at rates that overcame the initial disparity in investigations. White children also had a higher chance of being discharged from care once in it (1.06x).

The result is that non-white children had 1.73 times the risk of being in out-of-home care compared to white children. The disparity was even higher for non-white teens, at 2.20 times the risk of white teens.

Disparity rates varied widely by CBC, with risk of out-of-home care ranging from nearly even in one CBC (1.008x) to almost five times as high (4.720x) in the most racially disparate CBC. Those differences were driven largely by disparity in investigations and then an inability to discharge non-white children at equal rates. This was especially true of the CBCs with the highest disparity in out-of-home care.

In terms of placement, non-white children in care were at higher risk of incarceration, Baker Act, and running away. White children were at higher risk of being placed in a therapeutic placement under most CBCs.

The takeaway is that if non-white children were in care in the same proportions as white children, there would be 4,000 fewer kids in the system. That would save $23.4 million per year in board rate payments alone. The National Council for Adoption estimated that in 2010 it cost a state approximately $25,000 per year to keep a child in foster care. If that’s true today, racial disparity in foster care costs Florida $100 million annually.

The chart below shows the disparities by CBC. The values answer the question “How many times greater are non-white children at risk of the action compared to white children when accounting for state demographics?” It calculates each stage of the case using the group of kids who entered from the previous stage — for example, the risk of removal is calculated using the group of children whose allegations were verified. This lets us see how each stage contributes to or corrects for previous disparities. You can see clearly: racial disparity begins at the beginning. The rest of this post explains this chart.

Weighted risk ratios for each CBC at each stage of a case. Data source: DCF Dashboards. (click to enlarge)

First let’s discuss disparity

Conversations around measuring disparity often turn into a debate about how many children should be in foster care or need to be the target of child welfare services. One argument in that debate says that not all difference is disparity. Disparity, in that view, is only “a bad difference,” and some kids need to be in foster care for their protection. Under that view, if more Black kids are in foster care, then that could be reflective of their needs and not necessarily a bad thing. A stronger version says that Black kids are in care more because they need to be, full stop.

The argument is rarely that blunt, but it’s always lurking. For example:

One issue that clouds this discussion is that there is no clear standard for child welfare involvement. One cannot say, for instance, that because less than 1% of children in the United States are in foster care that this is the correct percentage—nor is there any evidence that this percentage should necessarily be higher or lower. While it is often assumed that less contact with the child welfare system is good, both under and over representation of specific ethnic or racial groups should raise questions… (Rolock, 2011).

As such, much of the literature frames child welfare services as either a neutral or positive public health project that either helps or does not help families. It is rarely posited as something that harms. Under that neutral-good view there is some “correct” number of kids in foster care, even if the goal is to get that number as low as possible. For example, Bywaters et al. (2015) describes and then rejects a framework of supply and demand where families have needs that create demand for child welfare services and the government and communities in turn supply those needs. Instead, Bywaters frames the issue as one of the inequities that drive kids into care — and by doing so, the moral and ethical problems of the system become more clear. Roberts, Cloud, Phillips & Pon, Burrell, Cooper, and many more writers have put the child welfare system into the context of people who experience it and the communities that pay for it most heavily.

We also know foster care is a system of inequity because, while there are lots of privileged people seeking to make it unavoidable for other families, they are not simultaneously demanding access to the system for their own kids. Families need community safety, good physical and mental health, social support, material wealth, and political power to create better lives. If you have that, you don’t need DCF. Nobody calls DCF to put their child in foster care for a few days while they go on a business trip, and there is no Operation Varsity Blues for rich people trying to scam their kids into care. That’s because foster care is not a good thing.

Take the responses of Black mothers in Florida’s system, who described the ways the system takes and keeps kids for reasons completely unrelated to parenting:

The sense of powerlessness and helplessness was profound as parents described being trapped by personal limitations and systemic unresponsiveness. Concentrated poverty translated into a series of severe deficits: lack of sound housing, nutritious food, accessible healthcare, adequate transportation, and childcare services. Concomitant with high standards set by the courts, such factors combined conspired to decrease the likelihood these families would ever see their children returned. (Kokaliari et al., 2019).

When a bad thing happens almost exclusively to poor, disfavored, and marginalized people, the morality and ethics of the situation are clear. Difference in a punitive, inequitable system is disparity. Further parsing of who “needs” to be in the system is no more instructive than debating who “needs” to be poor, unhealthy, or alone. We can discuss what drives more kids into foster care as a question of inequity, but no kid should be there at all. Now let’s get to the numbers.

Measuring difference

The simplest way to measure difference is to just count. In the chart below you can see that non-white children (the sum of DCF’s Black and Other categories) made up about 28% of the population, whereas they made up 42% of investigations and 40% of out-of-home care. Their percentage of discharges was slightly lower at 39%. These patterns are going to play out again and again — racial disparity starts on the telephone.

Before we go further, the three DCF categories Black, White, and Other should be interpreted with lots of caution. Schmidt et al. (2015) found that one state’s system tended to label kids as white at higher rates than school districts labeled the same kids, and at higher rates still than the kids described themselves. The study also showed that 20% of the children’s racial self-identification changed over time. To my knowledge, the race labels in Florida DCF’s system are entered when the case comes in — usually for a baby — and not updated again.

DCF doesn’t limit case managers or investigators to just the three categories. They actually have the six below, plus three extras: Unable to Determine, Declined to Respond, and Unknown. They additionally capture whether the child is or is not Hispanic. (They have a marginally helpful FAQ about it here.) In DCF’s public-facing dashboards, they roll these options into the three categories above. The “Other” category is entirely too broad — an Asian child and a child who is multiracial Black and white will have very different experiences in care, but are lumped into the same group. I can’t undo that here. Future work needs to be done.

Removals in FY2018-19 from Public FSFN Database (Feb. 2020). The numbers are close to the “Removals” line above, but are not exact because DCF does monthly point-in-time counts and the numbers here are every child who came into care during the period.

Back to the measures. To determine disparity, you might next look at proportions: the number of kids per 1,000 in the general population who find themselves at each stage in the system. You can see below that 86.59 per 1,000 non-white kids went through investigations, but only 47.15 per 1,000 white kids did.

Number per 1,000 kids in the population.
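If you want to check these rates yourself, the math is just counts scaled to a per-1,000 basis. Here is a quick sketch in Python; the counts are made-up round numbers chosen only to reproduce the rates quoted above, not DCF’s actual tallies.

```python
# Per-1,000 rates: how many kids per 1,000 in the general population
# reach a given stage. Counts are illustrative, not DCF's actuals.
def rate_per_1000(stage_count, population):
    return 1000 * stage_count / population

# Hypothetical counts chosen to reproduce the rates quoted in the post.
nonwhite_rate = rate_per_1000(86_590, 1_000_000)   # 86.59 per 1,000
white_rate = rate_per_1000(47_150, 1_000_000)      # 47.15 per 1,000
print(round(nonwhite_rate / white_rate, 3))        # the investigation disparity
```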

There is an important fact hiding in this chart. If non-white kids were in foster care in the same proportion as white kids (4.52 per 1,000), there would be around 4,000 fewer kids in care. That would reduce our foster care rolls by about 17%. At a board rate of $16 per day, that would save the state $23.4 million per year in foster care payments alone. Racial disparity costs money. If you use the National Council for Adoption’s 2010 estimate that it cost a state approximately $25,000 per year to keep a child in foster care, racial disparity costs Florida $100 million annually.
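The arithmetic behind those dollar figures is simple enough to spell out, so here it is in Python in case you want to verify it or swap in a different board rate.

```python
# Back-of-the-envelope cost of disparity, using the post's own figures.
excess_kids = 4_000        # kids in care beyond white-rate parity
board_rate = 16            # dollars per child per day
print(f"${excess_kids * board_rate * 365:,}")   # yearly board payments alone

ncfa_per_child = 25_000    # NCFA's 2010 estimate of annual cost per child
print(f"${excess_kids * ncfa_per_child:,}")     # yearly all-in cost
```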

Now on to our next measure. Taking out-of-home care as an example, if you divide the proportion of Black children (6.9 per 1,000) by the proportion of all children (5.45 per 1,000) you get a value called the disproportionality index (DI) or the disproportionality representation index (DRI). In this example, 6.9 / 5.45 = 1.27. This is the measure that DCF uses on its dashboards. It has the benefit of being easy to understand: there are 1.27 times as many Black kids in out-of-home care as there “should be” compared to all kids in the community. It has the drawback of making it harder to compare regions or CBCs of different sizes and racial compositions. As an extreme example, 9.0 / 3.0 gives a DI of 3.0, but 49.0 / 43.0 gives 1.14, even though both pairs are separated by 6 per 1,000. Conversely, I can get a DI of 3.00 from 9.0 / 3.0 or 60.0 / 20.0. The value 3.0 can represent vastly different experiences on the ground.
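To make the DI’s comparability problem concrete, here is a minimal sketch. The 6.9 and 5.45 figures come from the example above; the other rates are hypothetical, chosen only to show how one DI value can describe very different gaps.

```python
def disproportionality_index(group_per_1000, all_per_1000):
    """Group's per-1,000 rate divided by the overall rate (the DCF dashboard measure)."""
    return group_per_1000 / all_per_1000

# The post's example: Black kids at 6.9 per 1,000 vs. 5.45 per 1,000 overall.
print(round(disproportionality_index(6.9, 5.45), 2))   # 1.27

# Same DI, very different realities on the ground:
print(disproportionality_index(9.0, 3.0))    # 3.0 (gap of 6 per 1,000)
print(disproportionality_index(60.0, 20.0))  # 3.0 (gap of 40 per 1,000)
# ...while the same 6-per-1,000 gap in a high-rate area yields a much smaller DI:
print(round(disproportionality_index(49.0, 43.0), 2))  # 1.14
```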

Below is what it looks like on DCF’s dashboards. Graphing the DI is messy.

DCF Dashboard using the disproportionality index as a measure. You can see giant dips and spikes in the number of kids categorized as “Other/Multi-Racial”.

If you divide two groups’ disproportionality indices by each other, you get a new number that is sometimes called the disparity index (Shaw et al., 2008), and it is a calculation of relative risk. A value of 1.0 would mean equal risk in the two groups. The chart below shows that non-white kids are anywhere from 1.64 to 1.84 times as prevalent in the system as they are in the general population (and white kids are underrepresented by the same amount). You can see that the disparity is highest in investigations (1.84), goes down a little in removals (1.71), and rises again as you go through the stages of a case to out-of-home care (1.73) and being in care for over 12 months (1.80). This is exactly what we saw in our very first chart of measures above. It’s just easier to see the relationship now.
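One handy property, sketched below, is that the overall rate cancels out of the disparity index: dividing two groups’ DIs gives the same number as dividing the groups’ per-1,000 rates directly. The group rates are the ones from the proportions chart above; the 58.2 overall rate is hypothetical.

```python
# The disparity index is one group's DI divided by another's. The overall
# rate appears in both DIs, so it cancels: the result is just the ratio
# of the two groups' per-1,000 rates.
nonwhite_per_1000, white_per_1000, all_per_1000 = 86.59, 47.15, 58.2

di_nonwhite = nonwhite_per_1000 / all_per_1000
di_white = white_per_1000 / all_per_1000

print(round(di_nonwhite / di_white, 2))              # 1.84
print(round(nonwhite_per_1000 / white_per_1000, 2))  # 1.84 -- identical
```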

The disparity index, just like the disproportionality index, has its own quirks that make it hard to compare among different CBCs or regions. Specifically, it tends to under-estimate the risk when the percentage of the measured population is small and over-estimate the risk when it is large (Rolock, 2011). To DCF’s credit, it does not provide side-by-side comparisons on its dashboards. But that’s what we need to do, and doing so requires more complicated math that gives simpler measures.

On a final note, we also want to measure how disparity creeps in at each step through the system. That means we should use a step-wise approach, calculating the disparity at each stage based on the population that entered the stage from the previous step. For example, to get a disparity measure for children who were removed, we will use the group of children who had verified maltreatment — not all kids in the general population. This is sometimes called decision-point based enumeration (Thurston & Miyamoto, 2020). Now, here we go. This is what this post is actually about.
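Here is what decision-point based enumeration looks like in code. All the counts are invented, scaled so the stage-by-stage ratios land near the real ones quoted earlier (1.84 for investigations, then white kids slightly ahead at verification and removal).

```python
# Decision-point based enumeration: the denominator at each stage is the
# group that entered from the previous stage, not the general population.
# All counts are invented for illustration.
stages = ["investigated", "verified", "removed"]
nonwhite = {"investigated": 8_659, "verified": 2_100, "removed": 700}
white = {"investigated": 4_715, "verified": 1_225, "removed": 425}

prev_nw, prev_w = 100_000, 100_000  # both start from the general population
for stage in stages:
    ratio = (nonwhite[stage] / prev_nw) / (white[stage] / prev_w)
    print(stage, round(ratio, 2))  # investigated 1.84, verified 0.93, removed 0.96
    prev_nw, prev_w = nonwhite[stage], white[stage]
```

Note how the verification ratio (0.93, i.e. white kids about 1.07 times as likely to be verified) is invisible if you keep dividing by the general population instead of the previous stage.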

Harder math, better graphs

To compare CBCs to each other, let’s use a measure called the weighted risk ratio (WRR). The WRR allows you to more directly compare groups that are themselves not homogeneous. Nancy Rolock at the University of Illinois at Chicago wrote in favor of using WRRs in child welfare back in 2011. They are also recommended in the special education arena for measuring differences among lots of different schools and districts, which helps focus resources and scrutiny on the places that need them. I’m using the special ed formula.

WRRs answer questions like: “How many times greater is a specific racial group at risk of being removed in comparison with all other racial groups under the same CBC, weighted by the demographics of the state?” The math-magic is that WRRs adjust for the variability between different CBCs to give you a number you can compare even when the CBCs don’t have the same group percentages (our extreme DI example above). The downside is that the WRR doesn’t work well for small numbers, so for those you have to use an alternate formula that compares the local risk to the comparison group’s statewide risk.
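Here is a sketch of the special-ed formula as I understand it; this is my reading, not DCF’s or Rolock’s code, and every risk and population count below is hypothetical. The target group’s local risk is divided by a weighted average of every other group’s local risk, with the weights drawn from statewide demographics. Note that with only two groups (like a white vs. non-white split) the weighting collapses and the WRR reduces to a plain local risk ratio; the state weights do real work when three or more groups are in play.

```python
# Sketch of the special-ed weighted risk ratio (my reading of the formula).
def weighted_risk_ratio(local_risk, state_counts, target):
    others = [g for g in state_counts if g != target]
    total_other = sum(state_counts[g] for g in others)
    # Weighted average of the other groups' *local* risks, weighted by
    # each group's share of the *statewide* comparison population.
    weighted_comparison_risk = sum(
        (state_counts[g] / total_other) * local_risk[g] for g in others
    )
    return local_risk[target] / weighted_comparison_risk

# Alternate formula for small cell sizes: compare the local risk to the
# comparison group's statewide risk instead.
def alternate_risk_ratio(local_target_risk, state_comparison_risk):
    return local_target_risk / state_comparison_risk

# Hypothetical CBC risks and statewide child populations for three groups.
local_risk = {"black": 0.0069, "white": 0.0045, "other": 0.0052}
state_counts = {"black": 900_000, "white": 2_600_000, "other": 700_000}
print(round(weighted_risk_ratio(local_risk, state_counts, "black"), 2))
```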

So what does that get us? The chart below shows the weighted risk ratios for children in Florida at each listed stage of the system (based on the population from the previous step in most cases). All of the data comes right from DCF’s dashboards, just calculated in a different way. I’m using the categories White vs. Non-white (i.e., Black + Other) to make it easier, and also because I don’t trust DCF’s “Other” category to stand on its own. A risk ratio of 1.0 means that white and non-white children have equal risk of the action at that stage. If a value is less than 1 (orange), then white children have the higher risk, and if the value is greater than 1 (blue), then non-white children have the higher risk.

The top row of the graph shows that during fiscal year 2018-19, non-white children in Florida had 1.836 times the risk of an investigation as white children in Florida. White children were then slightly more likely to have their allegations verified (-1.072) and slightly more likely to be removed (-1.043), but neither number was large enough to make up for the original disparity. And, importantly, white children were also more likely to be discharged (-1.057). The result is that non-white children in Florida had 1.733 times the risk of out-of-home care placement as white children.
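A note on the negative numbers: they appear to follow a display convention from the special-ed risk-ratio literature, where a ratio below 1 is shown as its negative reciprocal so the favored group’s multiplier reads directly. A tiny helper makes the convention explicit (the function name is mine, not DCF’s):

```python
# Negative-reciprocal display convention: a ratio below 1 is shown as
# -(1/ratio), so -1.072 reads as "white children at 1.072 times the risk."
def signed_ratio(rr):
    return rr if rr >= 1 else -1 / rr

print(round(signed_ratio(1.836), 3))      # non-white risk higher: unchanged
print(round(signed_ratio(1 / 1.072), 3))  # white risk higher: shown as -1.072
```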

The graph also breaks it down by age in the next rows. You can see that the risk was highest for non-white babies and decreased slightly through age 14. White babies were more likely to be verified and removed (albeit not at rates that overcame the original disparity in investigations), but then around age 10 things shift. Non-white tweens and teens with verified allegations were at higher risk of removal than white ones (1.023 and 1.131). White teens were then significantly more likely to be discharged than non-white teens (-1.374). The result is that non-white teens in Florida had 2.196 times the risk of out-of-home care placement as white teens, a risk much higher than that of any other age group.

Weighted risk ratios by CBC

Now for the bigger picture. Here is the same chart by CBC, again for fiscal year 2018-19. The CBC with the highest WRR for investigations, Citrus Family Care Network (3.427), is almost three times as high as the CBC with the lowest, Kids First of Florida (1.196). The WRR in the out-of-home care measure ranges from nearly even at -1.008 to a whopping 4.72. Racial disparities are happening everywhere, but not in the same ways.

You can see so much in these graphs. For example, you see that disparity starts in investigations under every single CBC. By the verification and removal stages, you can see how the local systems work either towards or against disparity, but only one CBC managed to overcome that initial skew. The systems with lower racial disparity in their investigations seemed to correct for it better than the areas with higher disparities. A few areas at the top of the chart actually compounded the disparity through disproportionate removals (but so did the CBC at the very bottom).

Finally, the graph shows that the areas with the highest levels of disproportionality in their out-of-home care populations (seen at the top of the chart) significantly struggled to correct for it through discharges. Only one CBC — Kids First of Florida — was close to equal on its out-of-home care rates, and it also had the highest rate of discharging non-white children.

Let’s look at the same chart from the perspective of Black children vs. non-Black children below.

A few things leap out when centering on Black children. First, the disparity in investigations is lower but still high. Black children under Kids First of Florida in Clay County (Circuit 4) actually had a lower risk of investigation than non-Black kids. Again, removals of Black children tended to be correlated with areas of higher out-of-home care disparity. There are even CBCs where Black children have a lower risk of being in out-of-home care than non-Black children, but that may be due to the over-representation of kids in the “Other” category. Those areas still had higher risk for non-white kids.

The most striking thing about this graph, though, is the insanely low risk of discharges (i.e., high risk of not being discharged) for Black children in those areas with the highest out-of-home care disparity. It’s like there’s a jetstream pushing Black kids into care and keeping them there. I wonder what that could be.

One more chart — the CBC chart organized by DCF regions.

Weighted Risk Ratios of Discharge Types

The obvious next step is to try to look at discharges by race as well. To do this, I used DCF’s Exits from Care dashboard and added up the values for fiscal year 2018-2019. Here’s what I got. You can see that CBCs with high disparity have trouble discharging non-white children across all discharge types. CBCs with lower disparity do manage to discharge non-white kids at higher rates here and there, especially into guardianships and reunification. Only three CBCs managed to adopt non-white kids out at slightly higher rates: Big Bend CBC (1.08), ChildNet Broward (1.06) and Heartland for Children (1.03).

One CBC really stands out above because its risk ratio numbers are huge. A white child with Family Integrity Program in St. Johns County (Circuit 7) had 7.29 times the risk of being adopted (I recognize that’s a weird way to say it) and 5.08 times the risk of guardianship as a non-white child. That’s because in 2018-2019 only 5 non-white children were adopted and 2 non-white children went into guardianships, compared to 56 and 15 white children respectively. That means 55% of white kids in St. Johns County exited through adoption or guardianship, but only 15% of non-white kids did. Yes, it’s a small CBC with about 180 kids, 80% of which are white. It only discharges about 15 kids per month, but as you can see below, it rarely breaks 5 non-white kids exiting per month, mostly to reunification, and many months have 0. Someone should ask questions about that.

Placement Settings

This last part doesn’t come from DCF’s dashboards. Instead, I used the Public FSFN Database from February 2020 to look at the placement history of every kid in foster care. Specifically, I ran queries for correctional, therapeutic, mental health (Baker Act), and runaway episodes. I limited the queries to just those kids who came into care in fiscal year 2018-2019. That cut out kids who had been in care for years prior, but it also lets us say that these events happened early in the removal period, and it avoids biasing towards kids who were in care longer and had more chances to be arrested, etc. I used DCF’s numbers for removals, which in retrospect may not have been the right call because DCF’s numbers were slightly lower than what is found in the database. It’ll still get us in the ballpark.

The graph below shows a significant difference in the risk of non-white kids being placed in correctional placements, and very extreme skews towards white children for therapeutic placements and towards non-white children for mental health placements at many CBCs. Non-white children were at higher risk of running away almost everywhere, which could be a function of there being more non-white teens in care or of those kids not seeing the system as helpful.

That’s it. Now let’s get to work.