Products and services built around artificially intelligent algorithms offer a host of benefits to users, but they require vast amounts of personal data in return. As a result, privacy is perhaps more vulnerable today than ever before. We posit that this vulnerability is not only technical but also psychological. Whereas people have historically cared about and fought for the right to privacy, the diffusion and convenience of algorithms may be systematically eroding people’s capacity and motivation to take meaningful protective action. Specifically, we examine four factors that increase the tendency to rationalize privacy-reducing algorithms: 1) awareness of the benefits and conveniences of algorithms, 2) a low perceived probability of experiencing harm, 3) exposure to negative consequences only after usage has already begun, and 4) certainty that losing privacy is inevitable. We suggest that future research consider these and related factors in order to better understand the changing psychology of privacy.