First is enforcement. The bill relies largely on the agency responsible for consumer protection, the Federal Trade Commission, to issue and enforce regulations about what companies' impact assessments should contain. The bill grants the F.T.C. new enforcement powers, including the power to require companies to "reasonably address in a timely manner the results of the impact assessments," but even now the agency too rarely enforces its settlements with repeat privacy violators. Significantly, the bill does not clearly prohibit algorithmic bias or unfairness. It relies instead on the F.T.C. to do so, or on penalties set by existing consumer protection and discrimination law, which may not cover all forms of algorithmic bias.

Second, the bill lacks an avenue for meaningful public input. Technology companies often lack diverse voices and fail to adequately consider social impact, resulting in numerous fiascos — from Google’s image recognition algorithm that classified black people as gorillas, to Amazon’s job-recruiting engine that discriminated against women. In the United States, the typical impact assessment process, based on environmental law, includes an opportunity for public comments, but this bill would not require them.

The proposal suggests that the impact assessments should be conducted with independent auditors and technology experts “if reasonably possible.” But limiting external input to when it is “reasonably possible” may allow companies to evade public feedback. And discussions about discrimination require input not just from software engineers but from affected communities, legal experts and public interest representatives.

Third, the proposal needs to mandate at least some public transparency for the results of impact assessments. If those results aren't public, we can't learn anything from them, and the bill offers no way for their insights to feed back into broader policy discussions. For example, a common question about algorithmic bias is whether it is worse than human decision-making. Impact assessments could be a tool to help society answer that question, but not if they are kept secret.

There may be good reasons for some limited secrecy; full transparency might be off the table because of concerns about proprietary information or about gaming the algorithm. One way to balance secrecy against public learning would be to require the F.T.C. to produce an annual report on the broader lessons it has learned from that year's impact assessments.

The proposed Algorithmic Accountability Act is a welcome and necessary first step toward governing the secret, often biased algorithms already widely in use across society. But without stronger enforcement, meaningful public input and at least some transparency, it may not be effective.

Margot E. Kaminski is an associate professor at the University of Colorado Law School. Andrew D. Selbst is a postdoctoral scholar at the Data and Society Research Institute and a visiting fellow at the Yale Information Society Project.
