Health tech consortium releases ethical guidelines for AI use in healthcare

The Partnership for Artificial Intelligence, Telemedicine and Robotics in Healthcare (PATH) released a new set of ethical guidelines for the development and implementation of AI in healthcare.

PATH was formed to identify the most valuable IT initiatives for care delivery and promote their adoption on a broad scale. The organization's membership includes stakeholders from across the healthcare industry, from providers to business executives, policymakers, researchers and educators, and is led by an advisory board comprising leaders from Jefferson Health, Qualcomm Life, GE and more.

Here are the ethical guidelines, developed by PATH members and other leaders in healthcare and based partly on existing statements such as the Hippocratic Oath and Asilomar AI Principles.

1. First Do No Harm: A guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient's wellbeing is the primary consideration.

2. Human Values: Advanced technologies used to deliver healthcare should be designed and operated to be compatible with ideals of human dignity, rights, freedoms and cultural diversity.

3. Safety: AI systems used in healthcare should be safe and secure for patients and providers throughout their operational lifetime, verifiably so where applicable and feasible.

4. Design Transparency: The design and algorithms used in health technology should be open to inspection by regulators.

5. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

6. Responsibility: Designers and builders of all advanced healthcare technologies are stakeholders in the moral implications of their use, misuse and actions, with a responsibility and opportunity to shape those implications.

7. Value Alignment: Autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

8. Personal Privacy: Safeguards should be built into the design and deployment of healthcare AI applications to protect patient privacy, including personal data. Patients have the right to access, manage and control the data they generate.

9. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

10. Shared Benefit: AI technologies should benefit and empower as many people as possible.

11. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

12. Evolutionary: Given constant innovation and change affecting devices and software, as well as advances in medical research, advanced technologies should be designed in ways that allow them to change in conformance with new discoveries.

