
Artificial Intelligence Bill of Rights: AI Ethics Offered by White House 


The White House unveiled a bill of rights for artificial intelligence (AI) that lays out voluntary guidelines for how companies can avoid misusing AI. The new AI bill of rights is the result of a year-long initiative to help businesses, researchers, and policymakers work together to ensure that AI ethics are implemented to benefit Americans.

The “Blueprint for an AI Bill of Rights” is a 73-page document developed by researchers, technologists, advocates, journalists, and policymakers. It contains a set of voluntary guidelines that public and private organizations can use to form an AI ethics code that protects people from algorithmic abuse and discrimination. “Too often, these tools limit our opportunities and prevent our access to critical resources or services. These problems are well-documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased,” according to the White House.

The document specifically mentions healthcare AI ethics in connection with access to critical resources and services, identifies health as a data-sensitive domain, and calls for enhanced protections and restrictions on data collected across such sensitive domains.

What’s in the AI Bill of Rights?

The Blueprint for an AI Bill of Rights offers five broad principles regarding technical protections and practices to help guide the development and implementation of AI systems, including AI relevant to behavioral health. Each principle in the blueprint is accompanied by three supplemental sections that detail the problems to be addressed, what should be expected of automated systems, and practical steps that can be taken to realize the vision of AI ethics.

Safe & Effective Systems

The first principle of the Blueprint for an AI Bill of Rights states, “You (American people) should be protected from unsafe or ineffective systems.” Meeting this goal requires identifying risks, pre-deployment testing, and ongoing monitoring of automated AI systems.

Algorithmic Discrimination Protections 

The second principle states, “You (American people) should not face discrimination by algorithms, and systems should be used and designed in an equitable way.” This principle recognizes that bias in algorithms may lead to discrimination and violate legal protections, and it is important for making healthcare algorithms socially responsible and equitable.

Data Privacy  

The third principle states, “You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.” It asks developers of AI systems to ensure that design choices, user permissions, and the transfer and deletion of data are in line with reasonable expectations, and that data collection is safeguarded. The principle also emphasizes that organizations should request permission in language that users can understand.

Notice and Explanation

The fourth principle states, “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” This calls for clear descriptions of how the overall system functions and the role automation plays within it.

Human Alternatives, Consideration, and Fallback

The fifth principle states, “You should be able to opt-out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.” It asks for human alternatives so that users can opt out of automated systems whenever they choose.

HHS Announces Proposed Rule to Strengthen Nondiscrimination in Health Care

Complementing the voluntary guidelines released by the White House, the United States Department of Health and Human Services (HHS) released its “Trustworthy AI (TAI) Playbook” (PDF) to prevent algorithmic health discrimination and protect consumers. HHS Chief AI Officer Oki Mek had this to say about the Trustworthy AI Playbook and its relevance to AI ethics:

HHS has a significant role to play in strengthening American leadership in Artificial Intelligence (AI). As we use AI to advance the health and wellbeing of the American people, we must maintain public trust by ensuring that our solutions are ethical, effective, and secure. The HHS Trustworthy AI (TAI) Playbook is an initial step by the Office of the Chief AI Officer (OCAIO) to support trustworthy AI development across the Department.


