Enhancing Trust in AI Through Industry Self-Governance
Written by Dr. Joachim Roski, Dr. Ezekiel Maier, Dr. Kevin Vigilante, Elizabeth Kane, and Dr. Michael Matheny
Abstract
This article summarizes “Enhancing Trust in AI Through Industry Self-Governance,” published online in April 2021 in the Journal of the American Medical Informatics Association (JAMIA). In it, Dr. Joachim Roski, Booz Allen health analytics leader; Dr. Ezekiel Maier, Booz Allen analytics leader; Dr. Kevin Vigilante, Booz Allen chief medical officer; Elizabeth Kane, Booz Allen health operations expert; and Vanderbilt University Medical Center’s Dr. Michael Matheny present insights that organizations of industry stakeholders can use to adopt self-governance as they work to maintain trust in artificial intelligence (AI) and prevent an “AI winter.”
Industry stakeholders see AI as critical for extracting insights and value from the ever-increasing amount of health and healthcare data. Organizations can use AI to synthesize information, support clinical decision making, develop interventions, and more, creating high expectations that AI technologies can effectively address nearly any health challenge. However, throughout the history of AI development, streaks of enthusiasm have been followed by periods of disillusionment. During these AI winters, both investment in AI and adoption of AI best practices wane.
“To counter growing mistrust of AI solutions, the AI/health industry could implement similar self-governance processes, including certification/accreditation programs targeting AI developers and implementers. Such programs could promote standards and verify adherence in a way that balances effective AI risk mitigation with the need to continuously foster innovation.”
- “Enhancing Trust in AI Through Industry Self-Governance,” JAMIA, April 2021
Today, publicity around highly touted but underperforming AI solutions has placed the health sector at risk for another AI winter. To respond to this challenge, we propose that industry organizations consider implementing self-governance standards to better mitigate risks and encourage greater trust in AI capabilities.
Building on the National Academy of Medicine’s AI implementation lifecycle, we created a detailed organizational framework that identifies 10 groups of AI risks and 14 groups of mitigation practices across the four lifecycle phases. AI developers, implementers, and other stakeholders can use this analysis to guide collective, voluntary actions to select, establish, and track adherence to trust-enhancing AI standards.
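The framework itself is a policy tool rather than software, but an implementer that wants to track its own adherence could start from a structure like the sketch below. This is a hypothetical illustration only: the phase names, risk group, and mitigation practices shown are invented placeholders, not the actual 10 risk groups or 14 mitigation-practice groups identified in the JAMIA article.

```python
from dataclasses import dataclass, field

# Assumed phase names for illustration only; the article follows the
# National Academy of Medicine's four-phase AI implementation lifecycle.
LIFECYCLE_PHASES = ["design", "development", "deployment", "monitoring"]

@dataclass
class MitigationPractice:
    name: str
    adopted: bool = False  # has the organization adopted this practice?

@dataclass
class RiskGroup:
    name: str
    phase: str  # lifecycle phase where the risk arises
    mitigations: list = field(default_factory=list)

    def adherence(self) -> float:
        """Share of associated mitigation practices the organization has adopted."""
        if not self.mitigations:
            return 0.0
        return sum(m.adopted for m in self.mitigations) / len(self.mitigations)

# Example: one illustrative risk group with two placeholder practices.
bias_risk = RiskGroup(
    name="dataset bias",  # placeholder, not taken from the article
    phase="development",
    mitigations=[
        MitigationPractice("representativeness audit", adopted=True),
        MitigationPractice("subgroup performance reporting"),
    ],
)
print(f"{bias_risk.name}: {bias_risk.adherence():.0%} of practices adopted")
```

A certification or accreditation body could aggregate adherence scores like these across all risk groups and lifecycle phases to verify that an AI developer or implementer meets agreed-upon standards.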
Without industry self-governance, government agencies may act to institute their own compliance requirements. However, industries that have proactively defined, adopted, and implemented standards complementary to government regulation have reduced the urgency of public-sector action while freeing regulatory resources for use where they are needed most. Industry self-governance also offers exceptional agility in responding to evolving technologies and markets.
A number of key success factors should be taken into account when considering self-governance, including the creation of an industry-sanctioned certification and accreditation program. It is also important to understand that self-governance succeeds only when stakeholders are confident that all standards and methods have been developed in coordination with consumers and patients, clinicians, AI developers, AI users, and other key parties.
While AI advancement continues with government support, there are also signs of a technology backlash, underscoring the need to mitigate AI-related risks. Government-led management of such public risks takes various forms; however, targeted, AI-specific legislation does not yet exist. Diverse organizations of health industry stakeholders could step in to help manage AI risks through self-governance. Applied consistently across the industry, evidence-based risk mitigation practices could promote and sustain user trust in AI and fend off the next AI winter.