HITRUST launches new AI assurance initiative for healthcare

The risk management and certification group says the program, touted as the first of its kind, aims to offer a CSF-based strategy to deploy trustworthy models.
By Mike Miliard
10:30 AM

Photo: zf L/Getty Images

HITRUST this week announced the launch of its new HITRUST AI Assurance Program, designed to help healthcare organizations develop strategies for secure and sustainable use of artificial intelligence models.

The standards and certification organization says it's also developing forthcoming risk management guidance for AI systems.

WHY IT MATTERS
The HITRUST AI Assurance Program prioritizes risk management as a foundational consideration in the newly updated version 11.2 of the HITRUST CSF, according to the group. It's meant to enable organizations deploying AI across various use cases to engage more proactively and efficiently with their AI service providers on approaches to shared risk.

"The resulting clarity of shared risks and accountabilities will allow organizations to place reliance on shared information protection controls that already are available from internal shared IT services and external third-party organizations, including service providers of AI technology platforms and suppliers of AI-enabled applications and other managed AI services," according to HITRUST.

The group is billing the program as the first of its kind focused on achieving and sharing cybersecurity control assurances for generative AI and other emerging algorithmic applications.

HITRUST's strategy document, "A Path to Trustworthy AI," is available for download.

While AI models from cloud service providers and others are allowing healthcare organizations to scale models across use cases and specific needs, the opacity of deep neural networks introduces unique privacy and security challenges, HITRUST officials note. Healthcare organizations have to understand their responsibilities around patient data and ensure that they have reliable risk assurances for their service providers.

The goal of the program is to offer a "common, reliable, and proven approach to security assurance" that will enable healthcare organizations to understand the risks associated with AI model implementation and to "reliably demonstrate their adherence with AI risk management principles using the same transparency, consistency, accuracy, and quality available through all HITRUST Assurance reports," officials say.

HITRUST says it's working with Microsoft Azure OpenAI Service on maintenance of the CSF and faster mapping of the CSF to new regulations, data protection laws and standards.

THE LARGER TREND
Recent research has shown that generative AI is poised to become a $22 billion part of the healthcare industry over the next decade.

As health systems race to deploy generative and other AI algorithms, they're eager to transform their operations and boost productivity across a variety of clinical and operational use cases. But HITRUST notes that, "any new disruptive technology also inherently delivers new risks, and generative AI is no different."

Deploying it responsibly is critically important – and most healthcare organizations are taking a cautious and prudent approach to their exploration of generative AI applications.

But there are always risks, especially when it comes to cybersecurity, where AI is very much a double-edged sword.

ON THE RECORD
"Risk management, security and assurance for AI systems requires that organizations contributing to the system understand the risks across the system and agree how they together secure the system," said Robert Booker, chief strategy officer at HITRUST, in a statement.

"Trustworthy AI requires understanding of how controls are implemented by all parties and shared and a practical, scalable, recognized and proven approach for an AI system to inherit the right controls from their service providers," he added. "We are building AI Assurances on a proven system that will provide the needed scalability and inspire confidence from all relying parties, including regulators, that care about a trustworthy foundation for AI implementations."

"AI has tremendous social potential and the cyber risks that security leaders manage every day extend to AI," said Omar Khawaja, field CISO of Databricks and a HITRUST board member. "Objective security assurance approaches such as the HITRUST CSF and HITRUST certification reports assess the needed security foundation that should underpin AI implementations."

Mike Miliard is executive editor of Healthcare IT News
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a HIMSS publication.
