HITRUST Launches AI Risk Management Tool

Lilu Anderson

HITRUST's New AI Risk Management Tool

HITRUST, a leader in information security and risk management, has introduced the AI Risk Management Assessment, a tool for managing the risks associated with artificial intelligence (AI) across industries, including healthcare. The assessment gives organizations a structured approach to identifying and mitigating potential risks when deploying AI technologies.

Importance of AI Governance

The assessment's main purpose is to ensure that organizations have effective governance structures in place when they implement AI tools: clear guidelines and oversight mechanisms that can be communicated to management and board members. This kind of structure aligns with standards issued by NIST (the National Institute of Standards and Technology) and ISO/IEC (the International Organization for Standardization and the International Electrotechnical Commission).

Supported by a Comprehensive Framework

HITRUST's approach is backed by a framework that includes a SaaS (software as a service) platform, which helps organizations demonstrate their AI risk management outcomes. The framework can significantly reduce the time and effort required to establish and maintain risk assessment processes, which traditionally could take weeks or even months. As Bimal Sheth, EVP of standards development and assurance operations at HITRUST, points out, aligning with multiple industry standards can be exhausting, but this tool simplifies the process.

Designed for Various AI Technologies

The risk management tool is designed to accommodate a range of AI technologies, such as machine learning algorithms and the large language models used in generative AI. Organizations across sectors can use the tool for self-assessment or engage external assessors for validation and verification, as explained by Jeremy Huval, chief innovation officer at HITRUST.

Part of a Larger Trend

The release of this tool follows HITRUST's AI Assurance Program, which was launched in October 2023. This earlier initiative offers guidance for developing secure and sustainable AI models. Moreover, HITRUST plans to introduce an AI Security Certification Program later this year, further enhancing its assurance methodologies and systems.

In a related development, NIST introduced an open-source platform, Dioptra, designed to help developers assess the safety of AI and machine learning models. This platform aims to tackle the unique data risks associated with these technologies.

Industry Insights

HITRUST emphasizes that AI risk management standards are evolving rapidly. Robert Booker, chief strategy officer at HITRUST, highlights the need for companies to adopt a comprehensive approach to AI governance, stressing that responsible risk management is key to unlocking AI's potential.

Lilu Anderson is a technology writer and analyst with over 12 years of experience in the tech industry. A graduate of Stanford University with a degree in Computer Science, Lilu specializes in emerging technologies, software development, and cybersecurity. Her work has been published in renowned tech publications such as Wired, TechCrunch, and Ars Technica. Lilu’s articles are known for their detailed research, clear articulation, and insightful analysis, making them valuable to readers seeking reliable and up-to-date information on technology trends. She actively stays abreast of the latest advancements and regularly participates in industry conferences and tech meetups. With a strong reputation for expertise, authoritativeness, and trustworthiness, Lilu Anderson continues to deliver high-quality content that helps readers understand and navigate the fast-paced world of technology.