Vulnerabilities in Open-Source AI and Machine Learning Tools Revealed
A new report released today by Protect AI Inc., a cybersecurity startup focused on artificial intelligence and machine learning systems, details key vulnerabilities in those systems recently uncovered through its bug bounty program.
Protect AI was founded in 2022 by former Amazon Web Services Inc. and Oracle Corp. employees, including Chief Executive Officer Ian Swanson, who previously led worldwide artificial intelligence and machine learning at AWS. The company offers products designed to deliver safer AI applications by giving organizations the ability to see, know, and manage their machine learning environments.
Bug Bounty Program Uncovers Security Threats in AI Supply Chain
Among its offerings is a bug bounty program for identifying vulnerabilities in AI and machine learning systems, which Protect AI claims is the first of its kind. The program has seen strong uptake, with more than 13,000 community members hunting for impactful vulnerabilities across the entire AI and machine learning supply chain.
Through both the bug bounty program and its own research, Protect AI has found that the tools used in the supply chain to build the machine learning models powering AI applications are exposed to unique security threats. Because many of these tools, frameworks, and artifacts are open source, they may ship with vulnerabilities out of the box that can lead directly to complete system takeover, such as unauthenticated remote code execution or local file inclusion.
Critical Flaws Found in the Widely Used MLflow Tool
The first vulnerability detailed posed a significant risk of server takeover and loss of sensitive information. MLflow, a widely used tool for storing and tracking models, was found to have a critical flaw in the code that pulls data from remote storage. The flaw could deceive users into connecting to a malicious remote data source, potentially enabling attackers to execute commands on the user's system.
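The report does not reproduce MLflow's affected code, but the class of risk is easy to sketch. The Python snippet below is a hypothetical illustration, not MLflow's implementation: a client that deserializes whatever a remote data source returns (here via `pickle`, a common serialization format for ML artifacts) hands that source arbitrary code execution. The names `attacker_action` and `MaliciousArtifact` are invented for the demo.

```python
import pickle

def attacker_action(command):
    """Stand-in for os.system: a real payload would run `command`.
    Returning a marker string keeps the demo harmless."""
    return f"payload ran: {command}"

class MaliciousArtifact:
    """Attacker-crafted 'model' blob: __reduce__ tells pickle which
    callable to invoke during deserialization."""
    def __reduce__(self):
        return (attacker_action, ("cat /etc/passwd",))

# The malicious remote data source serves this blob as a model artifact.
blob = pickle.dumps(MaliciousArtifact())

# The victim pulls and deserializes the artifact; the payload fires
# before any of the victim's own code can inspect the loaded object.
result = pickle.loads(blob)
print(result)  # → payload ran: cat /etc/passwd
```

This is why deserializing artifacts from untrusted sources is equivalent to running the source's code, regardless of which tool does the fetching.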
Further Vulnerabilities Allow File Overwrites and Data Exposure in MLflow
Another security issue uncovered in MLflow was an arbitrary file overwrite vulnerability, caused by a bypass of MLflow's validation function that checks the safety of file paths. Malicious actors could exploit the flaw to remotely overwrite files on the MLflow server.
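Protect AI did not publish the exact bypass, so the following is a generic sketch of how path-validation filters of this kind fail, not MLflow's actual code: a check that only looks for `..` segments misses absolute paths, which `os.path.join` silently honors on POSIX systems. The function names and the `/srv/artifacts` root are illustrative assumptions.

```python
import os.path

def naive_is_safe(user_path):
    """Hypothetical flawed validator: reject '..' segments, nothing else."""
    return ".." not in user_path.split("/")

def robust_is_safe(base_dir, user_path):
    """Resolve the final destination, then confirm it stays inside base_dir."""
    base = os.path.realpath(base_dir)
    full = os.path.realpath(os.path.join(base, user_path))
    return full == base or full.startswith(base + os.sep)

# An absolute path contains no '..' segments, so the naive check passes...
print(naive_is_safe("/etc/passwd"))                     # True -- bypass
# ...but os.path.join discards base_dir when given an absolute path,
# so the write would land outside the artifact root.
print(os.path.join("/srv/artifacts", "/etc/passwd"))    # /etc/passwd
print(robust_is_safe("/srv/artifacts", "/etc/passwd"))  # False
```

The robust variant resolves the joined path first and then verifies containment, which closes both the absolute-path and `..`-traversal routes at once.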
The third vulnerability in MLflow was a local file inclusion issue. When MLflow is hosted on certain operating systems, it can inadvertently expose the contents of sensitive files. The exposure was again traced to a bypass of the file path safety mechanism, and the potential damage includes loss of sensitive information and even complete system takeover, particularly if SSH keys or cloud credentials were accessible to MLflow with sufficient permissions.
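Another common way such safety mechanisms get bypassed, sketched below as a hypothetical example rather than MLflow's code, is a plain string-prefix match after normalization: a sibling directory that shares the prefix, such as one holding SSH or cloud keys next to the artifact root, slips through. The `/srv/mlflow` layout and function names are invented for illustration, and the example assumes POSIX paths.

```python
import os.path

ARTIFACT_ROOT = "/srv/mlflow"  # illustrative artifact directory

def prefix_check(base_dir, user_path):
    """Hypothetical flawed validator: normalize, then plain prefix match."""
    full = os.path.normpath(os.path.join(base_dir, user_path))
    return full.startswith(base_dir)

def boundary_check(base_dir, user_path):
    """Fixed variant: match the prefix on a directory boundary."""
    full = os.path.normpath(os.path.join(base_dir, user_path))
    return full == base_dir or full.startswith(base_dir + os.sep)

# '/srv/mlflow-keys' shares the '/srv/mlflow' string prefix, so a
# sibling directory holding key material slips through the check.
print(prefix_check(ARTIFACT_ROOT, "../mlflow-keys/id_rsa"))    # True -- bypass
print(boundary_check(ARTIFACT_ROOT, "../mlflow-keys/id_rsa"))  # False
print(boundary_check(ARTIFACT_ROOT, "runs/1/model.bin"))       # True
```

Appending `os.sep` before the prefix test is the one-line difference between a readable key file and a contained request, which is why file-serving code is audited so closely.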
Need for Enhanced Security Measures in AI and Machine Learning Tools
All the vulnerabilities detailed were disclosed to maintainers at least 45 days prior to publication. Collectively, they underscore the need for stringent security measures in AI and machine learning tools, given their access to critical and sensitive data.
These findings highlight the importance of ongoing efforts to identify and address vulnerabilities in open-source AI and machine learning tools. Organizations should prioritize enhanced security measures and regularly update their systems to protect against potential attacks. As reliance on AI and machine learning continues to grow, ensuring the security and integrity of these systems becomes even more crucial. By investing in robust security practices, companies can safeguard their data and minimize risk in an increasingly interconnected world.
Analyst comment
Positive news: Protect AI's bug bounty program has brought these vulnerabilities in open-source AI and machine learning tools to light, underscoring the need for stronger security measures. Organizations should prioritize implementing such measures and regularly updating their systems to protect against potential attacks; safeguarding data and minimizing risk is crucial as reliance on AI and machine learning grows.