Is AI Its Own Biggest Risk? Here’s What Enterprises Need to Know
*(Image Source: Shutterstock)*
Ashvin Kamaraju, global vice president at Thales, delves into the growing concerns surrounding the risks to AI rather than from it. As enterprises embrace AI, he explains the top risks and outlines strategic approaches for leaders to safeguard their AI ecosystems.
The Risks of AI: Understanding the Potential Threats
The rise of widely accessible generative AI platforms and tools drives decision-makers across businesses to evaluate where the technology can be leveraged within their stacks to enhance operations. According to the GitHub Survey, 92% of developers already use A.I. coding tools. These platforms are becoming the foundation for everything in the enterprise – from processes to solutions and mindset.
This growing focus on increasing AI usage has sparked conversations centered on the potential risks of the tech. Still, as it becomes more pervasive, a more concerning element must be considered: the risks to AI.
Ensuring Responsible AI Implementation in Enterprises
Widespread use of AI among enterprises is a growing trend, which means risks to AI will remain persistent unless properly addressed. Enterprises implementing these AI systems should do so responsibly, incorporating security industry guidance and including these systems in the threat landscape.
So, how can enterprises hold up their end of AI responsibility? By pinpointing proper business use cases of AI to get ahead of threats. These use cases include:
- Leveraging AI as a nimble defense: Today’s threats require a proactive approach to security, not a reactive one. By adding AI to their security stacks, businesses can address threats preemptively.
- Advancing anomaly detection with generative AI: With the threat intelligence AI systems gather, IT and security teams are powered with real-time anomaly detection.
- Reducing toil: AI removes the need for having an expert in every language. If an organization faces an attack due to a malicious script trying to target their system, IT and security teams can turn to generative AI to feed in the script and receive instantaneous directions on patching any existing vulnerabilities to defend against the attack.
Top Risks to AI and Strategies for Safeguarding
- Stealing the model: Threat actors can target machine learning models that use public APIs by copying a model. By having the exact model on hand, cybercriminals can learn the ins and outs of its capabilities, testing the limits to see how they can successfully target the real thing.
- Data Poisoning: Public datasets used to train deep-learning models have the potential to be tampered with. If accessible to a bad actor, these sets can be manipulated, and models trained on poisoned data produce false or malicious predictions and decisions.
- Prompt Injection: A risk that has already proven its harm to AI is prompt injection. Hackers are using the prompt injection technique to “trick” chatbots by inputting a series of prompts or questions to deceive the application to override its existing instructions.
- Extracting confidential information: There’s a growing concern about what these AI platforms store. If teams are uploading personally identifiable information (PII) or confidential information, organizations run the risk of having this data publicly shared.
Mitigating AI Risks: Best Practices for Enterprises
With all technologies, the industry plays a pivotal role in shaping future use. Almost overnight, AI became rapidly accessible, and its advancements are coming just as fast. There’s a demand to develop more responsible AI in the U.S., but the lack of clear-cut regulations leaves little clarity for those less familiar with the tech.
As AI continues infiltrating the workplace, enterprises face the immense burden of rapidly and securely deploying AI-based systems to meet new demands while avoiding exposing themselves to the expanding threat vector. This tremendous weight is not one that organizations alone can carry, so developing regulations and offering guidance and frameworks are instrumental in the future of workplace AI.
Luckily, existing frameworks, guidance, and resources are available for organizations to ensure proper business use and implementation of AI as we await more firm regulations. For example, The National Institute of Standards and Technology (NIST) launched the NIST AI Risk Management Framework. This framework aims to better manage risks to individuals, organizations, and society associated with AI.
Collaboration Key to Protecting AI: Why Enterprises Should Shift Focus
For a successful future of AI use, it’s clear that enterprises need to shift mindsets to focus on the risks with this technology. By placing resources behind protecting AI and calling for collaboration from business leaders, regulators, and industry experts, there’s a clear path to a more secure future that benefits from AI’s innovations.
Why do you think enterprises should shift focus on the risks of AI vs. the risks from it? Let us know on Facebook, Twitter, and LinkedIn. We’d love to hear from you!
MORE ON AI RISK
- Mitigating AI Risks: Protecting from Identity Theft
- Why a Risk-Based Approach to AI Regulation Is Critical
- Confronting The Risks of Artificial Intelligence Technology
- Leveraging Generative AI Safely and Responsibly
Analyst comment
The news can be evaluated as negative, as it highlights the risks associated with AI implementation. Enterprises need to focus on understanding and mitigating these risks to ensure responsible and secure use of AI systems. Without proper safeguards, threats such as model theft, data poisoning, prompt injection, and information extraction could have detrimental consequences. Collaboration among business leaders, regulators, and industry experts is essential to protect AI and ensure a more secure future.