Irregular Secures $80M to Enhance Security of Advanced AI Models

Lilu Anderson

AI security firm Irregular announced on Wednesday that it has secured $80 million in a new funding round led by Sequoia Capital and Redpoint Ventures, with additional participation from Wiz CEO Assaf Rappaport. Sources familiar with the deal estimate the company’s valuation at around $450 million.

Irregular, formerly known as Pattern Labs, has established itself as a key player in AI model security and evaluation. Its proprietary framework, SOLVE, which assesses a model’s ability to detect vulnerabilities, is widely adopted across the industry. The company’s evaluations have informed security assessments for notable models such as Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini.

Co-founder Dan Lahav emphasized the evolving nature of AI security challenges, stating, “Our view is that soon, a lot of economic activity is going to come from human-on-AI interaction and AI-on-AI interaction, and that’s going to break the security stack along multiple points.”

Proactive Identification of Emerging Risks

With this funding, Irregular aims to advance beyond identifying known vulnerabilities to detecting emergent risks and behaviors before they appear in real-world applications. The company has developed sophisticated simulated environments where AI systems engage in attacker-defender scenarios, enabling rigorous pre-release testing of new models.

Co-founder Omer Nevo explained, “We have complex network simulations where we have AI both taking the role of attacker and defender. So when a new model comes out, we can see where the defenses hold up and where they don’t.”

The focus on AI security has intensified industry-wide, especially as frontier AI models demonstrate enhanced capabilities that could be exploited. OpenAI recently revamped its internal security infrastructure in response to concerns such as corporate espionage. Simultaneously, AI’s growing proficiency in uncovering software vulnerabilities underscores the critical importance of robust defensive measures.

Ongoing Challenges in Securing Frontier AI

Irregular’s leadership acknowledges that securing advanced AI models is an ongoing and escalating challenge. Lahav remarked, “If the goal of the frontier lab is to create increasingly more sophisticated and capable models, our goal is to secure these models. But it’s a moving target, so inherently there’s much, much, much more work to do in the future.”

FinOracleAI — Market View

Irregular’s successful $80 million raise, led by prominent venture firms, underscores growing investor confidence in AI security as a critical sector. The company’s proactive approach to detecting emergent AI risks positions it well amid increasing industry focus on safeguarding advanced models. However, the evolving nature of AI threats presents ongoing challenges and execution risks. Market participants should monitor Irregular’s ability to scale its simulation frameworks and shape emerging AI security standards.

Impact: positive

Lilu Anderson is a technology writer and analyst with over 12 years of experience in the tech industry. A Stanford University graduate with a degree in Computer Science, she specializes in emerging technologies, software development, and cybersecurity. Her work has appeared in publications including Wired, TechCrunch, and Ars Technica, and she regularly attends industry conferences and tech meetups to stay current on the latest advancements.