New Malware Threat Targets AI Coding Tools Used by Coinbase Engineers
A cybersecurity firm has uncovered a critical vulnerability in AI-powered coding assistants, including Cursor—an AI tool widely adopted by Coinbase’s engineering teams. The flaw enables hackers to embed malicious code across software projects without detection, posing significant risks to organizations relying on AI for software development.
Understanding the CopyPasta License Attack
HiddenLayer, the cybersecurity company behind the discovery, detailed a novel attack method termed the “CopyPasta License Attack.” This technique conceals harmful instructions within commonly used developer files such as LICENSE.txt and README.md. These files, often containing metadata or explanatory comments, serve as vectors for “prompt injections” that manipulate AI coding tools.
By disguising malicious payloads as essential license comments, attackers can compel AI models to replicate the harmful code across multiple files during automated code generation. HiddenLayer successfully demonstrated this technique by embedding the virus within a code repository and prompting Cursor to propagate the injected instructions throughout newly created files.
Implications for Software Security
HiddenLayer warns that this attack mechanism could be exploited for various malicious purposes, including establishing backdoors, exfiltrating sensitive information silently, introducing resource-intensive operations that degrade system performance, or corrupting critical files to disrupt development and production workflows. The stealthy nature of the injection—hidden in markdown comments invisible in rendered documentation—increases the risk of prolonged undetected compromise.
Coinbase’s AI Integration and Industry Backlash
Coinbase CEO Brian Armstrong recently disclosed that AI-generated code accounts for up to 40% of the company’s software, with plans to increase this to 50% imminently. This aggressive AI adoption has sparked criticism from cybersecurity experts and industry leaders. Larry Lyu, founder of decentralized exchange Dango, called the approach a “giant red flag” for security-sensitive businesses. Carnegie Mellon professor Jonathan Aldrich labeled the mandate to use AI tools as “insane,” expressing distrust in Coinbase’s security practices.
Delphi Consulting’s Ashwath Balakrishnan criticized Coinbase’s focus on AI usage percentages as “performative and vague,” urging the company to prioritize feature development and bug fixes. Bitcoin advocate Alex Pilař emphasized that Coinbase, as a major crypto custodian, should place security above AI-driven development goals.
Coinbase’s Response and AI Usage Scope
In response, Armstrong clarified that AI-generated code undergoes rigorous review and that AI tools are employed primarily in less sensitive areas such as front-end interfaces and non-critical data backends. Complex, system-critical exchange components see limited AI integration to mitigate risk. Coinbase’s engineering team also acknowledged that AI is not a universal solution and adoption varies by team and project requirements.
Mandatory AI Adoption and Internal Enforcement
Armstrong revealed on a podcast that Coinbase mandated AI tool usage among engineers following the acquisition of licenses for Cursor and GitHub Copilot. Engineers who resisted onboarding faced termination, a move Armstrong described as “heavy-handed” but necessary to accelerate AI adoption within the company.
This episode underscores the tension between rapid AI integration and maintaining robust security standards in high-stakes environments such as cryptocurrency exchanges.