Tigris Data Challenges Big Cloud with Distributed AI-Optimized Storage

Lilu Anderson

Distributed Storage for Modern AI Workloads

The surge in artificial intelligence development has intensified demand for computing power, prompting companies such as CoreWeave, Together AI, and Lambda Labs to offer distributed compute capacity. Data storage, however, remains largely centralized with the major cloud providers (Amazon Web Services, Google Cloud, and Microsoft Azure), whose architectures prioritize proximity to their own compute resources rather than availability across multiple regions or clouds. Tigris Data, a startup founded by former Uber storage platform engineers, aims to disrupt this model by building a network of localized data storage centers optimized for AI workloads. The company’s platform is designed to “move with your compute”: it automatically replicates data to wherever GPUs are running, supports billions of small files, and provides the low-latency access needed for AI training, inference, and agentic tasks.
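
The article does not specify the interface Tigris exposes, but the idea of storage that "moves with your compute" can be sketched as follows, assuming an S3-compatible object API; the endpoint URL, bucket name, and credentials below are purely illustrative placeholders.

```python
# Minimal sketch: reading the same training data from any compute provider,
# assuming an S3-compatible object storage interface (an assumption, not a
# detail from the article). Endpoint, bucket, and credentials are placeholders.
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("STORAGE_ENDPOINT", "https://storage.example.com"),
    aws_access_key_id=os.environ["STORAGE_ACCESS_KEY"],
    aws_secret_access_key=os.environ["STORAGE_SECRET_KEY"],
)

# The same bucket and keys are addressed identically whether this runs on a
# CoreWeave, Lambda Labs, or hyperscaler GPU instance; in the model described
# above, replication toward the nearest GPUs is handled by the storage layer,
# not by the application.
response = s3.get_object(Bucket="training-data", Key="shards/shard-00001.tar")
shard_bytes = response["Body"].read()
print(f"Fetched {len(shard_bytes)} bytes")
```

In such a setup the application code stays unchanged as compute moves between providers; only the environment variables pointing at the storage endpoint and credentials differ.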

$25 Million Series A Funding to Accelerate Growth

Tigris recently closed a $25 million Series A round led by Spark Capital, with participation from existing investors including Andreessen Horowitz. This funding will support the expansion of its data center network and enhance its distributed storage capabilities to meet the growing needs of AI developers.

Addressing Cloud Egress Fees and Latency Challenges

Ovais Tariq, CEO of Tigris, highlights the limitations of the incumbent cloud providers, particularly their costly egress fees—colloquially known as the “cloud tax.” These fees penalize customers who transfer data between clouds or regions, restricting flexibility and inflating costs.
“Egress fees were just one symptom of a deeper problem: centralized storage that can’t keep up with a decentralized, high-speed AI ecosystem,” Tariq explained.
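To make the scale of the “cloud tax” concrete, a back-of-envelope illustration follows; the dataset size, transfer frequency, and per-gigabyte rate are illustrative assumptions, not published provider pricing or figures from the article.

```python
# Illustrative back-of-envelope estimate of cross-cloud egress costs.
# All numbers below are hypothetical, not actual provider pricing.
dataset_tb = 50            # hypothetical multimodal training dataset
egress_rate_per_gb = 0.09  # illustrative per-GB egress rate (USD)
transfers_per_month = 4    # hypothetical cross-cloud or cross-region moves

monthly_egress_cost = dataset_tb * 1024 * egress_rate_per_gb * transfers_per_month
print(f"Illustrative monthly egress cost: ${monthly_egress_cost:,.2f}")
# ~$18,432 per month in this example, before any storage or compute charges.
```
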
Latency also remains a critical issue. Large cloud providers are not optimized for the high-speed, multi-region demands of AI workloads, which require rapid access to extensive datasets for training and real-time inference. Tigris’ distributed approach addresses these bottlenecks by localizing storage near compute resources.

Customer Insights: Generative AI Startups Benefit

Many of Tigris’ more than 4,000 customers are generative AI startups building models for image, video, and voice applications, workloads that depend on managing large, latency-sensitive datasets. Fal.ai, a Tigris customer, reports that egress fees previously accounted for the majority of its cloud spending.

“Tigris lets us scale our workloads in any cloud by providing access to the same data filesystem from all these places without charging egress,” said Batuhan Taskaya, head of engineering at Fal.ai.

Tariq emphasized the importance of co-locating compute and storage to achieve minimal latency, especially for AI agents processing local audio or other real-time data.

Data Ownership and Compliance Driving Distributed Storage Demand

Beyond technical advantages, companies are increasingly motivated by data ownership concerns. High-profile cases, such as Salesforce’s restrictions on Slack data usage, underscore the need for enterprises to maintain control over the datasets that fuel their AI models. Additionally, regulated sectors such as finance and healthcare require localized storage to comply with data security and privacy mandates, further driving demand for distributed architectures.

Expansion Plans to Support Global AI Growth

Since its founding in November 2021, Tigris has grown eightfold annually and currently operates data centers in Virginia, Chicago, and San Jose. With the new capital infusion, the company plans to expand its footprint to additional U.S. locations and key international markets including London, Frankfurt, and Singapore.

FinOracleAI — Market View

Tigris Data’s distributed storage platform addresses critical pain points in the AI infrastructure landscape, notably latency and prohibitive data transfer costs imposed by dominant cloud providers. Its AI-native design and multi-region replication capabilities position it well to serve the rapidly expanding generative AI market and industries with stringent data requirements.
  • Opportunities: Growing demand for scalable, low-latency storage aligned with distributed AI compute; increasing enterprise focus on data ownership and compliance; potential to disrupt incumbent cloud storage monopolies.
  • Risks: Entrenched market dominance of AWS, Google Cloud, and Azure; complexity of expanding global data center infrastructure; maintaining competitive pricing and performance.

Impact: Tigris Data’s approach could significantly reshape AI infrastructure by enabling more cost-efficient, distributed storage solutions that align with emerging decentralized compute trends, challenging the traditional cloud storage paradigm.

Lilu Anderson is a technology writer and analyst with over 12 years of experience in the tech industry. A graduate of Stanford University with a degree in Computer Science, Lilu specializes in emerging technologies, software development, and cybersecurity. Her work has been published in renowned tech publications such as Wired, TechCrunch, and Ars Technica. Lilu’s articles are known for their detailed research, clear articulation, and insightful analysis, making them valuable to readers seeking reliable and up-to-date information on technology trends. She actively stays abreast of the latest advancements and regularly participates in industry conferences and tech meetups. With a strong reputation for expertise, authoritativeness, and trustworthiness, Lilu Anderson continues to deliver high-quality content that helps readers understand and navigate the fast-paced world of technology.