Run:ai raises a total of $118m, aims to simplify AI infrastructure

“It may sound dramatic, but AI is really the next phase of humanity’s development,” said Omri Geller, CEO and co-founder of Run:ai.

Ronen Dar and Omri Geller, Run:ai's co-founders (photo credit: Run:ai)

Run:ai, which simplifies artificial intelligence (AI) infrastructure orchestration and management, has raised $75 million in a Series C round led by Tiger Global Management and Insight Partners, which also led the previous Series B round. The round includes the participation of additional existing investors TLV Partners and S Capital VC, and brings the total funding raised to date to $118 million.

In the last year, Run:ai has seen a nine-fold increase in annual recurring revenue, and the company’s staff has more than tripled. The company plans to use the investment to further grow its global teams and will also be considering strategic acquisitions as it develops and enhances the company’s Atlas software platform.

“It may sound dramatic, but AI is really the next phase of humanity’s development,” said Omri Geller, CEO and co-founder of Run:ai. “When we founded Run:ai, our vision was to build the de facto foundational layer for running any AI workload. Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster.”

According to research firm IDC, global AI spending in 2022 could reach $433 billion – a nearly 20% annual increase. In light of that forecasted expansion, Run:ai aims to simplify AI infrastructure: the company’s Atlas platform provides a so-called “Foundation for AI Clouds,” allowing organizations to manage their AI resources on a single, unified platform that supports AI at all stages of development, from building and training models to running inference in production.

“We do for AI hardware what VMware and virtualization did for traditional computing: more efficiency, simpler management, greater user productivity,” said Run:ai CTO and co-founder Ronen Dar.

Artificial intelligence (credit: PIXABAY/WIKIMEDIA)

“Traditional CPU computing has a rich software stack with many development tools for running applications at scale. AI, however, runs on dedicated hardware accelerators such as GPUs, which have few tools to help with their implementation and scaling,” said Dar. “With Run:ai Atlas, we’ve built a cloud-native software layer that abstracts AI hardware away from data scientists and ML engineers, letting Ops and IT simplify the delivery of compute resources for any AI workload and any AI project.”

Lonne Jaffe, managing director at Insight Partners, said, “As enterprises in every industry reimagine themselves to become learning systems powered by AI and human talent, there has been a global surge in demand for AI hardware chipsets such as GPUs. As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively.”