Run:AI Raises $13M for the Super-fast AI Software Stack of the Future

3 April 2019

TEL AVIV, Israel, April 3, 2019 /PRNewswire/ -- Israeli startup Run:AI exited stealth mode today with the announcement of $13 million in funding for its virtualization and acceleration solution for deep learning. Run:AI bridges the gap between data science and computing infrastructure by creating a high-performance compute virtualization layer for deep learning, speeding up the training of neural network models and enabling the development of very large AI models. The funding comprises a $10 million Series A round led by Haim Sadger's S Capital and TLV Partners, following a $3 million seed round from TLV Partners.

Run:AI logo

Deep learning uses neural networks that mimic some functions of the human brain and is the most complex and advanced type of AI, powering applications such as image recognition, autonomous vehicles, smart assistants like Alexa and Siri, and much more.

Deep learning models must be trained before they can be used, a process that takes time and significant computing power. Companies often run deep learning workloads on very large numbers of Graphics Processing Units (GPUs) or specialized AI cores. These workloads run continuously for days or weeks on expensive on-premises hardware or with cloud-based providers.

The time and cost of training new models are the biggest barriers to creating new AI solutions and bringing them to market quickly. Deep learning requires experimentation, and slightly modified training workloads may be run hundreds of times before a model is accurate enough to use. The result is very long time-to-delivery as workflow complexity and costs grow.

Run:AI has completely rebuilt the software stack for deep learning to get past the limits of traditional computing, making training massively faster, cheaper and more efficient. It does this by virtualizing many separate compute resources into a single giant virtual computer with nodes that can work in parallel.

"Traditional computing uses virtualization to help many users or processes share one physical resource efficiently; virtualization tries to be generous," said Omri Geller, Run:AI co-founder and CEO. "But a deep learning workload is essentially selfish since it requires the opposite: it needs the full computing power of multiple physical resources for a single workload, without holding anything back. Traditional computing software just can't satisfy the resource requirements for deep learning workloads."

The company's software is tailored for these new computational workloads. The low-level solution works "close to the metal," taking full advantage of new AI hardware. It creates a compute abstraction layer that automatically analyzes the computational characteristics of the workloads, eliminating bottlenecks and optimizing them for faster and easier execution using graph-based parallel computing algorithms. It also automatically allocates and runs the workloads. This makes deep learning experiments run faster, lowers GPU costs, and maximizes server utilization while simplifying workflows.
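The release does not disclose how Run:AI's allocator actually works. Purely as an illustration of the kind of automatic workload placement described above, here is a minimal greedy best-fit sketch in Python; the `GPU` class, job names, and memory figures are all hypothetical, not Run:AI's API.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    name: str
    free_mem_gb: float
    jobs: list = field(default_factory=list)

def allocate(jobs, gpus):
    """Greedy best-fit: place each job (largest first) on the GPU with
    the least remaining memory that still fits it, keeping big slots free.
    Jobs that fit nowhere are left unplaced (queued)."""
    placement = {}
    for job_name, mem_needed in sorted(jobs.items(), key=lambda kv: -kv[1]):
        candidates = [g for g in gpus if g.free_mem_gb >= mem_needed]
        if not candidates:
            placement[job_name] = None  # queue until resources free up
            continue
        best = min(candidates, key=lambda g: g.free_mem_gb)
        best.free_mem_gb -= mem_needed
        best.jobs.append(job_name)
        placement[job_name] = best.name
    return placement
```

For example, with a 16 GB and an 8 GB GPU, a 12 GB job and an 8 GB job are placed on separate devices, while a 6 GB job is queued until memory frees up. A production scheduler would, as the release notes, also weigh network bandwidth, cost, and the data pipeline.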

Behind the scenes, Run:AI uses advanced mathematics to break up the original deep learning model into multiple smaller models that run in parallel. This has the additional benefit of bypassing memory limits, letting companies run models that are bigger than the GPU RAM that they usually have available.
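Run:AI's actual decomposition method is not described in the release. The core idea of splitting one model's computation across devices can be sketched with a plain NumPy example: a single layer's weight matrix is partitioned column-wise so that each "device" holds and computes only a shard, and the concatenated partial outputs match the full layer. This is a toy illustration, not Run:AI's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # batch of 4 inputs, 8 features each
W = rng.standard_normal((8, 6))   # full weight matrix (imagine it exceeds one GPU's RAM)

# Split the weight matrix column-wise into two shards, one per device.
shards = np.split(W, 2, axis=1)   # each shard is 8x3

# Each device computes its partial output independently (in parallel in practice).
partials = [x @ s for s in shards]

# Concatenating the partial outputs reproduces the full layer's output.
y_parallel = np.concatenate(partials, axis=1)
assert np.allclose(y_parallel, x @ W)
```

Because each device only ever stores its own shard, the combined model can be larger than any single device's memory, which is the benefit the paragraph above describes.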

Run:AI was founded by Omri Geller, Dr. Ronen Dar, and Prof. Meir Feder. The three met while Ronen and Omri studied under Prof. Feder at Tel Aviv University. Ronen was previously a postdoctoral researcher at Bell Labs and an R&D and algorithms engineer at Apple, Anobit, and Intel. Omri was a member of an elite technological unit of the Israeli military, where he led large-scale projects and deployments. Prof. Meir Feder previously founded and sold two startups and is an internationally recognized authority in information theory.

Rona Segev-Gal, Managing Partner of TLV Partners, said, "Executing deep neural network workloads across multiple machines is a constantly moving target, requiring recalculations for each model and iteration based on availability of resources. Run:AI determines the most efficient and cost-effective way to run a deep learning training workload, taking into account the network bandwidth, compute resources, cost, configurations, and the data pipeline and size. We've seen many AI companies in recent years, but Omri, Ronen and Meir's approach blew our minds."

Aya Peterburg, Managing Partner of S Capital, said, "Run:AI is the third AI company we're investing in, so we've learned a lot about what makes a strong, successful startup in the space. The talent and experience of the Run:AI team gave us huge confidence that they can fill this vital need in the growing sector of developing deep learning solutions."

Run:AI's team brings together deep learning, hardware, and parallel computing experts covering different areas of the AI industry, giving them a holistic understanding of the real-world needs of AI development. In stealth since it was founded in 2018, the company has already signed several early customers internationally and has established a US office.

Media Contact

Lazer Cohen





SOURCE Runai Labs