Machine Learning Operations (MLOps) company Deepchecks today announced the release of its open-source platform for continuously validating machine learning (ML) models. This new offering aims to establish an ML safety and predictability standard, bridging the gap between research and production.
In addition, the company has secured $14 million in seed funding, in a round led by Alpha Wave Ventures with participation from Hetz Ventures and Grove Ventures.
As ML moves from lengthy research projects to agile software-like development cycles, the industry requires robust processes and tools to ensure timely and high-quality deployments. Unlike traditional software development, ML’s complex and opaque nature poses challenges to its safe and predictable implementation.
Deepchecks asserts that it tackles these challenges by drawing upon lessons from software development. The company’s new offering empowers developers to attain visibility and confidence throughout the entire ML lifecycle, encompassing development, deployment and production operations.
Transitioning models into production
Deepchecks CTO Shir Chorev emphasized her company’s commitment to equipping practitioners with user-friendly tools for constructing and customizing the crucial tests, such as regression tests, that identify and prevent problems. These tests can be created and applied in a reusable and efficient manner.
She believes this assistance helps businesses overcome a significant hurdle: the transition of reliable models into production.
“Deepchecks applies the principles of continuous testing and validation from software development to ML, making the development process more efficient and agile,” she added. “This allows practitioners to take responsibility for their models’ performance, the stability of the systems they develop, and easily reuse validation tests throughout the ML lifecycle and across different organizational tasks, minimizing time spent on non-critical tasks.”
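The kind of reusable validation test described above can be illustrated with a minimal sketch. This is a hypothetical example in plain Python, not Deepchecks’ actual API; the function name, result format and tolerance threshold are all illustrative.

```python
# Hypothetical sketch of a reusable regression check for ML models.
# Names and thresholds are illustrative, not a specific product's API.

def accuracy_regression_check(baseline_acc, candidate_acc, tolerance=0.02):
    """Fail if the candidate model's accuracy drops more than
    `tolerance` below the baseline model's accuracy."""
    drop = baseline_acc - candidate_acc
    passed = drop <= tolerance
    return {"check": "accuracy_regression",
            "drop": round(drop, 4),
            "passed": passed}

# The same check can be reused across models and lifecycle stages:
result = accuracy_regression_check(baseline_acc=0.91, candidate_acc=0.87)
print(result)  # a 0.04 drop exceeds the 0.02 tolerance, so the check fails
```

Because the check is an ordinary function with a structured result, it can be rerun at development, deployment and production stages without being rewritten.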
The new tool also provides monitoring and root cause analysis features for production environments. The company claims the platform has garnered more than 500,000 downloads and is already being used by renowned companies including AWS, Booking and Wix, as well as in highly regulated sectors like finance and healthcare.
Deepchecks said that its enterprise version offers advanced collaboration and security features.
Enhancing AI model testing through validation and monitoring
Chorev said that despite the ML market’s projected rapid growth — it is estimated to reach $225.91 billion by 2030 — only half of ML models successfully make it to production. These models frequently encounter time and budget constraints or suffer significant failures.
She said she believes this underscores the necessity for enhanced approaches to bolster applications’ reliability and predictability.
“Implementing testing and validation in ML is different due to inherent challenges (many moving parts, no clear ‘code coverage’ alternatives and frequent silent failures),” Chorev said. “Therefore, we aim to provide a well-defined solution that automates test running, supports efficient repeatability and reusability within the organization and helps with collaboration and sharing through clear dashboards and reports.”
Verifying AI systems work as intended
The company’s new offering benefits practitioners, developers and stakeholders, she said. It enhances transparency and trust while improving the efficiency of implementing these measures.
Chorev cofounded Deepchecks with CEO Philip Tannor three years ago. Both have been recognized in Forbes’ 30 under 30 list. Their backgrounds encompass experience in the IDF’s Talpiot program and the elite 8200 intelligence unit, where they acquired expertise in ML.
“We identified a significant obstacle to broader and safer AI adoption: the need to effectively verify that AI systems work as intended and don’t go off the rails,” Chorev added. “Essentially, we were looking for a solution like Deepchecks but couldn’t find one. Realizing the market need and the technological challenges to overcome it, we teamed up to develop a solution ourselves.”
A future of opportunities in machine learning validation and MLOps
The company assists organizations in implementing and executing comprehensive testing and continuous integration (CI) processes. It facilitates collaboration by enabling the sharing of validation results with stakeholders and efficient iterations with auditors.
Chorev said this streamlined approach ensures an effective and efficient validation process.
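A CI-gated validation process of the sort described here can be sketched as follows. This is a generic illustration, assuming a simple suite of pass/fail checks over model metrics; the check names, metrics and structure are invented for the example and do not represent Deepchecks’ implementation.

```python
# Minimal sketch of gating a CI pipeline on ML validation results.
# Check names and metrics are illustrative.
import sys

def run_suite(checks, model_metrics):
    """Run every check against the model's metrics and collect results."""
    return [(name, check(model_metrics)) for name, check in checks.items()]

# A reusable suite of validation checks, shared across the organization.
checks = {
    "min_accuracy": lambda m: m["accuracy"] >= 0.85,
    "max_latency_ms": lambda m: m["latency_ms"] <= 50,
}

metrics = {"accuracy": 0.88, "latency_ms": 42}
results = run_suite(checks, metrics)
for name, passed in results:
    print(f"{name}: {'PASS' if passed else 'FAIL'}")

# In CI, a nonzero exit code blocks the deployment.
if not all(passed for _, passed in results):
    sys.exit(1)
```

Running such a suite automatically on every model change is what replaces the manual, expert-driven validation passes mentioned below.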
“When scaling up, you’ve got skilled and costly experts involved in ML validation, unlike traditional QA, which is often an entry-level role,” she explained. “That’s where Deepchecks comes in, allowing enterprises to automatically incorporate it into their processes and minimizing the time spent on manual validation processes.”
The enterprise version enables testing, validation and monitoring of multiple models simultaneously, she said. Deepchecks also provides relevant dashboards and enables advanced user management and permission features.
Open source essential
Chorev said that the open-source nature of the company’s tools played a big part in gaining traction across the tech industry, even among large enterprises.
“Traditionally, those enterprises went for closed systems (SaaS), but things are changing now,” Chorev said. “In our space, open-source solutions are great for data privacy and security because you can use them locally and don’t have to send your data outside your organization.”
The company’s approach and structure have enabled it and its users to easily expand support for various data types and integrations, and to add validation to different phases and processes within the AI lifecycle, she added.
“This ensures problems are caught efficiently and early,” Chorev said.