
AI testing stands apart from conventional software testing, demanding a consistent yet adaptive approach applied continuously to large datasets. This distinction arises from the inherently probabilistic and dynamic nature of AI, which necessitates ongoing evaluation.
Traditional software testing typically relies on a fixed set of test cases designed to verify specific functionality. AI systems, however (particularly those employing machine learning algorithms), learn and adapt based on the data they are exposed to. Their behavior can therefore change over time, making it difficult to predict and to test against a static set of scenarios. Ensuring the reliability and effectiveness of AI systems requires continuously monitoring and evaluating their performance on diverse, representative datasets.
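To make that concrete, here is a minimal sketch of continuous evaluation in Python. The model's `predict` method and the stream of labeled batches are assumptions for illustration, not any vendor's actual API:

```python
# A minimal sketch of continuous evaluation, assuming a hypothetical
# model with a `predict` method and a stream of labeled batches.
# All names are illustrative, not any vendor's API.
from statistics import mean

def evaluate_batch(model, batch):
    """Return the model's accuracy on one labeled batch of (x, y) pairs."""
    return mean(model.predict(x) == y for x, y in batch)

def monitor(model, batches, baseline=0.95, tolerance=0.05):
    """Flag any batch whose accuracy drifts below the expected baseline."""
    for i, batch in enumerate(batches):
        accuracy = evaluate_batch(model, batch)
        if accuracy < baseline - tolerance:
            print(f"batch {i}: accuracy {accuracy:.3f} fell below threshold")
```

Because the model's behavior can shift as data shifts, the point is the loop itself: evaluation never ends with a single passing run.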
Moreover, the probabilistic nature of AI algorithms introduces uncertainty into the testing process itself. Even if an AI system performs well on a given set of test data, there is no guarantee that it will perform equally well on unseen data. This is why AI testing often employs statistical analysis and hypothesis testing to quantify confidence in the results.
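As a hedged illustration of what such hypothesis testing can look like, the sketch below uses a two-sample Kolmogorov-Smirnov test (via SciPy) to check whether a model's output scores on new data plausibly come from the same distribution as scores on reference data. The 0.05 significance level is a conventional default, not a prescribed value:

```python
# A two-sample Kolmogorov-Smirnov test comparing the distribution of
# output scores on reference data against scores on new data.
from scipy.stats import ks_2samp

def distributions_match(reference_scores, new_scores, alpha=0.05):
    """Return True if we cannot reject (at level alpha) that both score
    samples come from the same underlying distribution."""
    statistic, p_value = ks_2samp(reference_scores, new_scores)
    return p_value >= alpha
```

A failing check here does not prove the model is broken; it flags a shift in behavior that merits investigation.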
Businesses therefore need a platform that mitigates the risks of deploying faulty AI products and safeguards their financial, regulatory and reputational interests.
Following a $19 million Series A funding round led by Two Sigma Ventures, Distributional announced initial enterprise deployments of its AI testing platform that gives AI engineering and product teams confidence in the reliability of their AI applications, reducing operational AI risk in the process.
Distributional is designed to evaluate the reliability of any AI/ML application, particularly GenAI, which is notoriously unpredictable due to its tendency to produce varying outputs from identical inputs. GenAI applications also undergo frequent changes in their underlying components, which are difficult to control. As AI leaders face increasing pressure to release GenAI products, Distributional offers automated AI testing with intelligent recommendations: it enhances application data, suggests tests and creates a feedback loop that dynamically adjusts those tests for each specific AI application.
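One simple way to quantify that unpredictability is to run the same input repeatedly and measure how consistent the outputs are. The sketch below assumes a hypothetical `generate(prompt)` function standing in for any LLM call; it is illustrative only and unrelated to Distributional's implementation:

```python
# An illustrative measure of GenAI output variability: run the same
# prompt repeatedly and see how often the most common output recurs.
# `generate(prompt)` is a hypothetical stand-in for an LLM call.
from collections import Counter

def output_consistency(generate, prompt, n_runs=20):
    """Return the fraction of runs that produced the modal output
    (1.0 means fully deterministic; near 1/n_runs means highly varied)."""
    outputs = Counter(generate(prompt) for _ in range(n_runs))
    modal_count = outputs.most_common(1)[0][1]
    return modal_count / n_runs
```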
Distributional's platform better equips AI product teams to proactively and continuously identify, understand and address AI risks before they impact customers.
Distributional's adaptable testing framework allows AI application teams to gather and enhance data, conduct tests on this data, report on test results, categorize these results and resolve issues through either adaptive adjustments or analysis-driven debugging. This framework can be deployed as a self-managed solution within a customer's virtual private cloud and integrates with existing data stores, workflow systems and alert platforms.
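Purely for illustration, the workflow those stages describe (run tests on data, record results, surface failures for triage) might be organized along these lines; none of the classes below correspond to Distributional's actual interfaces:

```python
# An illustrative arrangement of a test workflow: run tests on data,
# record results, and surface failures for triage and debugging.
from dataclasses import dataclass, field

@dataclass
class TestResult:
    name: str
    passed: bool
    details: str = ""

@dataclass
class TestSession:
    results: list = field(default_factory=list)

    def run(self, tests, data):
        """tests maps a test name to a function returning (passed, details)."""
        for name, test_fn in tests.items():
            passed, details = test_fn(data)
            self.results.append(TestResult(name, passed, details))

    def failures(self):
        """Failed results, ready for categorization and resolution."""
        return [r for r in self.results if not r.passed]
```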
Teams use Distributional's customizable testing dashboards to collaborate on test repositories, analyze test results, prioritize failed tests, adjust tests, record test session audit trails and report test outcomes for governance purposes. This lets multiple teams work together on an AI testing workflow throughout the application's lifecycle and establish standardized processes across AI platform, product, application and governance teams.
Distributional also makes it simpler for teams to begin and scale AI testing by automating data augmentation and test selection, and by adjusting both through an adaptive preference learning process. This intelligence refines the test suite to suit a particular AI application throughout its production lifecycle and scales testing across all properties of all components of all AI applications.
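As a rough sketch of how such preference-driven adaptation could work, the example below uses a simple multiplicative-weights update: tests whose failure signals a team marks useful gain weight, and dismissed ones lose weight. This is an assumption illustrating the feedback-loop idea, not Distributional's actual algorithm:

```python
# A multiplicative-weights sketch of preference-driven test adaptation:
# useful failure signals raise a test's weight, dismissed ones lower it.
# Illustrative only; not Distributional's actual algorithm.
def update_test_weights(weights, feedback, learning_rate=0.2):
    """weights: {test_name: float}; feedback: {test_name: bool}, where
    True means the team found that test's failure signal useful."""
    for test, useful in feedback.items():
        factor = (1 + learning_rate) if useful else (1 - learning_rate)
        weights[test] = weights.get(test, 1.0) * factor
    total = sum(weights.values())
    return {test: w / total for test, w in weights.items()}
```

Over repeated feedback rounds, a scheme like this concentrates testing effort on the checks a given team actually acts on.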
“Between my previous line of work optimizing AI applications at SigOpt and deploying AI applications at Intel, and through conversations with Fortune 500 CIOs, it became clear that reliability of AI applications is both critical and challenging to assess,” said Scott Clark, co-founder and CEO of Distributional. “With Distributional, we have built a scalable statistical testing platform to discover, triage, root cause and resolve issues with the consistency of AI/ML application behavior.”
The funding round also had participation from Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, Alumni Ventures and dozens of angel investors. The new round brings Distributional’s total capital raised to $30 million less than one year since incorporation.
Be part of the discussion about the latest trends and developments in the Generative AI space at Generative AI Expo, taking place February 11-13, 2025 in Fort Lauderdale, Florida. Generative AI Expo covers the evolution of GenAI and will feature conversations on its potential across industries and on how the technology is already helping businesses improve operations, enhance customer experiences and unlock new growth opportunities.
Edited by Alex Passett