Test & Tune
AI requires a new way of testing
With traditional applications, we know what the expected output should be, and our testing ensures we get what we expect. With AI, output can vary, and there may be a large number of "correct" variations. Testing for correct behavior therefore means validating factual correctness; verifying semantic accuracy; ensuring logical consistency; confirming that each response is appropriate in the context of previous responses; and identifying any biases in the responses.
Five dimensions of testing
Verify AI defines a new approach to AI testing. Our powerful AI agents test your custom AI application across five dimensions: factual correctness, semantic accuracy, context appropriateness, logical consistency, and bias.
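The five-dimension evaluation can be pictured as scoring each response once per dimension. This is a minimal sketch only; the names and the judge interface are illustrative assumptions, not Verify AI's actual API.

```python
# Sketch: score one response across the five testing dimensions.
# All identifiers here are illustrative, not Verify AI's real schema.
from dataclasses import dataclass

DIMENSIONS = (
    "factual_correctness",
    "semantic_accuracy",
    "context_appropriateness",
    "logical_consistency",
    "bias",
)

@dataclass
class DimensionScore:
    dimension: str
    score: float  # 0.0 (fail) to 1.0 (pass)

def evaluate_response(response: str, judge) -> list[DimensionScore]:
    """Score a single response on each dimension with a judge callable."""
    return [DimensionScore(d, judge(response, d)) for d in DIMENSIONS]

# Stand-in judge that passes everything, just to show the shape:
scores = evaluate_response("Paris is the capital of France.", lambda r, d: 1.0)
```

In practice the judge would itself be an AI agent specialized per dimension, returning a graded score rather than a constant.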
Automated Testing
Verify AI uses its own custom agent to automatically generate test cases with a baseline set of prompts and expected responses. These test cases can be run through each of the five dimensions, with a comprehensive test report generated for each run.
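A generated test case pairs a baseline prompt with an expected response, and a run produces per-dimension results for every case. The sketch below assumes a simple schema and stand-in app and judge functions; none of these names come from Verify AI itself.

```python
# Sketch: run a suite of prompt/expected-response test cases through the
# application and collect a per-dimension report. Names are assumptions.
from dataclasses import dataclass, field

DIMENSIONS = (
    "factual_correctness",
    "semantic_accuracy",
    "context_appropriateness",
    "logical_consistency",
    "bias",
)

@dataclass
class TestCase:
    prompt: str
    expected: str

@dataclass
class TestReport:
    results: dict = field(default_factory=dict)  # prompt -> {dimension: score}

def run_suite(cases, app, judge) -> TestReport:
    """Run the app on every case and score its response on each dimension."""
    report = TestReport()
    for case in cases:
        response = app(case.prompt)
        report.results[case.prompt] = {
            dim: judge(response, case.expected, dim) for dim in DIMENSIONS
        }
    return report

# Stand-in app that answers correctly, and a judge that checks equality:
cases = [TestCase("What is 2+2?", "4")]
report = run_suite(
    cases,
    app=lambda prompt: "4",
    judge=lambda resp, exp, dim: 1.0 if resp == exp else 0.0,
)
```

A real report would also aggregate scores across cases and flag the dimensions where the application falls below a threshold.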
Person-in-the-Loop testing
Augment your automated testing with your own suite of test cases. This is strongly recommended to provide appropriate checks on AI testing AI.
Tune your application
Verify AI supports human feedback to tune your application based on testing output, using that feedback to update the application's behavior or its policies.
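One way to picture the tuning loop: human reviewers approve or reject test output, and rejected items are folded back into the application's policy. Representing the policy as a list of rules is an assumption made purely for illustration.

```python
# Sketch of a feedback-driven tuning loop. The policy-as-rule-list model
# is an illustrative assumption, not Verify AI's actual mechanism.

def apply_feedback(policy_rules: list[str], feedback: list[dict]) -> list[str]:
    """Append a corrective rule for each piece of negative human feedback."""
    updated = list(policy_rules)
    for item in feedback:
        if not item["approved"]:
            updated.append(f"Avoid: {item['issue']}")
    return updated

policy = ["Answer concisely."]
feedback = [{"approved": False, "issue": "speculating about medical dosages"}]
policy = apply_feedback(policy, feedback)
```

Each testing run then re-validates the updated policy, closing the test-and-tune cycle the section describes.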