Introducing Evaluations, a powerful new feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplify the process, letting you quickly run a prompt over an entire dataset.
Once you're happy with the results, write the output dataset to a new file, another branch, or directly as a new commit.
For example, the prompt template `generate a random number between {x_position} and {y_position}` can be run as a text → text evaluation against OpenAI's GPT-4o mini, with the placeholders filled in from the source columns of each row and the model's response written to a target column.
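Conceptually, an evaluation like this fills the prompt template with each row's column values, sends the rendered prompt to the model, and stores the response in a new column. The sketch below illustrates that loop; `run_model` is a hypothetical stand-in for a real GPT-4o mini call, and the column names `x_position`, `y_position`, and `target` are taken from the example above.

```python
def render_prompt(template: str, row: dict) -> str:
    """Substitute {column_name} placeholders with this row's values."""
    return template.format(**row)

def run_model(prompt: str) -> str:
    # Hypothetical placeholder for a real chat-completion API call;
    # here it just echoes the prompt so the sketch is self-contained.
    return f"model output for: {prompt}"

def evaluate(template: str, rows: list[dict]) -> list[dict]:
    """Run the prompt template over every row, appending the response."""
    results = []
    for row in rows:
        prompt = render_prompt(template, row)
        # Store the model's response alongside the original columns.
        results.append({**row, "target": run_model(prompt)})
    return results

rows = [
    {"x_position": 1, "y_position": 10},
    {"x_position": 5, "y_position": 50},
]
out = evaluate(
    "generate a random number between {x_position} and {y_position}", rows
)
```

In a real run, `run_model` would be replaced by an API call to the selected model, and the results could then be committed back to the repository as a new version of the dataset.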