Evaluations
Run models against your data
Introducing Evaluations, a feature for testing and comparing AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplify the process of running a prompt over every row of a dataset.
Once you're happy with the results, write the output dataset to a new file, to another branch, or directly as a new commit.
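Conceptually, an evaluation substitutes each dataset row into a prompt template, sends the prompt to the chosen model, and collects the responses as a new column. The Oxen.ai UI handles this for you; the sketch below shows the equivalent loop in plain Python, assuming a local CSV with a `data` column and the `openai` client as an illustrative stand-in for whichever model provider you pick.

```python
# Minimal sketch of what an evaluation run does under the hood.
# The file names, column name, and model are assumptions for the example.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
template = 'please say "{data}"'  # prompt template with a column placeholder

df = pd.read_csv("dataset.csv")  # hypothetical input with a `data` column
responses = []
for row in df.to_dict(orient="records"):
    prompt = template.format(**row)  # fill the template from this row
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(completion.choices[0].message.content)

df["response"] = responses
df.to_csv("results.csv", index=False)  # write the augmented dataset to a new file
```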
Prompt templates reference dataset columns with curly-brace placeholders. For example, `what is this? {type} {data}` fills in each row's `type` and `data` values. Each run targets a model of your choice, such as Anthropic's Claude 3.7 Sonnet, OpenAI's GPT 4o mini, or Meta's Llama 3.1 8B Instruct Turbo, and records the prompt, iteration count, and cost alongside the results.
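To illustrate the final step of saving results as a new commit, here is a minimal sketch using the `oxen` Python client. The repository path, file name, and commit message are assumptions for the example.

```python
# Minimal sketch of committing evaluation results with the `oxen`
# Python client; the path and message are illustrative assumptions.
from oxen import Repo

repo = Repo("/path/to/repo")  # local Oxen repository
repo.add("results.csv")  # stage the evaluation output
repo.commit("Add evaluation results")  # record it as a new commit
```

Writing to another branch works the same way: check out (or create) the branch first, then add and commit the results file there.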