Evaluations
Introducing Evaluations, a powerful feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or benchmarking performance, Oxen Evaluations simplify the process, letting you quickly run a prompt over every row of a dataset.
Once you're happy with the results, you can write the resulting dataset to a new file, to another branch, or directly as a new commit.
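Conceptually, an evaluation fills a prompt template with the columns of each dataset row and collects the model's response alongside the original data. A minimal sketch in Python of that loop (the `model_fn` stub stands in for the hosted model call, and the function names here are illustrative, not Oxen's actual API):

```python
def fill_template(template: str, row: dict) -> str:
    """Substitute {column} placeholders with values from a dataset row."""
    prompt = template
    for key, value in row.items():
        prompt = prompt.replace("{" + key + "}", str(value))
    return prompt

def run_evaluation(template: str, rows: list, model_fn) -> list:
    """Run the prompt template over every row, collecting each response
    as a new column next to the original row data."""
    results = []
    for row in rows:
        prompt = fill_template(template, row)
        results.append({**row, "response": model_fn(prompt)})
    return results

# Example with an echo stub in place of a real model call.
dataset = [{"prompt": "What is 2 + 2?"}, {"prompt": "Name a planet."}]
echo_model = lambda p: "response to: " + p
results = run_evaluation("{prompt}", dataset, echo_model)
```

The output keeps one result row per input row, which is what makes it natural to save the evaluated dataset back as a new file or commit.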
For example, two recent evaluation runs against OpenAI/GPT-4o:

- A 5-row sample run by netsin with the prompt "How to build a city on mars {prompt}" (1 iteration, 2470 tokens, completed in 38 seconds).
- A 5-row sample run by Luke Byrne with the prompt "{prompt} Select one of the following choices, returning the index of the choice and the exact text in a json object: <choices>{choices}</choices>" (2 iterations, 623 tokens, completed in 9 seconds).
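The second run above asks the model to return its choice as a JSON object containing the index and the exact text. A response in that shape can be validated against the original choices before it is written back to the dataset; a sketch, assuming field names `index` and `text` (the prompt itself does not pin down the key names):

```python
import json

def parse_choice(response: str, choices: list) -> int:
    """Parse a model response like {"index": 2, "text": "blue"} and verify
    it against the original choices. Field names are an assumption here,
    since the prompt only asks for "the index" and "the exact text"."""
    data = json.loads(response)
    idx, text = data["index"], data["text"]
    if not (0 <= idx < len(choices)) or choices[idx] != text:
        raise ValueError("model response does not match the given choices")
    return idx

choices = ["red", "green", "blue"]
idx = parse_choice('{"index": 2, "text": "blue"}', choices)
```

Checking both the index and the echoed text catches the common failure mode where a model returns a plausible-looking index that doesn't match any of the supplied choices.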