Evaluations
Run models against your data
Evaluations let you test and compare AI models against your own datasets.
Whether you're fine-tuning models or benchmarking performance, Oxen Evaluations simplify the process: pick a model, write a prompt, and run it over every row of a dataset.
Once you're happy with the results, write the output dataset to a new file, another branch, or directly as a new commit.
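The flow described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not the Oxen implementation: `call_model` is a hypothetical stand-in for a real provider API call, and the column names are made up for the example.

```python
import csv
import io

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a model API call; a real evaluation
    # would send the rendered prompt to the chosen provider.
    return f"echo: {prompt}"

def run_evaluation(rows, template: str, output_column: str):
    """Render the prompt template against each row and store the
    model's response in a new column."""
    results = []
    for row in rows:
        prompt = template.format(**row)  # {column} placeholders filled per row
        results.append({**row, output_column: call_model(prompt)})
    return results

# Tiny in-memory dataset standing in for a file in the repo.
data = list(csv.DictReader(io.StringIO("category,question\nmath,What is 2+2?\n")))
out = run_evaluation(data, "Answer the {category} question: {question}", "response")
```

The resulting rows keep the original columns plus the new response column, so they can be written back out as a new dataset file.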
Example evaluation runs:

- OpenAI / GPT 4o mini (text to text), prompt "test": completed on a 5 row sample, 114 tokens, $0.0001, 1 iteration
- Google / Text Embedding 004 (text to embeddings), prompt "label": completed on a 5 row sample, 0 tokens, $0.0000, 1 iteration
- Google / Text Embedding 004 (text to embeddings), prompt "path": completed on a 5 row sample, 0 tokens, $0.0000, 3 iterations
- Google / Gemini 2.0 Flash Lite (text to text): completed on a 5 row sample, 6501 tokens, $0.0011, 1 iteration, with the templated prompt:

  Category: {category}
  Model: {model}
  Prompt: {prompt}
  Response: {response}

  Critique {model}'s response.
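The critique prompt in the last run uses `{column}` placeholders that are filled from each dataset row. A small sketch of that rendering step, with an illustrative row whose values are invented for the example:

```python
# The critique template from the example run; {name} placeholders
# correspond to columns in the dataset.
template = (
    "Category: {category}\n"
    "Model: {model}\n"
    "Prompt: {prompt}\n"
    "Response: {response}\n\n"
    "Critique {model}'s response."
)

# Illustrative row; the column names match the placeholders above.
row = {
    "category": "geography",
    "model": "gpt-4o-mini",
    "prompt": "Name the capital of France.",
    "response": "Paris",
}

rendered = template.format(**row)
```

Each row yields its own rendered prompt, which is what actually gets sent to the model.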