Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplifies the process by letting you run a prompt over an entire dataset in one pass.
Once you're happy with the results, write the resulting dataset to a new file, to another branch, or directly as a new commit.
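To make the idea concrete, here is a minimal sketch of what "running a prompt through a dataset" means: a prompt template with `{column}` placeholders is filled in from each row of a tabular dataset, producing one prompt per row. The template syntax mirrors what you enter in the Evaluations UI; the column names and data here are hypothetical, and in practice the dataset would be a file in your Oxen repository and each rendered prompt would be sent to the selected model.

```python
import csv
from io import StringIO

# A prompt template using {column} placeholders, as in the Evaluations UI.
template = "{question}\n\nThe options are as follows: {options}"

# Stand-in dataset; in a real run this would be a CSV/parquet file in your repo.
data = StringIO(
    "question,options\n"
    'What color is a clear daytime sky?,"blue, green, red"\n'
)

# One rendered prompt per dataset row, ready to send to the chosen model.
prompts = [template.format(**row) for row in csv.DictReader(data)]
print(prompts[0])
```

Each rendered prompt is then dispatched to the model, and the responses are collected into a new column alongside the original rows.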