Introducing Evaluations, a new feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplify the process: pick a model, write a prompt template, and run it over every row of a dataset.
Once you're happy with the results, write the resulting dataset to a new file, to another branch, or directly as a new commit.
As an example, here is a political spam classification evaluation run by Mathias Barragan against the `texts.parquet` file on the `main` branch, using OpenAI/GPT-4o with the following prompt:

> Based on the text message below, is the text political spam:
> {message}
> Answer with only one word, either "True" or "False"

A 5-row sample run completed in 2 seconds (1 iteration, 376 tokens). The full evaluation over all 1,620 rows completed in 18 minutes 27 seconds (2 iterations, 120,306 tokens), writing the results back to `texts.parquet` on `main`.
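Conceptually, an evaluation run fills the prompt template from each dataset row, sends the rendered prompt to the model, and collects one answer per row. Here is a minimal sketch of that loop; `call_model` is a hypothetical stand-in for a real chat-completion API call, and the stub used in the example is not a real classifier.

```python
# Sketch of the evaluation loop: render a {column}-style prompt template
# for each row, query the model, and collect the predictions.

PROMPT_TEMPLATE = (
    "Based on the text message below, is the text political spam:\n"
    "{message}\n"
    'Answer with only one word, either "True" or "False"'
)

def render_prompt(template: str, row: dict) -> str:
    """Substitute {column} placeholders with values from a dataset row."""
    return template.format(**row)

def run_evaluation(rows, call_model):
    """Render the prompt for every row and collect the model's answers."""
    results = []
    for row in rows:
        prompt = render_prompt(PROMPT_TEMPLATE, row)
        answer = call_model(prompt).strip()
        results.append({**row, "prediction": answer})
    return results

# Example with a stub "model" that flags any prompt mentioning "vote".
# In a real run, call_model would wrap an LLM API request.
rows = [
    {"message": "Vote for candidate X on Tuesday!"},
    {"message": "Your package has shipped."},
]
stub_model = lambda prompt: "True" if "vote" in prompt.lower() else "False"
predictions = run_evaluation(rows, stub_model)
```

The Evaluations UI handles this loop (plus batching, retries, and writing results back to your repo) for you; the sketch only shows the core templating idea.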