Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Evaluations simplify the process, letting you quickly run a prompt through an entire dataset.
Once you're happy with the results, write the output dataset to a new file, another branch, or directly as a new commit.
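Conceptually, an evaluation is a map over your dataset: fill a prompt template from each row, query a model, and collect the responses as a new column. Below is a minimal sketch of that loop, assuming a pandas DataFrame and a `query_model` callable standing in for whatever inference API you use; it illustrates the idea and is not the actual Oxen implementation.

```python
# Minimal sketch of an evaluation loop (not the Oxen implementation):
# fill a prompt template from each row, query a model, and write the
# augmented dataset to a new file that can then be committed.
from typing import Callable

import pandas as pd


def run_evaluation(
    df: pd.DataFrame,
    prompt_template: str,
    query_model: Callable[[str], str],  # stand-in for any inference API
    output_path: str,
) -> pd.DataFrame:
    # {column} placeholders in the template are filled from each row's values.
    prompts = [prompt_template.format(**row) for row in df.to_dict("records")]
    out = df.copy()
    out["response"] = [query_model(p) for p in prompts]
    out.to_csv(output_path, index=False)  # new file, ready to add and commit
    return out


# Usage with a dummy model so the sketch runs end to end:
images = pd.DataFrame({"image": ["cat.png", "dog.png"]})
run_evaluation(
    images,
    "What is in the image?\n{image}",
    lambda prompt: "a placeholder answer",
    "results.csv",
)
```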
[Embedded evaluation samples: two 5-row sample runs of the prompt "What is in the image? {image}" through Fireworks AI's Qwen2 VL 72B Instruct (image → text).]
[Embedded evaluation: "Generate questions", running the prompt "{image} What is a question someone might ask about this image? Answer with only the question." through Fireworks AI's Qwen2 VL 72B Instruct (image → text) over the main branch.]
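For the "Generate questions" example above, each row boils down to one model call: the row's image plus the question-generation instruction. Here is a hedged sketch of a single such call, assuming Fireworks AI's OpenAI-compatible chat endpoint; the model slug, the placeholder image URL, and the exact message shape are assumptions and may differ from what Evaluations actually sends.

```python
# One hypothetical model call for the "Generate questions" prompt, using
# Fireworks AI's OpenAI-compatible chat endpoint. The model ID is an
# assumption; check Fireworks' model catalog for the exact slug.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",
    api_key=os.environ["FIREWORKS_API_KEY"],
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/qwen2-vl-72b-instruct",  # assumed slug
    messages=[
        {
            "role": "user",
            "content": [
                # Placeholder image URL; in an evaluation this comes from the row.
                {"type": "image_url", "image_url": {"url": "https://example.com/row_image.jpg"}},
                {
                    "type": "text",
                    "text": "What is a question someone might ask about this image? "
                    "Answer with only the question.",
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```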