Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or benchmarking performance, Oxen Evaluations simplify the process, letting you quickly run a prompt over an entire dataset.
Once you're happy with the results, write the output dataset to a new file, another branch, or directly as a new commit.
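Conceptually, an evaluation renders a prompt template against every row of the dataset, substituting each `{placeholder}` with that row's column value, and collects the model's responses. A minimal sketch of that loop is below; `call_model` is a hypothetical stand-in for whatever inference API the run targets, and the sample `rows` list stands in for your dataset (Oxen's UI handles all of this for you):

```python
# Minimal sketch of running a prompt template over a dataset.
# `rows` stands in for your dataset; `call_model` would be the
# model provider's inference call (hypothetical, commented out here).

PROMPT = (
    "{uuid} Imagine you are a {role} what is a question you would ask "
    "about {product} to see if its a good fit for your job?"
)

def render_prompt(template: str, row: dict) -> str:
    """Substitute each {placeholder} with the matching column value."""
    return template.format(**row)

def run_evaluation(rows: list[dict], template: str = PROMPT) -> list[dict]:
    results = []
    for row in rows:
        prompt = render_prompt(template, row)
        # row["response"] = call_model(prompt)  # hypothetical inference call
        results.append({**row, "prompt": prompt})
    return results

# Example dataset with the columns the template expects.
rows = [{"uuid": "123", "role": "data engineer", "product": "Oxen"}]
out = run_evaluation(rows)
```

Each result keeps the original row's columns alongside the rendered prompt, so the output can be committed back as a new version of the dataset.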
Example evaluations:

Product question generation (04f75190-395f-4b43-b480-2132b79b7576)
  1000 rows completed · Mathias Barragan · 1 week ago
  Prompt: {uuid} Imagine you are a {role} what is a question you would ask about {product} to see if its a good fit for your job?
  4 iterations · 302,190 tokens · $0.2720
  text → text · Fireworks AI / Llama v3.1 70B Instruct
  Source: Llama-v3.1-70B

Question about product gen (b3783115-fc9e-42a6-949e-1a3a83455326)
  1000 rows completed · Mathias Barragan · 1 week ago
  Prompt: {uuid} Imagine you are a {role} what is a question you would ask about {product} to see if its a good fit for your job?
  2 iterations · 303,331 tokens · $0.2730
  text → text · Fireworks AI / Llama v3.1 70B Instruct
  Source: Llama-v3.1-70B

Llama v.3.1 70B (71cee76a-32a5-433e-84f1-f34d2d5c780e)
  1000 rows completed · Mathias Barragan · 2 weeks ago
  Prompt: {uuid} Imagine you are a {role} what is the most necessary software product you need for your job? Just answer with the product. No extra words or descriptions.
  4 iterations · 72,965 tokens · $0.0657
  text → text · Fireworks AI / Llama v3.1 70B Instruct
  Source:
  Target:
  Llama-v3.1-70B