Evaluations
Run models against your data
Introducing Evaluations, a feature for testing and comparing AI models against your own datasets. Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplifies the process, letting you quickly run a prompt through an entire dataset. Once you're happy with the results, write the output dataset to a new file, to another branch, or directly as a new commit.
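Conceptually, an evaluation run fills a prompt template (e.g. one containing a {text} placeholder) from each row of the dataset, sends it to the chosen model, and writes the completion into a target column. Here is a minimal sketch of that loop, not the actual Oxen implementation; the `call_model` callable and column names are hypothetical stand-ins for the provider call and your dataset's schema:

```python
def run_evaluation(rows, prompt_template, call_model,
                   source_col="text", target_col="sentiment"):
    """Run a prompt template over every row, writing model output to target_col.

    rows            -- list of dicts, one per dataset row
    prompt_template -- string with a {text} placeholder
    call_model      -- callable taking a prompt string, returning the completion
    """
    results = []
    for row in rows:
        # Fill the template from this row's source column.
        prompt = prompt_template.format(text=row[source_col])
        out = dict(row)  # keep the original columns
        out[target_col] = call_model(prompt)
        results.append(out)
    return results

# Example with a stubbed model; a real run would call an LLM provider.
rows = [{"text": "Shares soared after earnings."},
        {"text": "The company missed its targets."}]
template = "What is the sentiment {text}"
fake_model = lambda prompt: "positive" if "soared" in prompt else "negative"
print(run_evaluation(rows, template, fake_model))
```

The returned rows keep their original columns plus the new target column, which is what gets written out as the result dataset.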
Example evaluations:

- Sentiment (151a40d1-5df5-4a47-b81c-938fd600a6e4): 5 row sample, completed. Run by Bessie, 2 months ago.
  Prompt: "What is the sentiment {text}"
  1 iteration, 486 tokens, $0.0002. Maps text → text with OpenAI/GPT-4o mini.

- GPT-4o Sentiment Analysis (74c32628-437f-422d-8fa5-70cea1ffa195): 2,000 rows, completed. Run by Mathias Barragan, 2 months ago.
  Prompt: "Compute the sentiment of the text based on the how well the companies are performing in the market. Return only one of three options: positive, negative or neutral. Respond with one word all lowercase. Text: {text}"
  1 iteration, 154,533 tokens, $0.4013. Maps text → text with OpenAI/GPT-4o.

- Sentiment Analysis (1a60e9eb-2a97-4eb4-ac0c-95f78eda3af0): 2,000 rows, error. Run by Mathias Barragan, 2 months ago, with the same prompt, model, token count, and cost as the run above.