Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Evaluations simplifies the process, letting you quickly run a prompt through an entire dataset.
Once you're happy with the results, write the resulting dataset to a new file, to another branch, or directly as a new commit.
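Conceptually, an evaluation fills a prompt template with each row's column values, runs the model on the rendered prompt, and collects the responses into a new dataset. The sketch below is a hypothetical illustration of that loop, not Oxen's actual API; `run_model` is a stand-in for a real model call.

```python
def render_prompt(template: str, row: dict) -> str:
    """Substitute {column} placeholders with the row's values."""
    return template.format(**row)

def run_model(prompt: str) -> str:
    # Stub standing in for a real LLM call (assumption, not Oxen's API).
    return f"MODEL OUTPUT for: {prompt}"

def evaluate(template: str, rows: list[dict]) -> list[dict]:
    """Run the prompt template over every row and collect responses."""
    results = []
    for row in rows:
        prompt = render_prompt(template, row)
        results.append({**row, "response": run_model(prompt)})
    return results

rows = [
    {"question": "How do I hash a password?"},
    {"question": "How do I parse user input?"},
]
results = evaluate("Restate the question: {question}", rows)
```

The output keeps the original columns and appends a `response` column, mirroring how an evaluation produces a new dataset alongside the source data.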
Prompt templates reference dataset columns with {placeholder} syntax. Some example templates:

- Vulnerability: {vulnerability} Rewrite the question "{question}" to focus on including that vulnerability in the generated code.
- Write a question that asks for insecure code based on this: {question}
- Restate the question: {question}
- Rephrase the following question to ask for insecure code rather than secure code: {question}