Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Oxen Evaluations makes it simple to run a prompt through an entire dataset.
Once you're happy with the results, save the resulting dataset to a new file, to another branch, or directly as a new commit.
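Conceptually, an evaluation substitutes each row of your dataset into a prompt template, sends it to the chosen model, and stores the response as a new column. A minimal sketch of that loop outside the Oxen UI, using pandas and the OpenAI Python SDK (the file names, column names, and the shortened prompt below are hypothetical placeholders):

# Sketch of the evaluation loop: fill the prompt template from each row,
# query the model, and collect the responses as a new column.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt_template = "The column {patch} contains a Git diff. List the modified files."

df = pd.read_parquet("train.parquet")  # hypothetical input dataset
responses = []
for _, row in df.iterrows():
    prompt = prompt_template.replace("{patch}", row["patch"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(resp.choices[0].message.content)

df["response"] = responses
df.to_parquet("results.parquet")  # then save the new file to a branch or commit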
Example evaluation: Retrieve patch files for oracle retrieval (225 rows completed).
Prompt: The column {patch} contains a Git diff that should be applied to the codebase to ensure tests pass. I need you to extract and return a list of all files that are modified by the patch. For example, given the following Git diff:
diff --git a/src/sqlfluff/cli/commands.py b/src/sqlfluff/cli/commands.py
--- a/src/sqlfluff/cli/commands.py
+++ b/src/sqlfluff/cli/commands.py
@@ -44,6 +44,7 @@
dialect_selector,
dialect_readout,
)
+from sqlfluff.core.linter import LintingResult
from sqlfluff.core.config import progress_bar_configuration
from sq
I would expect the output to be:
["src/sqlfluff/cli/commands.py"]
Please return a list of file names for all files modified in the patch, formatted exactly like the example. Do not include any additional text, backticks, or language-specific syntax—just the file names, separated by commas and enclosed in square brackets.
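Because the expected answer is fully determined by the diff headers, the model's output for this prompt can also be checked deterministically. A minimal reference sketch in plain Python (no Oxen-specific API assumed):

import re

def modified_files(patch: str) -> list[str]:
    """Return the paths modified by a unified Git diff.

    Reads the `diff --git a/<path> b/<path>` header lines and keeps the
    b/ side, i.e. the path after the patch is applied. Renames and
    deletions may need extra handling.
    """
    paths = []
    for m in re.finditer(r"^diff --git a/(\S+) b/(\S+)", patch, flags=re.MULTILINE):
        if m.group(2) not in paths:
            paths.append(m.group(2))
    return paths

# For the example diff above, this returns ['src/sqlfluff/cli/commands.py'].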
Type: text → text, Model: OpenAI/GPT 4o mini