History
Total running cost: $0.0314
Each entry below lists the run kind, the prompt, and its metadata: rows, type, model/target, status, runtime, run date, user, tokens, and cost.
Run
You are an expert in NLP and prompt analysis. Your task is to evaluate a **single user prompt** based on predefined categories and return structured JSON data for easier post-processing.
---
1. Topic Selection
Select up to 3 topics that are most relevant to the prompt from the following list:
["Healthcare", "Finance", "Education", "Technology", "Science", "Politics", "Environment", "Ethics", "Entertainment", "History", "Philosophy", "Psychology", "Sports", "Legal", "Business", "Travel", "Food", "Art", "Literature", "Personal Development", "Programming"]
The first topic should be the most dominant in the prompt.
The second and third topics should reflect other significant themes in the discussion.
If a conversation only has one or two clear topics, set the remaining topics to null.
2. Language Style
"Formal"
"Informal"
"Mixed"
3. Grammar & Slang in User Input
"Perfect" (No mistakes, professional style)
"Minor Errors" (Small grammar/spelling mistakes, but understandable)
"Major Errors" (Frequent grammar mistakes, difficult to read)
"Contains Slang" (Uses informal slang expressions)
4. Type of Instruction Given to Assistant
Choose one category that best describes what the user is asking the assistant to do.
Content Generation → User asks for creative content, including writing, design ideas, or brainstorming responses.
Code Generation → User asks for code generation, refinement, or summarization.
Factual Inquiry → User requests objective facts, statistics, or comparisons with clear, verifiable answers.
Opinion-Seeking → User explicitly asks for subjective input, recommendations, or an evaluative stance.
Task-Oriented → User asks for structured assistance, edits, refinements, or summarization of existing content.
Conversational Engagement → User initiates casual, open-ended dialogue with no clear task or goal.
Output Format
Return structured JSON output in this format:
{
  "topic": ["Art", "Healthcare", null],
  "language_style": "Formal",
  "grammar_slang": "Perfect",
  "instruction_type": "Content Generation"
}
Instructions
Analyze the prompt.
Select the 3 most relevant topics, ordered by prominence in the conversation; fill any empty slots with null.
Ensure responses use only predefined options for consistency in post-processing.
Do not add explanations—only return JSON.
Now, analyze the following prompt:
{prompt}

Rows: 23110 | Type: text → text | Model/Target: N/A | Status: error | Runtime: ... | Run: 2 weeks ago | By: holodorum | Tokens: 22852 | Cost: $0.0000
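The "easier post-processing" the prompt aims at can be sketched as a small validator that checks each model response against the predefined options. The option sets below are copied from the prompt; the function name `validate_response` is illustrative, not part of the original setup.

```python
import json

# Allowed options, copied verbatim from the prompt above.
TOPICS = {"Healthcare", "Finance", "Education", "Technology", "Science",
          "Politics", "Environment", "Ethics", "Entertainment", "History",
          "Philosophy", "Psychology", "Sports", "Legal", "Business", "Travel",
          "Food", "Art", "Literature", "Personal Development", "Programming"}
LANGUAGE_STYLES = {"Formal", "Informal", "Mixed"}
GRAMMAR_SLANG = {"Perfect", "Minor Errors", "Major Errors", "Contains Slang"}
INSTRUCTION_TYPES = {"Content Generation", "Code Generation", "Factual Inquiry",
                     "Opinion-Seeking", "Task-Oriented", "Conversational Engagement"}

def validate_response(raw: str) -> dict:
    """Parse a model response and check every field against the predefined options."""
    data = json.loads(raw)  # JSONDecodeError (a ValueError) on non-JSON output
    topics = data["topic"]
    if len(topics) != 3:
        raise ValueError(f"expected 3 topic slots, got {len(topics)}")
    for t in topics:
        if t is not None and t not in TOPICS:  # JSON null maps to Python None
            raise ValueError(f"unknown topic: {t!r}")
    if data["language_style"] not in LANGUAGE_STYLES:
        raise ValueError(f"unknown language style: {data['language_style']!r}")
    if data["grammar_slang"] not in GRAMMAR_SLANG:
        raise ValueError(f"unknown grammar/slang label: {data['grammar_slang']!r}")
    if data["instruction_type"] not in INSTRUCTION_TYPES:
        raise ValueError(f"unknown instruction type: {data['instruction_type']!r}")
    return data
```

Running such a check over a batch before aggregation catches any row where the model drifted off the predefined options, instead of letting it silently corrupt downstream counts.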
Sample
(Same prompt as the run above.)

Rows: 5 | Type: text → text | Model/Target: Sample - N/A | Status: completed | Runtime: 00:00:11 | Run: 2 weeks ago | By: holodorum | Tokens: 4846 | Cost: $0.0003
Sample
(Same prompt as the run above.)

Rows: 5 | Type: text → text | Model/Target: Sample - N/A | Status: cancelled | Runtime: ... | Run: 2 weeks ago | By: holodorum | Tokens: 4440 | Cost: $0.0311