Evaluations / Categories, Sentiment, and language
Branch: main
Dataset: ultrachat_200k_test_sft.parquet (text)
Model: OpenAI / GPT-4o mini
Output column: prompt_prediction
You are an expert in NLP and prompt analysis. Your task is to evaluate a **single user prompt** based on predefined categories and return structured JSON data for easier post-processing.

---

### **Input Format**

You will receive a **single user prompt**. Analyze the given prompt only and classify it according to the categories below.
### **1. Primary Topic (Single Choice)**

Select the most relevant topic from the following list:

["Healthcare", "Finance", "Education", "Technology", "Science", "Politics", "Environment", "Ethics", "Entertainment", "History", "Philosophy", "Psychology", "Sports", "Legal", "Business", "Travel", "Food", "Art", "Literature", "Personal Development"]

### **2. Language Style**

Select the option that best matches the prompt's register:

    "Formal"
    "Informal"
    "Mixed"

### **3. Grammar & Slang in User Input**

Select the option that best describes the writing quality:

    "Perfect" (No mistakes, professional style)
    "Minor Errors" (Small grammar/spelling mistakes, but understandable)
    "Major Errors" (Frequent grammar mistakes, difficult to read)
    "Contains Slang" (Uses informal slang expressions)

### **4. Type of Instruction Given to Assistant**

Choose one category that best describes what the user is asking the assistant to do.

    Content Generation → User asks for creative content, including writing, design ideas, or brainstorming responses.
        Example: "Create a t-shirt design about animal rights."
        Example: "Write a short sci-fi story."
        Example: "Generate ideas for a marketing slogan."

    Factual Inquiry → User requests objective facts, statistics, or comparisons with clear, verifiable answers.
        Example: "What are the top 5 largest animal rights organizations?"
        Example: "Give me statistics on deforestation and animal extinction."
        Example: "Compare the environmental impact of cotton vs. synthetic fabrics."

    Opinion-Seeking → User explicitly asks for subjective input, recommendations, or an evaluative stance.
        Example: "What’s your opinion on using synthetic leather?"
        Example: "Do you think my t-shirt design idea is effective?"
        Example: "What’s the best way to convince people to care about animal rights?"

    Task-Oriented → User asks for structured assistance, edits, refinements, or summarization of existing content.
        Example: "Summarize the key points from this discussion."
        Example: "Improve my t-shirt design by making it more dynamic."
        Example: "Make my speech more persuasive."

    Conversational Engagement → User initiates casual, open-ended dialogue with no clear task or goal.
        Example: "What do you think about animal welfare?"
        Example: "Tell me something interesting about t-shirts!"
        Example: "Let’s chat about animal rights history."

### **Output Format**

Return structured JSON output in this format:

{
  "topic": "Art",
  "language_style": "Formal",
  "grammar_slang": "Perfect",
  "instruction_type": "Content Generation"
}

### **Instructions**

    Analyze only the first user message.
    Select only one topic (most relevant).
    Use only predefined options for consistency.
    Do not add explanations—only return JSON.

Now, analyze the following prompt:

{{prompt}}
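Because the prompt constrains every field to a closed vocabulary, the `prompt_prediction` column can be validated mechanically before analysis. A minimal sketch of such a check — the field names and allowed values are copied from the prompt above; the function name and error handling are illustrative, not part of the tool:

```python
import json

# Allowed values, copied verbatim from the prompt's predefined categories.
TOPICS = {
    "Healthcare", "Finance", "Education", "Technology", "Science", "Politics",
    "Environment", "Ethics", "Entertainment", "History", "Philosophy",
    "Psychology", "Sports", "Legal", "Business", "Travel", "Food", "Art",
    "Literature", "Personal Development",
}
STYLES = {"Formal", "Informal", "Mixed"}
GRAMMAR = {"Perfect", "Minor Errors", "Major Errors", "Contains Slang"}
INSTRUCTION_TYPES = {
    "Content Generation", "Factual Inquiry", "Opinion-Seeking",
    "Task-Oriented", "Conversational Engagement",
}

def validate_prediction(raw: str) -> dict:
    """Parse one model response and check every field against the allowed options."""
    data = json.loads(raw)  # raises json.JSONDecodeError if the model added prose
    checks = {
        "topic": TOPICS,
        "language_style": STYLES,
        "grammar_slang": GRAMMAR,
        "instruction_type": INSTRUCTION_TYPES,
    }
    for field, allowed in checks.items():
        if data.get(field) not in allowed:
            raise ValueError(f"{field!r} has unexpected value {data.get(field)!r}")
    return data
```

Rows that fail the check (extra prose around the JSON, an invented topic, a misspelled option) can then be re-run or dropped rather than silently skewing the category counts.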
Mar 16, 2025, 11:52 AM UTC
10 rows processed, 6965 tokens used ($0.0012)
Estimated cost for all 23110 rows: $2.83
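The full-dataset figure follows from a simple linear extrapolation of the sample run. A quick sketch of the arithmetic — note that naive cost scaling lands slightly below the displayed $2.83, presumably because the tool extrapolates per-row token counts before pricing rather than scaling the rounded sample cost:

```python
sample_cost_usd = 0.0012  # cost of the 10-row sample run
sample_rows = 10
total_rows = 23110

# Naive linear extrapolation from sample cost to the full dataset.
estimate = sample_cost_usd / sample_rows * total_rows
print(f"${estimate:.2f}")  # ≈ $2.77, in the same ballpark as the UI's $2.83
```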