Mistral AI / Ministral 3B
Modality: text → text
Input: $0.04 / Output: $0.04 (per 1M tokens)
Ministral 3B is a 3B parameter LLM optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.
| Metric | Value |
|---|---|
| Parameter Count | 3 billion |
| Mixture of Experts | No |
| Context Length | 128,000 tokens |
| Multilingual | Yes |
| Quantized* | Unknown |
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.
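As a quick sanity check on the per-1M-token pricing above, here is a minimal sketch of the cost arithmetic. The function name and token counts are hypothetical, chosen only to illustrate how the listed rates translate to dollars per request.

```python
# Rough cost estimate from per-1M-token prices, as listed on this page.
# Token counts below are hypothetical, for illustration only.

def inference_cost(input_tokens: int, output_tokens: int,
                   input_price_per_m: float, output_price_per_m: float) -> float:
    """Return the dollar cost of a request billed at per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Ministral 3B at $0.04 input / $0.04 output:
# e.g. 500k input tokens plus 200k output tokens
cost = inference_cost(500_000, 200_000, 0.04, 0.04)
print(f"${cost:.3f}")  # $0.028
```

At these rates even long-context workloads stay inexpensive; a full 128,000-token input costs about half a cent.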
Mistral AI models available on Oxen.ai
| Model | Inference provider | Input modality | Output modality | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|---|---|---|
|  |  | text | text | $0.20 | $0.60 |
|  |  | text | text | $0.30 | $0.90 |
|  |  | text | text | $0.04 | $0.04 |
|  |  | text | text | $0.10 | $0.10 |
|  |  | text | text | $0.25 | $0.25 |
|  |  | text | text | $2.00 | $6.00 |
|  |  | text | text | $0.15 | $0.15 |
|  |  | text | text | $0.20 | $0.60 |
|  |  | text | text | $2.00 | $6.00 |
|  |  | text | text | $0.70 | $0.70 |
|  |  | text | text | $0.15 | $0.15 |