Ministral 3B
Mistral AI / Ministral 3B
Modality: text input / text output
Price (per 1M tokens): Input $0.04 / Output $0.04

Ministral 3B is a 3B parameter LLM optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.
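As an illustration of how Ministral 3B's function-calling might be exercised, here is a minimal sketch assuming an OpenAI-compatible chat completions endpoint. The base URL, model identifier, and tool schema below are placeholders for illustration, not a documented Oxen.ai or Mistral AI interface.

```python
import os
import requests

# Hypothetical OpenAI-compatible endpoint; substitute the base URL and
# model identifier used by your inference provider.
BASE_URL = "https://example.com/api/chat/completions"
API_KEY = os.environ["API_KEY"]

# A single tool definition so the model can answer via function-calling.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "ministral-3b",  # placeholder model ID
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

response = requests.post(
    BASE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
# The assistant message should contain a tool call to get_weather.
print(response.json()["choices"][0]["message"])
```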

Parameter Count: 3 billion
Mixture of Experts: No
Context Length: 128,000 tokens
Multilingual: Yes
Quantized*: Unknown

*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.
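To make the per-token pricing concrete, the sketch below estimates the cost of a single request at the listed $0.04 per 1M tokens for both input and output; the token counts are made-up example values.

```python
# Rough cost estimate at Ministral 3B's listed rates ($ per 1M tokens).
INPUT_PRICE_PER_M = 0.04
OUTPUT_PRICE_PER_M = 0.04

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request given its token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 50k-token prompt (well within the 128k context window)
# with a 2k-token completion costs about $0.00208.
print(f"${request_cost(50_000, 2_000):.5f}")
```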

Mistral AI models available on Oxen.ai
Inference provider | Modality (input / output) | Price per 1M tokens (input / output)
Mistral AI | text / text | $0.20 / $0.60
Mistral AI | text / text | $0.30 / $0.90
Mistral AI | text / text | $0.04 / $0.04 (Ministral 3B)
Mistral AI | text / text | $0.10 / $0.10
Mistral AI | text / text | $0.25 / $0.25
Mistral AI | text / text | $2.00 / $6.00
Mistral AI | text / text | $0.15 / $0.15
Mistral AI | text / text | $0.20 / $0.60
Mistral AI | text / text | $2.00 / $6.00
Mistral AI | text / text | $0.70 / $0.70
Mistral AI | text / text | $0.15 / $0.15
See all models available on Oxen.ai