Mistral AI / Ministral 3B
Modalities: text input → text output
Input: $0.04 / Output: $0.04 (per 1M tokens)
Ministral 3B is a Small Language Model (SLM) designed for edge computing and on-device use cases. It excels in efficient processing, low-latency performance, and function-calling capabilities, making it suitable for applications on mobile devices and laptops.
Other noteworthy capabilities of Ministral 3B include input parsing, task routing, and API calling, all at minimal latency and operational cost.
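As a sketch of the function-calling use case mentioned above, the snippet below builds a chat request body in the OpenAI-style `tools` format that Mistral's chat API accepts. The `get_weather` function, its parameters, and the `ministral-3b-latest` model id are illustrative assumptions, not details taken from this model card.

```python
import json

# Hypothetical tool schema for function calling. The function name and
# parameters are illustrative only.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A chat request routing one user message to the model with the tool attached.
# "ministral-3b-latest" is an assumed model id; check your provider's catalog.
request_body = {
    "model": "ministral-3b-latest",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [weather_tool],
}

print(json.dumps(request_body, indent=2))
```

The model is expected to respond with a structured tool call (the function name plus JSON arguments) rather than free text, which is what makes small models like this practical for on-device task routing.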
| Metric | Value |
|---|---|
| Parameter Count | 3 billion |
| Mixture of Experts | No |
| Context Length | 128,000 tokens |
| Multilingual | Yes |
| Quantized* | Unknown |
*Quantization is specific to the inference provider and the model may be offered with different quantization levels by other providers.
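At the listed rate of $0.04 per 1M tokens for both input and output, per-request cost is simple arithmetic. The helper below is a minimal sketch (the function name and example token counts are illustrative):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_price_per_m: float = 0.04,
                     output_price_per_m: float = 0.04) -> float:
    """Estimate the cost of one request at Ministral 3B's listed rates
    ($0.04 per 1M tokens for both input and output)."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost_usd(2_000, 500):.6f}")  # $0.000100
```

At these prices, a million such requests would cost on the order of $100, which is the economics that make SLMs attractive for high-volume routing and parsing workloads.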
Mistral AI models available on Oxen.ai
| Model | Inference provider | Input modality | Output modality | Input price ($/1M tokens) | Output price ($/1M tokens) |
|---|---|---|---|---|---|
| | Mistral AI | text | text | $0.20 | $0.60 |
| | Mistral AI | text | text | $0.04 | $0.04 |
| | Mistral AI | text | text | $0.10 | $0.10 |
| | Mistral AI | text | text | $0.25 | $0.25 |
| | Mistral AI | text | text | $2.00 | $6.00 |
| | Mistral AI | text | text | $0.15 | $0.15 |
| | Mistral AI | text | text | $0.20 | $0.60 |
| | Mistral AI | text | text | $2.00 | $6.00 |
| | Mistral AI | text | text | $0.70 | $0.70 |
| | Mistral AI | text | text | $0.15 | $0.15 |