Mistral AI / Ministral 8B
Released: 10/16/2024
Input: $0.10 / Output: $0.10 per 1M tokens
Ministral 8B is a Small Language Model (SLM) designed for edge computing and on-device applications. It excels in efficient inference, knowledge retrieval, common-sense reasoning, and multilingual understanding, making it ideal for low-latency tasks and resource-constrained environments.
Other noteworthy features of Ministral 8B include native function calling support and interleaved sliding-window attention for faster, more memory-efficient processing.
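Because Ministral 8B supports native function calling, it can be driven through any OpenAI-compatible chat completions endpoint that accepts tool definitions. The sketch below is a minimal illustration under that assumption, not an Oxen.ai-specific recipe: the base URL, API key, model identifier, and the `get_weather` tool are placeholders you would replace with your provider's actual values.

```python
from openai import OpenAI

# Placeholder endpoint and credentials -- substitute the values from your
# inference provider's documentation.
client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_API_KEY")

# A single illustrative tool the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="ministral-8b",  # placeholder model id
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decided to call the tool, the call arrives as a function name
# plus JSON-encoded arguments.
print(response.choices[0].message.tool_calls)
```

If the model elects to call the tool, your application runs the function and sends the result back in a follow-up `tool` message so the model can compose its final answer.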
| Metric | Value |
| --- | --- |
| Parameter Count | 8 billion |
| Mixture of Experts | No |
| Context Length | 128,000 tokens |
| Multilingual | Yes |
| Quantized* | Unknown |
*Quantization is specific to the inference provider; the model may be offered at different quantization levels by other providers.
Mistral AI models available on Oxen.ai
| Model | Inference provider | Modality (input) | Modality (output) | Input price (per 1M tokens) | Output price (per 1M tokens) |
| --- | --- | --- | --- | --- | --- |
|  | Mistral AI | text | text | $0.20 | $0.60 |
|  | Mistral AI | text | text | $0.04 | $0.04 |
| Ministral 8B | Mistral AI | text | text | $0.10 | $0.10 |
|  | Mistral AI | text | text | $0.25 | $0.25 |
|  | Mistral AI | text | text | $2.00 | $6.00 |
|  | Mistral AI | text | text | $0.15 | $0.15 |
|  | Mistral AI | text | text | $0.20 | $0.60 |
|  | Mistral AI | text | text | $2.00 | $6.00 |
|  | Mistral AI | text | text | $0.70 | $0.70 |
|  | Mistral AI | text | text | $0.15 | $0.15 |