Mistral AI / Mistral Small
Released: 9/17/2024
Input: $0.20 / Output: $0.60
Mistral Small is an LLM designed for low-latency workloads. It excels in retrieval-augmented generation (RAG), coding tasks, and multilingual support.
Notable features include a context window of up to 32,000 tokens and strong performance in multiple languages, including French, German, Spanish, Italian, and English.
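As an illustration of how a low-latency chat model like this is typically used, the sketch below sends a single request through an OpenAI-compatible chat-completions endpoint. The base URL, model identifier (`mistral-small-latest`), and environment variable names are assumptions rather than details from this page; substitute the values used by your inference provider.

```python
import os
import requests

# Assumed endpoint and model id; swap in your provider's values.
BASE_URL = os.environ.get("LLM_BASE_URL", "https://api.mistral.ai/v1")
MODEL_ID = "mistral-small-latest"  # assumed identifier

def chat(prompt: str, max_tokens: int = 256) -> str:
    """Send one chat-completion request and return the reply text."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # A RAG-style prompt: retrieved passages would be appended to the question.
    print(chat("Summarize the retrieved passages below in French: ..."))
```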
| Metric | Value |
|---|---|
| Parameter Count | 24 billion |
| Mixture of Experts | No |
| Context Length | 32,000 tokens |
| Multilingual | Yes |
| Quantized* | No |
*Quantization is specific to the inference provider; other providers may offer this model at different quantization levels.
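To make the per-token pricing concrete, here is a minimal sketch that estimates the cost of a single request from its input and output token counts, using the $0.20 / $0.60 per 1M token rates listed above. The example token counts are made up, and the check assumes the 32,000-token context window covers prompt plus completion.

```python
# Prices listed above, in dollars per 1M tokens.
INPUT_PRICE_PER_M = 0.20
OUTPUT_PRICE_PER_M = 0.60
CONTEXT_LIMIT = 32_000  # assumed to cover prompt + completion tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    if input_tokens + output_tokens > CONTEXT_LIMIT:
        raise ValueError("Request exceeds the 32,000-token context window")
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 20,000-token RAG prompt with a 1,000-token answer.
# 20,000 * $0.20/1M + 1,000 * $0.60/1M = $0.004 + $0.0006 = $0.0046
print(f"${estimate_cost(20_000, 1_000):.4f}")
```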
Mistral AI models available on Oxen.ai
| Model | Inference provider | Input modality | Output modality | Input price (per 1M tokens) | Output price (per 1M tokens) |
|---|---|---|---|---|---|
|  |  | text | text | $0.20 | $0.60 |
|  |  | text | text | $0.30 | $0.90 |
|  |  | text | text | $0.04 | $0.04 |
|  |  | text | text | $0.10 | $0.10 |
|  |  | text | text | $0.25 | $0.25 |
|  |  | text | text | $2.00 | $6.00 |
|  |  | text | text | $0.15 | $0.15 |
|  |  | text | text | $0.20 | $0.60 |
|  |  | text | text | $2.00 | $6.00 |
|  |  | text | text | $0.70 | $0.70 |
|  |  | text | text | $0.15 | $0.15 |