mistral-small-2503
Mistral Small 3.1 is a 24-billion-parameter multimodal LLM designed for a wide range of generative AI tasks.
It excels in instruction following, conversational assistance, image understanding, and function calling, while being lightweight enough to run on a single RTX 4090 or a Mac with 32GB RAM when quantized.
Other noteworthy features of Mistral Small 3.1 include fast conversational responses, low-latency function calling, and suitability for fine-tuning on specialized domains such as legal advice, medical diagnostics, and technical support.
| Metric | Value |
|---|---|
| Parameter Count | 24 billion |
| Mixture of Experts | No |
| Context Length | 128,000 tokens |
| Multilingual | Yes |
| Quantized* | Yes |
| Precision* | Unknown |
Example request
- Minimal
- Basic parameters
- All parameters
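A minimal request can be sketched in Python as follows. Note that the endpoint URL, the `MISTRAL_API_KEY` environment variable, and the optional sampling parameters shown are assumptions based on common chat-completion APIs, not details confirmed by this page; check the provider's API reference for the exact schema.

```python
import json
import os
import urllib.request

# Hypothetical endpoint and auth scheme -- substitute the provider's real values.
API_URL = "https://api.example.com/v1/chat/completions"

payload = {
    "model": "mistral-small-2503",
    "messages": [
        {"role": "user", "content": "Summarize the benefits of function calling."}
    ],
    # Optional sampling parameters (the "Basic parameters" variant above).
    "temperature": 0.7,
    "max_tokens": 256,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
    },
)
# response = urllib.request.urlopen(request)  # uncomment to actually send it
```

The "Minimal" variant omits the sampling parameters entirely; "All parameters" would additionally set fields such as stop sequences or tool definitions as described in the model's request schema.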
Fetch model details
The models endpoint returns the full model object, including its `json_request_schema`.
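A retrieval sketch in Python, assuming a conventional REST `models` endpoint (the base URL here is a placeholder, not the provider's real address):

```python
import json
import urllib.request

# Hypothetical models endpoint -- substitute the provider's real base URL.
MODELS_URL = "https://api.example.com/v1/models/mistral-small-2503"

def parse_model_details(body: bytes) -> dict:
    """Decode the JSON model object returned by the models endpoint."""
    return json.loads(body.decode("utf-8"))

def fetch_model_details(url: str = MODELS_URL) -> dict:
    """GET the full model object, including its json_request_schema."""
    with urllib.request.urlopen(url) as response:
        return parse_model_details(response.read())

# details = fetch_model_details()
# print(details["json_request_schema"])  # schema describing valid request bodies
```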