
Integration

Using MetaMemory with Mistral AI

Mistral AI offers a compelling embedding solution through its mistral-embed model, which produces 1024-dimensional vectors that punch well above their weight in retrieval benchmarks. MetaMemory's integration with Mistral is optimized to leverage this efficiency, making it an excellent choice for cost-conscious deployments that still demand high-quality memory.

The fixed 1024-dimensional output means significantly lower storage and computation costs than models that produce 3072-dimensional vectors. For MetaMemory deployments with millions of memories, this translates to substantial infrastructure savings without a proportional loss in retrieval quality. Mistral embeddings are also particularly strong for European languages, reflecting the company's French origins and training data composition: if your agents serve users across Europe, Mistral embeddings often outperform larger models on French, German, Spanish, and Italian content.

MetaMemory automatically adjusts its similarity thresholds and normalization for the 1024-dimensional vector space, so you get optimal retrieval quality without manual calibration. The integration handles Mistral's API format, batching, and rate limits transparently. Because Mistral also offers strong LLM capabilities, teams that use Mistral for their agent's reasoning can keep their entire stack with a single provider while still benefiting from MetaMemory's multi-vector architecture. The combination of low cost, solid quality, and a unified provider makes Mistral a smart choice for teams optimizing total cost of ownership.
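To make the storage savings concrete, here is a back-of-the-envelope sketch. The float32 (4 bytes per value) encoding and the assumption of four vectors per memory (one per memory type) are illustrative figures, not published MetaMemory internals:

```shell
# Rough raw-storage estimate: float32 vectors, four vectors per memory
# (one per memory type), one million stored memories.
MEMORIES=1000000
VECTORS_PER_MEMORY=4
BYTES_PER_FLOAT=4

for DIMS in 1024 3072; do
  BYTES=$((MEMORIES * VECTORS_PER_MEMORY * DIMS * BYTES_PER_FLOAT))
  GB=$(awk -v b="$BYTES" 'BEGIN { printf "%.1f", b / (1024 ^ 3) }')
  echo "${DIMS}-dim vectors: ${GB} GiB of raw embedding storage"
done
```

At this scale, 1024-dimensional vectors need roughly a third of the raw space of 3072-dimensional ones, before any index overhead or compression.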

Setup Guide

1

Create a Mistral AI API Key

Go to console.mistral.ai and sign up or sign in. Navigate to the API Keys section and click "Create New Key". Mistral offers a free tier with sufficient credits for development and testing. Copy your API key securely — you will need it to configure MetaMemory. Make sure your account has the embedding endpoint enabled, as some older accounts may need to explicitly activate this capability in their settings.
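Before pasting the key into MetaMemory, you can smoke-test it against Mistral's own embeddings endpoint (assuming curl is available locally):

```shell
# Optional smoke test: call Mistral's embeddings endpoint directly to
# confirm the key has embedding access before configuring MetaMemory.
STATUS="skipped"
if [ -n "${MISTRAL_API_KEY:-}" ]; then
  curl -s https://api.mistral.ai/v1/embeddings \
    -H "Authorization: Bearer $MISTRAL_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "mistral-embed", "input": ["connectivity check"]}' \
    && STATUS="ok"
else
  echo "MISTRAL_API_KEY is not set; export it to run the live check."
fi
echo "key check: $STATUS"
```

A successful response returns a JSON body with an embedding array; an error body here usually means the key is invalid or the embedding endpoint is not enabled on your account.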

2

Set Up Mistral in MetaMemory

In the MetaMemory dashboard, go to Settings then Provider Keys and select "Mistral AI" from the dropdown. Paste your API key and confirm mistral-embed as the default model. Note that mistral-embed output is fixed at 1024 dimensions — MetaMemory automatically adapts its vector storage and similarity computation for this dimensionality. Key validation will confirm that your Mistral account has embedding access before the configuration is saved.

3

Store Your First Memory

Use the MetaMemory API to send a text payload for memory storage. The system routes your content through the Mistral embedding endpoint and generates 1024-dimensional vectors for each of the four memory types. Test retrieval with a semantic query to confirm the integration is working — you should see strong relevance scores for related content. Compare the storage size with other providers to see the efficiency gains firsthand.
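As a sketch, a store-then-search flow might look like the following. The `/v1/memories` and `/v1/search` paths, the `agent_id` and `query` fields, and the payload shapes are assumptions for illustration — check the MetaMemory API reference for the exact contract before running the curl calls:

```shell
# Hypothetical store-then-search flow; endpoint paths and field names
# are illustrative, not taken from the official API reference.
STORE_PAYLOAD='{"agent_id": "demo-agent", "content": "User prefers replies in French."}'
SEARCH_PAYLOAD='{"agent_id": "demo-agent", "query": "What language does the user prefer?"}'

echo "$STORE_PAYLOAD"
echo "$SEARCH_PAYLOAD"

# curl -X POST https://api.metamemory.tech/v1/memories \
#   -H "Authorization: Bearer YOUR_METAMEMORY_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$STORE_PAYLOAD"
#
# curl -X POST https://api.metamemory.tech/v1/search \
#   -H "Authorization: Bearer YOUR_METAMEMORY_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$SEARCH_PAYLOAD"
```

If the integration is working, the search response should rank the stored preference highly even though the query shares no exact keywords with it — that is the semantic retrieval the embeddings provide.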

Configuration Example

curl -X POST https://api.metamemory.tech/v1/providers \
  -H "Authorization: Bearer YOUR_METAMEMORY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "mistral",
    "api_key": "YOUR_MISTRAL_API_KEY",
    "default_model": "mistral-embed",
    "settings": {
      "dimensions": 1024,
      "encoding_format": "float"
    }
  }'

Supported Models

mistral-embed (default)

Capabilities

Embeddings, LLM

Ready to use Mistral AI with MetaMemory?

Get started in minutes. Connect your Mistral AI API key and give your agents persistent, intelligent memory.
