evals.models.mistralai
- class MistralAIModel(default_concurrency=20, _verbose=False, _rate_limiter=<factory>, model='mistral-large-latest', temperature=0, top_p=None, random_seed=None, response_format=None, safe_mode=False, safe_prompt=False, initial_rate_limit=5)
Bases: BaseModel
An interface for using MistralAI models.
This class wraps the MistralAI SDK for use with Phoenix LLM evaluations. Calls to the MistralAI API are dynamically throttled when encountering rate limit errors. Requires the mistralai package to be installed.
- Supports Async: ✅
If possible, makes LLM calls concurrently.
- Parameters:
model (str, optional) – The model name to use. Defaults to “mistral-large-latest”.
temperature (float, optional) – Sampling temperature to use. Defaults to 0.0.
top_p (float, optional) – Total probability mass of tokens to consider at each step. Defaults to None.
random_seed (int, optional) – Random seed to use for sampling. Defaults to None.
response_format (Dict[str, str], optional) – A dictionary specifying the format of the response. Defaults to None.
safe_mode (bool, optional) – Whether to enable safe mode, the legacy Mistral client flag for prompt guardrails (superseded by safe_prompt). Defaults to False.
safe_prompt (bool, optional) – Whether to inject Mistral's safety prompt before the conversation. Defaults to False.
initial_rate_limit (int, optional) – The initial internal rate limit in allowed requests per second for making LLM calls. This limit adjusts dynamically based on rate limit errors. Defaults to 5.
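As an illustration, the sampling and formatting parameters above can be combined when constructing the model. A minimal sketch; the parameter values are illustrative rather than recommendations, and the JSON response format shown follows Mistral's chat-completions API:

```python
from phoenix.evals import MistralAIModel

# Illustrative configuration; values are examples, not recommendations.
model = MistralAIModel(
    model="mistral-large-latest",
    temperature=0,                            # greedy, reproducible sampling
    random_seed=42,                           # fix the seed for repeatability
    response_format={"type": "json_object"},  # Mistral's JSON-mode format
    initial_rate_limit=5,                     # starting requests-per-second budget
)
```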
Example

```python
# Get your own Mistral API Key: https://docs.mistral.ai/#api-access
# Set the MISTRAL_API_KEY environment variable

from phoenix.evals import MistralAIModel

model = MistralAIModel(model="mistral-large-latest")
```
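Once constructed, the model can be passed anywhere Phoenix evaluations accept a model, such as llm_classify. A sketch assuming the built-in RAG relevancy template and its rails map from phoenix.evals, with toy data in the template's input/reference columns:

```python
import pandas as pd

from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    MistralAIModel,
    llm_classify,
)

model = MistralAIModel(model="mistral-large-latest")

# Toy data; real evaluations would use retrieved documents and user queries.
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?"],
        "reference": ["Phoenix is an open-source library for LLM observability."],
    }
)

# llm_classify runs the template against each row, calling the Mistral API
# concurrently where possible and throttling on rate-limit errors.
relevance = llm_classify(
    dataframe=df,
    model=model,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    rails=list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values()),
)
print(relevance["label"])
```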