evals.models.openai#

class OpenAIModel(default_concurrency=20, _verbose=False, _rate_limiter=<factory>, api_key=None, organization=None, base_url=None, model='gpt-4', temperature=0.0, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, model_kwargs=<factory>, request_timeout=None, api_version=None, azure_endpoint=None, azure_deployment=None, azure_ad_token=None, azure_ad_token_provider=None, default_headers=None, initial_rate_limit=10, model_name=None)#

Bases: BaseModel

An interface for using OpenAI models.

This class wraps the OpenAI SDK library for use with Phoenix LLM evaluations. Calls to the OpenAI API are dynamically throttled when encountering rate limit errors. Requires the openai package to be installed.

Additionally, OpenAIModel supports the Azure OpenAI API. To use it, provide the azure_endpoint and azure_deployment parameters. You can also provide azure_ad_token or azure_ad_token_provider to authenticate with the Azure OpenAI API.

Supports Async: ✅

If possible, makes LLM calls concurrently.

Parameters:
  • api_key (str, optional) – Your OpenAI key. If not provided, will be read from the environment variable. Defaults to None.

  • organization (str, optional) – The organization to use for the OpenAI API. If not provided, will default to what’s configured in OpenAI. Defaults to None.

  • base_url (str, optional) – An optional base URL to use for the OpenAI API. If not provided, will default to what’s configured in OpenAI. Defaults to None.

  • model (str, optional) – Model name to use. In the case of Azure, this is the deployment name, such as gpt-35-instant. Defaults to “gpt-4”.

  • temperature (float, optional) – What sampling temperature to use. Defaults to 0.0.

  • max_tokens (int | None, optional) – The maximum number of tokens to generate in the completion. To unset this limit, set max_tokens to None. Defaults to 256.

  • top_p (float, optional) – Total probability mass of tokens to consider at each step. Defaults to 1.

  • frequency_penalty (float, optional) – Penalizes repeated tokens according to frequency. Defaults to 0.

  • presence_penalty (float, optional) – Penalizes repeated tokens. Defaults to 0.

  • n (int, optional) – How many completions to generate for each prompt. Defaults to 1.

  • model_kwargs (Dict[str, Any], optional) – Holds any model parameters valid for create call not explicitly specified. Defaults to an empty dictionary.

  • request_timeout (Optional[Union[float, Tuple[float, float]]], optional) – Timeout for requests to the OpenAI completion API. If None, the OpenAI client default of 600 seconds is used. Defaults to None.

  • api_version (str, optional) – The version of the Azure API to use. See https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning. Defaults to None.

  • azure_endpoint (str, optional) – The endpoint to use for Azure OpenAI. Available in the Azure portal. Defaults to None.

  • azure_deployment (str, optional) – The deployment to use for Azure OpenAI. Defaults to None.

  • azure_ad_token (str, optional) – The Azure AD token to use for Azure OpenAI. Defaults to None.

  • azure_ad_token_provider (Callable[[], str], optional) – A callable that returns the Azure AD token to use for Azure OpenAI. Defaults to None.

  • default_headers (Mapping[str, str], optional) – Default headers required by AzureOpenAI. Defaults to None.

  • initial_rate_limit (int, optional) – The initial internal rate limit in allowed requests per second for making LLM calls. This limit adjusts dynamically based on rate limit errors. Defaults to 10.
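
The model_kwargs parameter forwards any additional create-call parameters not covered above. As a hedged sketch (which keys are valid depends on the model and endpoint; seed and logit_bias here are illustrative, not guaranteed to be supported everywhere):

```python
# Hypothetical extra parameters forwarded verbatim to the OpenAI create call.
model_kwargs = {
    "seed": 42,                    # request reproducible sampling where supported
    "logit_bias": {50256: -100},   # strongly discourage a specific token id
}

# OpenAIModel(model="gpt-4", model_kwargs=model_kwargs) would then merge
# these keys into every completion request it makes.
print(sorted(model_kwargs))
```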

Examples

After setting the OPENAI_API_KEY environment variable:

from phoenix.evals import OpenAIModel
model = OpenAIModel(model="gpt-4o")

Using OpenAI models via Azure is similar (after setting the AZURE_OPENAI_API_KEY environment variable):

from phoenix.evals import OpenAIModel
model = OpenAIModel(
    model="gpt-35-turbo-16k",
    azure_endpoint="https://your-endpoint.azure.com/",
    api_version="2023-09-15-preview",
)
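
The azure_ad_token_provider parameter accepts any zero-argument callable returning a bearer token string; in practice this often comes from the azure-identity package's get_bearer_token_provider. A minimal sketch of the callable contract, using a hypothetical static token in place of real credential refresh logic:

```python
# A token provider is just a zero-argument callable returning a str.
# A real provider would fetch and refresh an Azure AD bearer token
# (e.g. via azure.identity); the closure below is a stand-in.
def make_token_provider(get_token):
    def provider() -> str:
        return get_token()
    return provider

provider = make_token_provider(lambda: "example-bearer-token")
print(provider())
```

Passing provider as azure_ad_token_provider=provider to OpenAIModel (alongside azure_endpoint and api_version) would then authenticate each request with the token the callable returns.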
model_name = None#

Deprecated since version 3.0.0.

Use model instead. This attribute will be removed in a future release.