evals.models.openai#
- class AzureOptions(api_version: str, azure_endpoint: str, azure_deployment: str | None, azure_ad_token: str | None, azure_ad_token_provider: Callable[[], str] | None)#
Bases: object
- api_version: str#
- azure_ad_token: str | None#
- azure_ad_token_provider: Callable[[], str] | None#
- azure_deployment: str | None#
- azure_endpoint: str#
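AzureOptions is a plain options container. Below is a hedged sketch of constructing it directly, assuming the module path phoenix.evals.models.openai; the endpoint, deployment, and API version are placeholders, and in typical use OpenAIModel assembles these values from its own Azure parameters.

```python
from phoenix.evals.models.openai import AzureOptions  # assumed module path

# Illustrative values only; no real Azure resource is referenced here.
options = AzureOptions(
    api_version="2024-02-01",
    azure_endpoint="https://my-resource.openai.azure.com/",
    azure_deployment="my-deployment",
    azure_ad_token=None,
    # Or supply a zero-argument callable that returns a fresh Azure AD token:
    azure_ad_token_provider=None,
)
```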
- class OpenAIModel(default_concurrency: int = 20, _verbose: bool = False, _rate_limiter: phoenix.evals.models.rate_limiters.RateLimiter = <factory>, api_key: Optional[str] = None, organization: Optional[str] = None, base_url: Optional[str] = None, model: str = 'gpt-4', temperature: float = 0.0, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, model_kwargs: Dict[str, Any] = <factory>, batch_size: int = 20, request_timeout: Union[float, Tuple[float, float], NoneType] = None, api_version: Optional[str] = None, azure_endpoint: Optional[str] = None, azure_deployment: Optional[str] = None, azure_ad_token: Optional[str] = None, azure_ad_token_provider: Optional[Callable[[], str]] = None, default_headers: Optional[Mapping[str, str]] = None, model_name: Optional[str] = None)#
Bases: BaseModel
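A minimal usage sketch, assuming the public import path phoenix.evals and that instances are callable with a prompt string via the BaseModel interface; all parameter values are illustrative, not recommendations.

```python
import os

from phoenix.evals import OpenAIModel  # assumed public import path

# Minimal sketch: if api_key is omitted, the key is read from OPENAI_API_KEY.
model = OpenAIModel(
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4",
    temperature=0.0,
    max_tokens=256,
    # model_kwargs carries any extra create-call parameters not exposed as
    # fields; "seed" is an illustrative key, not one required by this class.
    model_kwargs={"seed": 42},
)

# Instances are callable with a prompt string (per the BaseModel interface).
print(model("Respond with the single word OK."))
```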
- api_key: str | None = None#
Your OpenAI API key. If not provided, it will be read from the environment variable OPENAI_API_KEY.
- api_version: str | None = None#
The version of the Azure OpenAI API to use. See https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#rest-api-versioning
- azure_ad_token: str | None = None#
- azure_ad_token_provider: Callable[[], str] | None = None#
- azure_deployment: str | None = None#
- azure_endpoint: str | None = None#
The endpoint to use for Azure OpenAI. Available in the Azure portal: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource. A configuration sketch appears at the end of this class reference.
- base_url: str | None = None#
An optional base URL to use for the OpenAI API. If not provided, will default to what’s configured in OpenAI
- batch_size: int = 20#
Batch size to use when passing multiple documents to generate.
- default_headers: Mapping[str, str] | None = None#
Default headers required by AzureOpenAI
- frequency_penalty: float = 0#
Penalizes repeated tokens according to frequency.
- property invocation_params: Dict[str, Any]#
- max_tokens: int = 256#
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.
- model: str = 'gpt-4'#
Model name to use. In the case of Azure, this is the deployment name, such as gpt-35-instant.
- model_kwargs: Dict[str, Any]#
Holds any model parameters valid for the create call that are not explicitly specified.
- model_name: str | None = None#
Deprecated since version 3.0.0.
Use model instead. This field will be removed.
- n: int = 1#
How many completions to generate for each prompt.
- organization: str | None = None#
The organization to use for the OpenAI API. If not provided, will default to what’s configured in OpenAI
- presence_penalty: float = 0#
Penalizes tokens that have already appeared, regardless of frequency.
- property public_invocation_params: Dict[str, Any]#
- reload_client() → None#
- request_timeout: float | Tuple[float, float] | None = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
- property supports_function_calling: bool#
- temperature: float = 0.0#
What sampling temperature to use.
- top_p: float = 1#
Total probability mass of tokens to consider at each step.
- verbose_generation_info() → str#
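For Azure OpenAI, a hedged configuration sketch using the Azure-specific parameters above; the endpoint, deployment, API version, and environment variable name are placeholders, not values mandated by this class.

```python
import os

from phoenix.evals import OpenAIModel  # assumed public import path

# Sketch of an Azure OpenAI configuration; all resource names are illustrative.
azure_model = OpenAIModel(
    model="gpt-35-instant",  # for Azure, this is the deployment name
    api_version="2024-02-01",
    azure_endpoint="https://my-resource.openai.azure.com/",
    azure_deployment="gpt-35-instant",
    # Authenticate with an API key, or instead pass azure_ad_token /
    # azure_ad_token_provider; the environment variable name is illustrative.
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
)
```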