client#

To import Client, use:

from phoenix import Client

class Client(*, endpoint: str | None = None, warn_if_server_not_running: bool = True, headers: Mapping[str, str] | None = None, **kwargs: Any)#

Bases: TraceDataExtractor
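
A minimal construction sketch; the endpoint below is an assumption for illustration (when omitted, the client resolves it from environment variables):

    from phoenix import Client

    # "http://localhost:6006" is an assumed local endpoint; omit it to fall
    # back to environment-based defaults.
    client = Client(endpoint="http://localhost:6006")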

append_to_dataset(*, dataset_name: str, dataframe: DataFrame | None = None, csv_file_path: str | Path | None = None, input_keys: Iterable[str] = (), output_keys: Iterable[str] = (), metadata_keys: Iterable[str] = (), inputs: Iterable[Mapping[str, Any]] = (), outputs: Iterable[Mapping[str, Any]] = (), metadata: Iterable[Mapping[str, Any]] = (), dataset_description: str | None = None) Dataset#

Append examples to a dataset on the Phoenix server. If dataframe or csv_file_path is provided, you must also provide input_keys (and optionally output_keys, metadata_keys, or both), a list of strings denoting the column names in the dataframe or CSV file. Alternatively, a sequence of dictionaries can be provided via inputs (and optionally outputs, metadata, or both), each item of which represents a separate example in the dataset.

Parameters:
  • dataset_name (str) – Name of the dataset.

  • dataframe (pd.DataFrame) – pandas DataFrame.

  • csv_file_path (str | Path) – Location of a CSV text file.

  • input_keys (Iterable[str]) – List of column names used as input keys. input_keys, output_keys, and metadata_keys must be mutually disjoint, and each must exist in the CSV column headers.

  • output_keys (Iterable[str]) – List of column names used as output keys. input_keys, output_keys, and metadata_keys must be mutually disjoint, and each must exist in the CSV column headers.

  • metadata_keys (Iterable[str]) – List of column names used as metadata keys. input_keys, output_keys, and metadata_keys must be mutually disjoint, and each must exist in the CSV column headers.

  • inputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to an example in the dataset.

  • outputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to an example in the dataset.

  • metadata (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to an example in the dataset.

  • dataset_description (Optional[str]) – Description of the dataset.

Returns:

A Dataset object with its examples.
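
A hedged sketch of the dictionary-based path; the dataset name and example payloads are placeholders:

    from phoenix import Client

    client = Client()
    # "qa-examples" and the example contents below are hypothetical.
    dataset = client.append_to_dataset(
        dataset_name="qa-examples",
        inputs=[{"question": "What is Phoenix?"}],
        outputs=[{"answer": "An observability library for LLM applications."}],
        metadata=[{"source": "hand-written"}],
    )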

get_dataset(*, id: str | None = None, name: str | None = None, version_id: str | None = None) Dataset#

Gets the dataset for a specific version, or gets the latest version of the dataset if no version is specified.

Parameters:
  • id (Optional[str]) – An ID for the dataset.

  • name (Optional[str]) – The name of the dataset. If provided, the ID is ignored and the dataset is retrieved by name.

  • version_id (Optional[str]) – An ID for the version of the dataset, or None.

Returns:

A Dataset object.
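
A short sketch of both retrieval modes; the name and IDs are placeholders:

    # Latest version, retrieved by name ("qa-examples" is hypothetical).
    dataset = client.get_dataset(name="qa-examples")
    # A specific version, retrieved by ID (both IDs are placeholders).
    pinned = client.get_dataset(id="RGF0YXNldDox", version_id="RGF0YXNldFZlcnNpb246MQ==")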

get_dataset_versions(dataset_id: str, /, *, limit: int | None = 100) DataFrame#

Get dataset versions as a pandas DataFrame.

Parameters:
  • dataset_id (str) – The dataset ID.

  • limit (Optional[int]) – Maximum number of versions to return, starting from the most recent version.

Returns:

pandas DataFrame
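
For illustration, reusing the placeholder dataset ID from above (dataset_id is positional-only):

    # Fetch up to the 10 most recent versions as a DataFrame.
    versions_df = client.get_dataset_versions("RGF0YXNldDox", limit=10)
    print(versions_df)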

get_evaluations(project_name: str | None = None) List[Evaluations]#

Retrieves evaluations for a given project from the Phoenix server or active session.

Parameters:

project_name (str, optional) – The name of the project to retrieve evaluations for. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

A list of Evaluations objects containing evaluation data. Returns an empty list if no evaluations are found.

Return type:

List[Evaluations]
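
A minimal sketch; "my-project" is an assumed project name:

    evaluations = client.get_evaluations(project_name="my-project")
    if not evaluations:
        print("No evaluations found for this project.")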

log_evaluations(*evals: Evaluations, **kwargs: Any) None#

Logs evaluation data to the Phoenix server.

Parameters:
  • evals (Evaluations) – One or more Evaluations objects containing the data to log.

  • project_name (str, optional) – The project name under which to log the evaluations. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

None
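
A sketch assuming SpanEvaluations (an Evaluations subclass from phoenix.trace) and a DataFrame indexed by span ID; the index name, span ID, and scores are placeholders:

    import pandas as pd
    from phoenix.trace import SpanEvaluations

    # The index name "context.span_id" and the row values are assumptions.
    eval_df = pd.DataFrame(
        {"label": ["correct"], "score": [1.0]},
        index=pd.Index(["7e2f08cb43bbbf21"], name="context.span_id"),
    )
    client.log_evaluations(SpanEvaluations(eval_name="correctness", dataframe=eval_df))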

log_traces(trace_dataset: TraceDataset, project_name: str | None = None) None#

Logs traces from a TraceDataset to the Phoenix server.

Parameters:
  • trace_dataset (TraceDataset) – A TraceDataset instance with the traces to log to the Phoenix server.

  • project_name (str, optional) – The project name under which to log the traces. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

None
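
A sketch assuming spans_df is a spans DataFrame obtained elsewhere (e.g. exported from another Phoenix instance); the project name is illustrative:

    from phoenix.trace import TraceDataset

    # spans_df is assumed to be a DataFrame in the TraceDataset span schema.
    trace_ds = TraceDataset(spans_df)
    client.log_traces(trace_ds, project_name="my-project")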

query_spans(*queries: SpanQuery, start_time: datetime | None = None, end_time: datetime | None = None, limit: int | None = 1000, root_spans_only: bool | None = None, project_name: str | None = None, stop_time: datetime | None = None) DataFrame | List[DataFrame] | None#

Queries spans from the Phoenix server or active session based on specified criteria.

Parameters:
  • queries (SpanQuery) – One or more SpanQuery objects defining the query criteria.

  • start_time (datetime, optional) – The start time for the query range. Default None.

  • end_time (datetime, optional) – The end time for the query range. Default None.

  • limit (int, optional) – Maximum number of spans to return. Defaults to 1000.

  • root_spans_only (bool, optional) – If True, only root spans are returned. Default None.

  • project_name (str, optional) – The project name to query spans for. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

A pandas DataFrame or a list of pandas DataFrames containing the queried span data, or None if no spans are found.

Return type:

Optional[Union[pd.DataFrame, List[pd.DataFrame]]]
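
A sketch assuming SpanQuery from phoenix.trace.dsl; the filter expression, selected fields, and project name are illustrative:

    from datetime import datetime, timedelta, timezone
    from phoenix.trace.dsl import SpanQuery

    # Select LLM spans from the last day and project their input/output values.
    query = SpanQuery().where("span_kind == 'LLM'").select(
        input="input.value",
        output="output.value",
    )
    df = client.query_spans(
        query,
        start_time=datetime.now(timezone.utc) - timedelta(days=1),
        limit=500,
        project_name="my-project",
    )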

upload_dataset(*, dataset_name: str, dataframe: DataFrame | None = None, csv_file_path: str | Path | None = None, input_keys: Iterable[str] = (), output_keys: Iterable[str] = (), metadata_keys: Iterable[str] = (), inputs: Iterable[Mapping[str, Any]] = (), outputs: Iterable[Mapping[str, Any]] = (), metadata: Iterable[Mapping[str, Any]] = (), dataset_description: str | None = None) Dataset#

Upload examples as a dataset to the Phoenix server. If dataframe or csv_file_path is provided, you must also provide input_keys (and optionally output_keys, metadata_keys, or both), a list of strings denoting the column names in the dataframe or CSV file. Alternatively, a sequence of dictionaries can be provided via inputs (and optionally outputs, metadata, or both), each item of which represents a separate example in the dataset.

Parameters:
  • dataset_name (str) – Name of the dataset.

  • dataframe (pd.DataFrame) – pandas DataFrame.

  • csv_file_path (str | Path) – Location of a CSV text file.

  • input_keys (Iterable[str]) – List of column names used as input keys. input_keys, output_keys, and metadata_keys must be mutually disjoint, and each must exist in the CSV column headers.

  • output_keys (Iterable[str]) – List of column names used as output keys. input_keys, output_keys, and metadata_keys must be mutually disjoint, and each must exist in the CSV column headers.

  • metadata_keys (Iterable[str]) – List of column names used as metadata keys. input_keys, output_keys, and metadata_keys must be mutually disjoint, and each must exist in the CSV column headers.

  • inputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to an example in the dataset.

  • outputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to an example in the dataset.

  • metadata (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to an example in the dataset.

  • dataset_description (Optional[str]) – Description of the dataset.

Returns:

A Dataset object with the uploaded examples.
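
A hedged sketch of the CSV path; the file name and column names are assumptions:

    # "examples.csv" is assumed to have "question", "answer", and "source" columns.
    dataset = client.upload_dataset(
        dataset_name="qa-examples",
        csv_file_path="examples.csv",
        input_keys=("question",),
        output_keys=("answer",),
        metadata_keys=("source",),
        dataset_description="Question/answer pairs for evaluation.",
    )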

property web_url: str#

Return the web URL of the Phoenix UI. This differs from the base URL in cases where there is a proxy, such as Colab.

Returns:

A fully qualified URL to the Phoenix UI.

Return type:

str
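
For example:

    # Print a link to the running Phoenix UI.
    print(client.web_url)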

exception DatasetUploadError#

Bases: Exception
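
A sketch of handling an upload failure; the import path for DatasetUploadError is an assumption based on this module's namespace, and the dataset contents are placeholders:

    from phoenix.session.client import DatasetUploadError  # import path is an assumption

    try:
        client.upload_dataset(
            dataset_name="qa-examples",
            inputs=[{"question": "What is Phoenix?"}],
        )
    except DatasetUploadError as err:
        print(f"Upload failed: {err}")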