client#

API reference for phoenix.Client, which helps you upload and download data to and from local or remote Phoenix servers.

phoenix.Client#

class Client(*, endpoint=None, warn_if_server_not_running=True, headers=None, **kwargs)#

Bases: TraceDataExtractor

__init__(*, endpoint=None, warn_if_server_not_running=True, headers=None, **kwargs)#

Client for connecting to a Phoenix server.

Parameters:
  • endpoint (str, optional) – Phoenix server endpoint, e.g. http://localhost:6006. If not provided, the endpoint will be inferred from the environment variables.

  • headers (Mapping[str, str], optional) – Headers to include in each network request. If not provided, the headers will be inferred from the environment variables (if present).
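
Example (a minimal sketch; the endpoint is the local default mentioned above, and the header value is a placeholder, not a required format):

    import phoenix as px

    # Connect to a locally running Phoenix server.
    client = px.Client(endpoint="http://localhost:6006")

    # Or supply headers, e.g. for an instance that expects an auth token (placeholder value).
    authed_client = px.Client(
        endpoint="http://localhost:6006",
        headers={"Authorization": "Bearer <your-api-key>"},
    )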

append_to_dataset(*, dataset_name, dataframe=None, csv_file_path=None, input_keys=(), output_keys=(), metadata_keys=(), inputs=(), outputs=(), metadata=(), dataset_description=None)#

Append examples to a dataset on the Phoenix server. If dataframe or csv_file_path is provided, input_keys must also be provided (optionally along with output_keys, metadata_keys, or both); these are lists of strings denoting the column names in the dataframe or the CSV file. Alternatively, a sequence of dictionaries can be provided via inputs (optionally along with outputs, metadata, or both), each item of which represents a separate example in the dataset.

Parameters:
  • dataset_name (str) – Name of the dataset.

  • dataframe (pd.DataFrame) – pandas DataFrame.

  • csv_file_path (str | Path) – Location of a CSV text file.

  • input_keys (Iterable[str]) – List of column names used as input keys. input_keys, output_keys, and metadata_keys must be disjoint and must exist among the column headers of the dataframe or CSV file.

  • output_keys (Iterable[str]) – List of column names used as output keys. input_keys, output_keys, and metadata_keys must be disjoint and must exist among the column headers of the dataframe or CSV file.

  • metadata_keys (Iterable[str]) – List of column names used as metadata keys. input_keys, output_keys, and metadata_keys must be disjoint and must exist among the column headers of the dataframe or CSV file.

  • inputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to the input of an example in the dataset.

  • outputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to the output of an example in the dataset.

  • metadata (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to the metadata of an example in the dataset.

  • dataset_description (Optional[str]) – Description of the dataset.

Returns:

A Dataset object with its examples.
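
Example (a sketch that appends one row to an assumed existing dataset named "qa-pairs"; the column names are hypothetical):

    import pandas as pd
    import phoenix as px

    client = px.Client()
    new_rows = pd.DataFrame(
        {
            "question": ["What is Phoenix?"],
            "answer": ["An open-source LLM observability tool."],
        }
    )
    dataset = client.append_to_dataset(
        dataset_name="qa-pairs",  # assumed to already exist on the server
        dataframe=new_rows,
        input_keys=["question"],
        output_keys=["answer"],
    )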

get_dataset(*, id=None, name=None, version_id=None)#

Gets the dataset for a specific version, or gets the latest version of the dataset if no version is specified.

Parameters:
  • id (Optional[str]) – An ID for the dataset.

  • name (Optional[str]) – The name of the dataset. If provided, the ID is ignored and the dataset is retrieved by name.

  • version_id (Optional[str]) – An ID for the version of the dataset, or None.

Returns:

A Dataset object.
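
Example (a sketch; the dataset name and IDs are placeholders):

    import phoenix as px

    client = px.Client()

    # Fetch the latest version of a dataset by name ...
    dataset = client.get_dataset(name="qa-pairs")

    # ... or pin a specific version by ID.
    pinned = client.get_dataset(id="<dataset-id>", version_id="<version-id>")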

get_dataset_versions(dataset_id, *, limit=100)#

Get dataset versions as a pandas DataFrame.

Parameters:
  • dataset_id (str) – The dataset ID.

  • limit (Optional[int]) – Maximum number of versions to return, starting from the most recent version.

Returns:

A pandas DataFrame of dataset versions.
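
Example (a sketch; "<dataset-id>" is a placeholder for a real dataset ID):

    import phoenix as px

    client = px.Client()
    versions_df = client.get_dataset_versions("<dataset-id>", limit=10)
    print(versions_df)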

get_evaluations(project_name=None)#

Retrieves evaluations for a given project from the Phoenix server or active session.

Parameters:

project_name (str, optional) – The name of the project to retrieve evaluations for. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

A list of Evaluations objects containing evaluation data. Returns an empty list if no evaluations are found.

Return type:

List[Evaluations]
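
Example (a sketch; the project name is an assumption, and omitting project_name falls back to the default project):

    import phoenix as px

    client = px.Client()
    evaluations = client.get_evaluations(project_name="my-llm-app")
    for evals in evaluations:
        print(type(evals).__name__)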

log_evaluations(*evals, **kwargs)#

Logs evaluation data to the Phoenix server.

Parameters:
  • evals (Evaluations) – One or more Evaluations objects containing the data to log.

  • project_name (str, optional) – The project name under which to log the evaluations. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

None
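
Example (a sketch assuming spans have already been evaluated; SpanEvaluations is assumed to come from phoenix.trace, and the span IDs, scores, evaluation name, and project name are placeholders):

    import pandas as pd
    import phoenix as px
    from phoenix.trace import SpanEvaluations

    client = px.Client()
    eval_df = pd.DataFrame(
        {"score": [1, 0], "label": ["correct", "incorrect"]},
        index=pd.Index(["<span-id-1>", "<span-id-2>"], name="span_id"),
    )
    client.log_evaluations(
        SpanEvaluations(eval_name="correctness", dataframe=eval_df),
        project_name="my-llm-app",  # optional; assumed project name
    )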

log_traces(trace_dataset, project_name=None)#

Logs traces from a TraceDataset to the Phoenix server.

Parameters:
  • trace_dataset (TraceDataset) – A TraceDataset instance with the traces to log to the Phoenix server.

  • project_name (str, optional) – The project name under which to log the traces. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

None
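
Example (a sketch that re-logs spans exported from one Phoenix instance to another; the endpoints and project name are assumptions, and get_spans_dataframe comes from the TraceDataExtractor base class):

    import phoenix as px
    from phoenix.trace import TraceDataset

    # Export spans from a local instance ...
    source = px.Client(endpoint="http://localhost:6006")
    spans_df = source.get_spans_dataframe()

    # ... and log them to another instance under a new project.
    destination = px.Client(endpoint="http://remote-phoenix:6006")
    destination.log_traces(TraceDataset(spans_df), project_name="migrated-traces")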

query_spans(*queries, start_time=None, end_time=None, limit=DEFAULT_SPAN_LIMIT, root_spans_only=None, project_name=None, stop_time=None, timeout=DEFAULT_TIMEOUT_IN_SECONDS)#

Queries spans from the Phoenix server or active session based on specified criteria.

Parameters:
  • queries (SpanQuery) – One or more SpanQuery objects defining the query criteria.

  • start_time (datetime, optional) – The start time for the query range. Default None.

  • end_time (datetime, optional) – The end time for the query range. Default None.

  • root_spans_only (bool, optional) – If True, only root spans are returned. Default None.

  • project_name (str, optional) – The project name to query spans for. This can be set using environment variables. If not provided, falls back to the default project.

Returns:

A pandas DataFrame or a list of pandas DataFrames containing the queried span data, or None if no spans are found.

Return type:

Union[pd.DataFrame, List[pd.DataFrame]]
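
Example (a sketch using SpanQuery from phoenix.trace.dsl; the filter expression, selected attributes, and project name are assumptions):

    from datetime import datetime, timedelta, timezone

    import phoenix as px
    from phoenix.trace.dsl import SpanQuery

    client = px.Client()
    query = SpanQuery().where("span_kind == 'LLM'").select(
        input="input.value",
        output="output.value",
    )
    spans_df = client.query_spans(
        query,
        start_time=datetime.now(timezone.utc) - timedelta(days=1),
        project_name="my-llm-app",
    )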

upload_dataset(*, dataset_name, dataframe=None, csv_file_path=None, input_keys=(), output_keys=(), metadata_keys=(), inputs=(), outputs=(), metadata=(), dataset_description=None)#

Upload examples as a dataset to the Phoenix server. If dataframe or csv_file_path is provided, input_keys must also be provided (optionally along with output_keys, metadata_keys, or both); these are lists of strings denoting the column names in the dataframe or the CSV file. Alternatively, a sequence of dictionaries can be provided via inputs (optionally along with outputs, metadata, or both), each item of which represents a separate example in the dataset.

Parameters:
  • dataset_name (str) – Name of the dataset.

  • dataframe (pd.DataFrame) – pandas DataFrame.

  • csv_file_path (str | Path) – Location of a CSV text file.

  • input_keys (Iterable[str]) – List of column names used as input keys. input_keys, output_keys, and metadata_keys must be disjoint and must exist among the column headers of the dataframe or CSV file.

  • output_keys (Iterable[str]) – List of column names used as output keys. input_keys, output_keys, and metadata_keys must be disjoint and must exist among the column headers of the dataframe or CSV file.

  • metadata_keys (Iterable[str]) – List of column names used as metadata keys. input_keys, output_keys, and metadata_keys must be disjoint and must exist among the column headers of the dataframe or CSV file.

  • inputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to the input of an example in the dataset.

  • outputs (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to the output of an example in the dataset.

  • metadata (Iterable[Mapping[str, Any]]) – List of dictionaries, each corresponding to the metadata of an example in the dataset.

  • dataset_description (Optional[str]) – Description of the dataset.

Returns:

A Dataset object with the uploaded examples.
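
Example (a sketch uploading a small hypothetical DataFrame of question/answer pairs):

    import pandas as pd
    import phoenix as px

    client = px.Client()
    df = pd.DataFrame(
        {
            "question": ["What is Phoenix?", "What does px.Client do?"],
            "answer": ["An LLM observability library.", "It connects to a Phoenix server."],
            "category": ["general", "api"],
        }
    )
    dataset = client.upload_dataset(
        dataset_name="qa-pairs",  # assumed new dataset name
        dataframe=df,
        input_keys=["question"],
        output_keys=["answer"],
        metadata_keys=["category"],
        dataset_description="Toy question/answer examples",
    )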

property web_url#

Return the web URL of the Phoenix UI. This can differ from the base URL when the server sits behind a proxy, as in Colab.

Returns:

A fully qualified URL to the Phoenix UI.

Return type:

str
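
Example:

    import phoenix as px

    client = px.Client()
    print(client.web_url)  # e.g. http://localhost:6006/ when running locally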