OpenAI Agents SDK

Coralogix's AI Observability integrations are designed to provide deep insight into complex agentic AI applications. Through a dedicated integration with the OpenAI Agents SDK, Coralogix delivers end-to-end visibility into how your agents interact, collaborate, and use tools. This helps teams monitor the flow of tasks across Handoffs, analyze tool performance, and optimize the entire agentic system for efficiency and accuracy.

Overview

This library offers customized OpenTelemetry instrumentation for the OpenAI Agents SDK, optimized to support large language model (LLM) application development with streamlined integration, detailed production tracing, and effective debugging capabilities.

Requirements

Installation

Run the following command.

pip install "llm-tracekit-openai-agents"

Authentication

Authentication data is passed when defining the OTel span exporter:

  1. Choose the ingress.<domain>:443 endpoint that corresponds to your Coralogix domain using the domain selector at the top of the page.
  2. Use your customized API key in the authorization request header.
  3. Provide the application and subsystem names.

from llm_tracekit.openai_agents import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<domain>:443",
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

Note

All of the authentication parameters can also be provided through environment variables (CX_TOKEN, CX_ENDPOINT, etc.).

Usage

This section describes how to set up instrumentation for the OpenAI Agents SDK.

Set up tracing

Automatic

Use the setup_export_to_coralogix function to set up tracing and export traces to Coralogix. See the code snippet in the Authentication section.

Manual

Alternatively, you can set up tracing manually.

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Create a tracer provider that tags all spans with the service name
tracer_provider = TracerProvider(
    resource=Resource.create({SERVICE_NAME: "ai-service"}),
)

# With no arguments, OTLPSpanExporter reads its endpoint and headers from the
# standard OTEL_EXPORTER_OTLP_* environment variables
exporter = OTLPSpanExporter()
span_processor = SimpleSpanProcessor(exporter)
tracer_provider.add_span_processor(span_processor)
trace.set_tracer_provider(tracer_provider)
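
SimpleSpanProcessor exports each span synchronously as soon as it ends, which is convenient while debugging. For production workloads you would typically swap in the OpenTelemetry SDK's BatchSpanProcessor, which queues finished spans and exports them in the background. A minimal sketch that replaces the processor lines above:

from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Export finished spans in background batches instead of blocking on every span
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))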

Instrument

To instrument all clients, call the instrument method.

from llm_tracekit.openai_agents import OpenAIAgentsInstrumentor

OpenAIAgentsInstrumentor().instrument()

Uninstrument

To uninstrument clients, call the uninstrument method.

OpenAIAgentsInstrumentor().uninstrument()

Full example

from agents import Agent, ModelSettings, RunConfig, Runner, trace
from llm_tracekit.openai_agents import OpenAIAgentsInstrumentor, setup_export_to_coralogix

# Optional: Configure sending spans to Coralogix
# Reads Coralogix connection details from the following environment variables:
# - CX_TOKEN
# - CX_ENDPOINT
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

# Activate instrumentation
OpenAIAgentsInstrumentor().instrument()

# OpenAI Agents usage example
agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Option 1: Pass a user identifier via ModelSettings extra_args (per-agent)
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model_settings=ModelSettings(extra_args={"user": "user@company.com"}),
)
result = Runner.run_sync(agent, input="Write a short poem on open telemetry.")

# Option 2: Pass a user identifier via trace metadata (per-trace)
with trace("my-trace", metadata={"user": "user@company.com"}):
    result = Runner.run_sync(agent, input="Write a short poem on open telemetry.")

# Option 3: Pass a user identifier via RunConfig (per-run)
result = Runner.run_sync(
    agent,
    input="Write a short poem on open telemetry.",
    run_config=RunConfig(trace_metadata={"user": "user@company.com"}),
)
print(result.final_output)

Enable message content capture

By default, message content — prompt contents, completions, function arguments, and return values — is not captured. To capture message content as span attributes, do either of the following:

  • Pass capture_content=True when calling setup_export_to_coralogix.
  • Set the environment variable OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT to true.

Most Coralogix AI evaluations require message contents to function properly, so enabling message capture is strongly recommended.
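
As a sketch of the environment-variable route (assuming the variable is read when instrumentation is enabled, so it must be set beforehand):

import os

# Equivalent to exporting the variable in your shell before launching the app
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"

from llm_tracekit.openai_agents import OpenAIAgentsInstrumentor
OpenAIAgentsInstrumentor().instrument()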

Semantic conventions

| Attribute | Type | Description | Example |
|---|---|---|---|
| `gen_ai.prompt.<message_number>.role` | string | Role of the message author for user message `<message_number>` | `system`, `user`, `assistant`, `tool` |
| `gen_ai.prompt.<message_number>.content` | string | Contents of user message `<message_number>` | What's the weather in Paris? |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id` | string | ID of tool call in user message `<message_number>` | `call_O8NOz8VlxosSASEsOY7LDUcP` |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type` | string | Type of tool call in user message `<message_number>` | `function` |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name` | string | Name of the function used in tool call within user message `<message_number>` | `get_current_weather` |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments` | string | Arguments passed to the function used in tool call within user message `<message_number>` | `{"location": "Seattle, WA"}` |
| `gen_ai.prompt.<message_number>.tool_call_id` | string | Tool call ID in user message `<message_number>` | `call_mszuSIzqtI65i1wAUOE8w5H4` |
| `gen_ai.completion.<choice_number>.role` | string | Role of the message author for choice `<choice_number>` in the model response | `assistant` |
| `gen_ai.completion.<choice_number>.finish_reason` | string | Finish reason for choice `<choice_number>` in the model response | `stop`, `tool_calls`, `error` |
| `gen_ai.completion.<choice_number>.content` | string | Contents of choice `<choice_number>` in the model response | The weather in Paris is rainy and overcast, with temperatures around 57°F |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id` | string | ID of tool call in choice `<choice_number>` | `call_O8NOz8VlxosSASEsOY7LDUcP` |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type` | string | Type of tool call in choice `<choice_number>` | `function` |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name` | string | Name of the function used in tool call within choice `<choice_number>` | `get_current_weather` |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments` | string | Arguments passed to the function used in tool call within choice `<choice_number>` | `{"location": "Seattle, WA"}` |
| `gen_ai.request.tools.<tool_number>.type` | string | Type of tool definition advertised to the model | `function` |
| `gen_ai.request.tools.<tool_number>.function.name` | string | Name of the tool/function exposed to the model | `get_current_weather` |
| `gen_ai.request.tools.<tool_number>.function.description` | string | Description of the tool/function when provided by the SDK response payload | Get current weather for a city. |
| `gen_ai.request.tools.<tool_number>.function.parameters` | string | JSON schema describing the tool/function parameters passed with the request | `{"type": "object", "properties": {"city": {"type": "string"}}}` |
| `gen_ai.request.user` | string | A unique identifier representing the end user (from `ModelSettings(extra_args={"user": "..."})`, `trace(metadata={"user": "..."})`, or `RunConfig(trace_metadata={"user": "..."})`) | user@company.com |

Agent spans

These spans represent the execution of a single agent. They act as parents for LLM calls, guardrails, and handoffs initiated by that agent.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as an agent execution. | `agent` |
| `agent_name` | string | The name of the agent being executed. | `Assistant` |
| `handoffs` | string[] | A list of other agents that this agent is capable of handing off to. | `["WeatherAgent"]` |
| `tools` | string[] | A list of tool names available to the agent. | `["get_current_weather"]` |
| `output_type` | string | The expected data type of the agent's final output. | `MessageOutput` |
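
For illustration, an agent defined as follows would produce an agent span with the attribute values shown above. This is a minimal sketch; the tool body, sub-agent, and output model are hypothetical:

from pydantic import BaseModel
from agents import Agent, function_tool

class MessageOutput(BaseModel):
    response: str

@function_tool
def get_current_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"The weather in {city} is 30°C and sunny."

weather_agent = Agent(name="WeatherAgent", instructions="Answer weather questions.")

# Expected agent span: agent_name="Assistant", tools=["get_current_weather"],
# handoffs=["WeatherAgent"], output_type="MessageOutput"
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    tools=[get_current_weather],
    handoffs=[weather_agent],
    output_type=MessageOutput,
)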

Guardrail spans

These spans represent the execution of a guardrail check.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as a guardrail. | `guardrail` |
| `name` | string | The unique name of the guardrail being executed. | `MathGuardrail` |
| `triggered` | boolean | Indicates whether the guardrail condition was met (and triggered). | `false` |
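
For context, a minimal sketch of an input guardrail built with the Agents SDK's input_guardrail decorator (the guardrail logic here is hypothetical); the span's triggered attribute mirrors the tripwire_triggered value the guardrail returns:

from agents import Agent, GuardrailFunctionOutput, input_guardrail

@input_guardrail
async def math_guardrail(ctx, agent, user_input) -> GuardrailFunctionOutput:
    # Trip the guardrail when the request looks like a math question
    looks_like_math = "solve" in str(user_input).lower()
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=looks_like_math)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    input_guardrails=[math_guardrail],
)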

Handoff spans

These spans represent the moment an agent attempts to delegate a task to another agent.

Handling multiple handoffs

If the LLM attempts to hand off to multiple agents in a single turn, the to_agent attribute will only contain the name of the first agent in the list. The span will also be marked with an error status to indicate this ambiguity.

| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as a handoff. | `handoff` |
| `from_agent` | string | The name of the agent initiating the handoff. | `Assistant` |
| `to_agent` | string | The name of the agent intended to receive the handoff. | `WeatherAgent` |
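
For instance (a sketch with hypothetical agent names), a run that delegates a weather question would produce a handoff span like the one described above:

from agents import Agent, Runner

weather_agent = Agent(name="WeatherAgent", instructions="Answer weather questions.")
assistant = Agent(
    name="Assistant",
    instructions="Hand any weather question off to WeatherAgent.",
    handoffs=[weather_agent],
)

# Expected handoff span: from_agent="Assistant", to_agent="WeatherAgent"
result = Runner.run_sync(assistant, input="What's the weather in Paris?")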

Function spans

These spans represent the execution of a tool (a Python function).
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as a function. | `function` |
| `name` | string | The name of the function that was called. | `get_current_weather` |
| `input` | string | The JSON string of arguments passed to the function. | `{"city":"Tel Aviv"}` |
| `output` | string | The string representation of the function's return value. | The weather in Tel Aviv is 30°C and sunny. |

Enriched LLM call spans

These attributes are added to the existing span to link LLM calls back to the responsible agent.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `gen_ai.agent.name` | string | The name of the agent that initiated this LLM call. | `Assistant`, `WeatherAgent` |

Next steps

Once your integration is set up, explore the AI Center Overview to monitor performance, costs, quality issues, and security across all your AI applications — and to set up Guardrails for real-time policy enforcement.