OpenAI Agents SDK
Coralogix's AI Observability integrations are designed to provide deep insight into complex agentic AI applications. Through a dedicated integration with the OpenAI Agents SDK, Coralogix delivers end-to-end visibility into how your agents interact, collaborate, and utilize tools. This helps teams monitor the flow of tasks across Handoffs, analyze tool performance, and optimize the entire agentic system for efficiency and accuracy.
Overview
This library offers customized OpenTelemetry instrumentation for the OpenAI Agents SDK, optimized to support large language model (LLM) application development with streamlined integration, detailed production tracing, and effective debugging capabilities.
Requirements
- Python 3.10–3.13.
- A Coralogix API key.
Installation
Run the following command (the package name is inferred from the `llm_tracekit` import path used throughout this guide):
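```bash
pip install llm-tracekit
```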
Authentication
Authentication data is passed when defining the OTel span exporter:
- Use the `ingress.<your_coralogix_domain>:443` endpoint that corresponds to your Coralogix domain.
- Use your Coralogix API key in the authorization request header.
- Provide the application and subsystem names.
```python
from llm_tracekit.openai_agents import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<your_coralogix_domain>:443",
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)
```
Note
All of the authentication parameters can also be provided through environment variables (`CX_TOKEN`, `CX_ENDPOINT`, etc.).
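For example, in a shell (a sketch, assuming the endpoint takes the same `ingress.<your_coralogix_domain>:443` form as above):

```bash
export CX_TOKEN="<your_coralogix_token>"
export CX_ENDPOINT="ingress.<your_coralogix_domain>:443"
```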
Usage
This section describes how to set up instrumentation for the OpenAI Agents SDK.
Set up tracing
Automatic
Use the setup_export_to_coralogix function to set up tracing and export traces to Coralogix. See the code snippet in the Authentication section.
Manual
Alternatively, you can set up tracing manually.
```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Create a tracer provider tagged with the service name
tracer_provider = TracerProvider(
    resource=Resource.create({SERVICE_NAME: "ai-service"}),
)

# Export spans over OTLP/gRPC; by default the exporter reads connection
# details from the standard OTEL_EXPORTER_OTLP_* environment variables
exporter = OTLPSpanExporter()
span_processor = SimpleSpanProcessor(exporter)
tracer_provider.add_span_processor(span_processor)

# Register the provider globally so the instrumentor uses it
trace.set_tracer_provider(tracer_provider)
```
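When exporting directly to Coralogix without a collector, the endpoint and API key from the Authentication section can also be wired into the exporter explicitly. A minimal sketch, assuming the API key is sent as a Bearer token and that the `cx.application.name` and `cx.subsystem.name` resource attributes carry the application and subsystem names:

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider

# Assumed: Coralogix maps these resource attributes to application/subsystem
tracer_provider = TracerProvider(
    resource=Resource.create({
        SERVICE_NAME: "ai-service",
        "cx.application.name": "ai-application",
        "cx.subsystem.name": "ai-subsystem",
    }),
)

# Assumed: the API key is sent as a Bearer token in the authorization header
exporter = OTLPSpanExporter(
    endpoint="ingress.<your_coralogix_domain>:443",
    headers={"authorization": "Bearer <your_coralogix_token>"},
)
```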
Instrument
To instrument all clients, call the `instrument` method.

```python
from llm_tracekit.openai_agents import OpenAIAgentsInstrumentor

OpenAIAgentsInstrumentor().instrument()
```
Uninstrument
To uninstrument clients, call the `uninstrument` method.
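For example, assuming the instrumentor exposes the standard OpenTelemetry `BaseInstrumentor` interface:

```python
from llm_tracekit.openai_agents import OpenAIAgentsInstrumentor

OpenAIAgentsInstrumentor().uninstrument()
```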
Full example
```python
from agents import Agent, ModelSettings, RunConfig, Runner, trace
from llm_tracekit.openai_agents import OpenAIAgentsInstrumentor, setup_export_to_coralogix

# Optional: Configure sending spans to Coralogix
# Reads Coralogix connection details from the following environment variables:
# - CX_TOKEN
# - CX_ENDPOINT
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

# Activate instrumentation
OpenAIAgentsInstrumentor().instrument()

# OpenAI Agents usage example
agent = Agent(name="Assistant", instructions="You are a helpful assistant.")

# Option 1: Pass user identifier via ModelSettings extra_args (per-agent)
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
    model_settings=ModelSettings(extra_args={"user": "user@company.com"}),
)
result = Runner.run_sync(agent, input="Write a short poem on open telemetry.")

# Option 2: Pass user identifier via trace metadata (per-trace)
with trace("my-trace", metadata={"user": "user@company.com"}):
    result = Runner.run_sync(agent, input="Write a short poem on open telemetry.")

# Option 3: Pass user identifier via RunConfig (per-run)
result = Runner.run_sync(
    agent,
    input="Write a short poem on open telemetry.",
    run_config=RunConfig(trace_metadata={"user": "user@company.com"}),
)

print(result.final_output)
```
Enable message content capture
By default, message content — prompt contents, completions, function arguments, and return values — is not captured. To capture message content as span attributes:
- Pass `capture_content=True` when calling `setup_export_to_coralogix`, or
- Set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to `true` (see the example below).
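For example, in a shell:

```bash
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
```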
Most Coralogix AI evaluations require message contents to function properly, so enabling message capture is strongly recommended.
Semantic conventions
| Attribute | Type | Description | Example |
|---|---|---|---|
| `gen_ai.prompt.<message_number>.role` | string | Role of the author of prompt message `<message_number>` | system, user, assistant, tool |
| `gen_ai.prompt.<message_number>.content` | string | Contents of prompt message `<message_number>` | What's the weather in Paris? |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id` | string | ID of tool call in prompt message `<message_number>` | call_O8NOz8VlxosSASEsOY7LDUcP |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type` | string | Type of tool call in prompt message `<message_number>` | function |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name` | string | Name of the function used in tool call within prompt message `<message_number>` | get_current_weather |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments` | string | Arguments passed to the function used in tool call within prompt message `<message_number>` | {"location": "Seattle, WA"} |
| `gen_ai.prompt.<message_number>.tool_call_id` | string | Tool call ID in prompt message `<message_number>` | call_mszuSIzqtI65i1wAUOE8w5H4 |
| `gen_ai.completion.<choice_number>.role` | string | Role of the message author for choice `<choice_number>` in the model response | assistant |
| `gen_ai.completion.<choice_number>.finish_reason` | string | Finish reason for choice `<choice_number>` in the model response | stop, tool_calls, error |
| `gen_ai.completion.<choice_number>.content` | string | Contents of choice `<choice_number>` in the model response | The weather in Paris is rainy and overcast, with temperatures around 57°F |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id` | string | ID of tool call in choice `<choice_number>` | call_O8NOz8VlxosSASEsOY7LDUcP |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type` | string | Type of tool call in choice `<choice_number>` | function |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name` | string | Name of the function used in tool call within choice `<choice_number>` | get_current_weather |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments` | string | Arguments passed to the function used in tool call within choice `<choice_number>` | {"location": "Seattle, WA"} |
| `gen_ai.request.tools.<tool_number>.type` | string | Type of tool definition advertised to the model | function |
| `gen_ai.request.tools.<tool_number>.function.name` | string | Name of the tool/function exposed to the model | get_current_weather |
| `gen_ai.request.tools.<tool_number>.function.description` | string | Description of the tool/function when provided in the SDK response payload | Get current weather for a city. |
| `gen_ai.request.tools.<tool_number>.function.parameters` | string | JSON schema describing the tool/function parameters passed with the request | {"type": "object", "properties": {"city": {"type": "string"}}} |
| `gen_ai.request.user` | string | A unique identifier representing the end user (from `ModelSettings(extra_args={"user": "..."})`, `trace(metadata={"user": "..."})`, or `RunConfig(trace_metadata={"user": "..."})`) | user@company.com |
Agent spans
These spans represent the execution of a single agent. They act as parents for LLM calls, guardrails, and handoffs initiated by that agent.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as an agent execution. | agent |
| `agent_name` | string | The name of the agent being executed. | Assistant |
| `handoffs` | string[] | A list of other agents that this agent is capable of handing off to. | ["WeatherAgent"] |
| `tools` | string[] | A list of tool names available to the agent. | ["get_current_weather"] |
| `output_type` | string | The expected data type of the agent's final output. | MessageOutput |
Guardrail spans
These spans represent the execution of a guardrail check.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as a guardrail. | guardrail |
| `name` | string | The unique name of the guardrail being executed. | MathGuardrail |
| `triggered` | boolean | Indicates whether the guardrail condition was met (and triggered). | false |
Handoff spans
These spans represent the moment an agent attempts to delegate a task to another agent.
Handling multiple handoffs
If the LLM attempts to hand off to multiple agents in a single turn, the `to_agent` attribute will contain only the name of the first agent in the list. The span will also be marked with an error status to indicate this ambiguity.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as a handoff. | handoff |
| `from_agent` | string | The name of the agent initiating the handoff. | Assistant |
| `to_agent` | string | The name of the agent intended to receive the handoff. | WeatherAgent |
Function spans
These spans represent the execution of a tool (a Python function).
| Attribute | Type | Description | Example |
|---|---|---|---|
| `type` | string | The type of the span, identifying it as a function. | function |
| `name` | string | The name of the function that was called. | get_current_weather |
| `input` | string | The JSON string of arguments passed to the function. | {"city":"Tel Aviv"} |
| `output` | string | The string representation of the function's return value. | The weather in Tel Aviv is 30°C and sunny. |
Enriched LLM call spans
These attributes are added to the existing span to link LLM calls back to the responsible agent.
| Attribute | Type | Description | Example |
|---|---|---|---|
| `gen_ai.agent.name` | string | The name of the agent that initiated this LLM call. | Assistant, WeatherAgent |
Next steps
Once your integration is set up, explore the AI Center Overview to monitor performance, costs, quality issues, and security across all your AI applications — and to set up Guardrails for real-time policy enforcement.