Trace Calls with OpenTelemetry

Enable distributed tracing for your QType applications with OpenTelemetry to monitor LLM calls, execution times, and data flow in Phoenix or another observability platform.

QType YAML

telemetry:
  id: phoenix_trace
  provider: Phoenix
  endpoint: http://localhost:6006/v1/traces

Explanation

  • telemetry: Top-level application configuration for observability
  • id: Unique identifier for the telemetry sink
  • provider: Telemetry backend (Phoenix or Langfuse)
  • endpoint: URL where OpenTelemetry traces are sent
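QType validates this configuration itself when it loads the YAML; the sketch below just makes the expected shape of the telemetry block concrete. The helper name and the exact error messages are ours, not part of QType — only the field names and the two provider values come from the documentation above.

```python
# Illustrative validator for the telemetry block described above.
# Field names and provider values are from the docs; everything else is an assumption.
REQUIRED = {"id", "provider", "endpoint"}
SUPPORTED_PROVIDERS = {"Phoenix", "Langfuse"}

def validate_telemetry(cfg: dict) -> list:
    """Return a list of problems with a telemetry config dict; empty means OK."""
    problems = ["missing field: %s" % f for f in sorted(REQUIRED - cfg.keys())]
    provider = cfg.get("provider")
    if provider is not None and provider not in SUPPORTED_PROVIDERS:
        problems.append("unsupported provider: %s" % provider)
    endpoint = cfg.get("endpoint", "")
    if endpoint and not endpoint.startswith(("http://", "https://")):
        problems.append("endpoint must be an http(s) URL")
    return problems

cfg = {"id": "phoenix_trace", "provider": "Phoenix",
       "endpoint": "http://localhost:6006/v1/traces"}
print(validate_telemetry(cfg))  # → []
```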

Starting Phoenix

Before running your application, start the Phoenix server:

python3 -m phoenix.server.main serve

Phoenix will start on http://localhost:6006, where you can view traces and spans in real time.
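If traces don't show up, a common cause is that the collector endpoint isn't reachable at all. A minimal stdlib sketch to check that before running your flow (the helper name is ours; the trace route typically rejects plain GETs, so any HTTP response at all counts as "up"):

```python
import urllib.request
import urllib.error

ENDPOINT = "http://localhost:6006/v1/traces"  # from the telemetry config above

def endpoint_reachable(url, timeout=2.0):
    """Return True if anything answers at the URL (any HTTP status counts)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded (e.g. 405 for a GET on a POST-only route) -- it's up.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc. -- nothing is listening.
        return False

print(endpoint_reachable(ENDPOINT))
```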

Complete Example

id: trace_example
description: Example of tracing QType application calls with OpenTelemetry to Phoenix

models:
  - type: Model
    id: nova
    provider: aws-bedrock
    model_id: amazon.nova-lite-v1:0
    inference_params:
      temperature: 0.7
      max_tokens: 512

flows:
  - type: Flow
    id: classify_text
    interface:
      type: Complete
    variables:
      - id: text
        type: text
      - id: response
        type: text
    inputs:
      - text
    outputs:
      - response
    steps:
      - id: classify
        type: LLMInference
        model: nova
        system_message: "Classify the following text as positive, negative, or neutral. Respond with only one word."
        inputs:
          - text
        outputs:
          - response

telemetry:
  id: phoenix_trace
  provider: Phoenix
  endpoint: http://localhost:6006/v1/traces

Run the example:

qtype run examples/observability_debugging/trace_with_opentelemetry.qtype.yaml --text "I love this product!"

Then open http://localhost:6006 in your browser to see the traced execution.

See Also