A Simple Chatbot

This example builds a simple chatbot that you can converse with in the included UI, and tracks its execution with OpenTelemetry.

The QType File

id: hello_world
description: A simple chat flow with OpenAI and telemetry
models:
  - type: Model
    id: gpt4
    provider: openai
    model_id: gpt-4
    inference_params:
      temperature: 0.7
      max_tokens: 512
    auth: openai_auth
auths:
  - type: api_key
    id: openai_auth
    api_key: ${OPENAI_KEY}
    host: https://api.openai.com
memories:
  - id: chat_memory
    token_limit: 10000
flows:
  - type: Flow
    id: chat_example
    interface:
      type: Conversational
    variables:
      - id: user_message
        type: ChatMessage
      - id: response
        type: ChatMessage
    inputs:
      - user_message
    outputs:
      - response
    steps:
      - id: llm_inference_step
        type: LLMInference
        model: gpt4
        system_message: "You are a helpful assistant."
        memory: chat_memory
        inputs:
          - user_message
        outputs:
          - response
telemetry:
  id: hello_world_telemetry
  endpoint: http://localhost:6006/v1/traces

You can download it here. There is also a version for AWS Bedrock.

The Architecture

flowchart TD
    subgraph APP ["📱 Application: hello_world"]
        direction TB

    subgraph FLOW_0 ["💬 Flow: chat_example
A simple chat flow with OpenAI"]
        direction LR
        FLOW_0_START@{shape: circle, label: "▶️ Start"}
        FLOW_0_S0@{shape: rounded, label: "✨ llm_inference_step"}
        FLOW_0_START -->|user_message: ChatMessage| FLOW_0_S0
    end

    subgraph RESOURCES ["🔧 Shared Resources"]
        direction LR
        AUTH_OPENAI_AUTH@{shape: hex, label: "🔐 openai_auth\nAPI_KEY"}
        MODEL_GPT_4@{shape: rounded, label: "✨ gpt-4 (openai)" }
        MODEL_GPT_4 -.->|uses| AUTH_OPENAI_AUTH
    end

    subgraph TELEMETRY ["📊 Observability"]
        direction TB
        TEL_SINK@{shape: curv-trap, label: "📡 hello_world_telemetry\nhttp://localhost:6006/v1/traces"}
    end

    end

    FLOW_0_S0 -.->|uses| MODEL_GPT_4
    FLOW_0_S0 -.->|traces| TEL_SINK

    %% Styling
    classDef appBox fill:none,stroke:#495057,stroke-width:3px
    classDef flowBox fill:#e1f5fe,stroke:#0277bd,stroke-width:2px
    classDef llmNode fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    classDef modelNode fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
    classDef authNode fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
    classDef telemetryNode fill:#fce4ec,stroke:#c2185b,stroke-width:2px
    classDef resourceBox fill:#f5f5f5,stroke:#616161,stroke-width:1px

    class APP appBox
    class FLOW_0 flowBox
    class RESOURCES resourceBox
    class TELEMETRY telemetryNode

Authorization

You'll need an OpenAI API key. Put it in a .env file and name the variable OPENAI_KEY.
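A minimal .env file looks like the sketch below (the key value is a placeholder; substitute your own):

```
# .env — loaded when the app starts; referenced as ${OPENAI_KEY} in the YAML
OPENAI_KEY=sk-your-key-here
```

The auths entry in the QType file resolves ${OPENAI_KEY} from this environment variable at runtime.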

Telemetry

The app pushes telemetry to a sink on your local machine. Before running the app, start arize-phoenix with:

phoenix serve
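If you don't have Phoenix installed yet, it's available on PyPI as arize-phoenix (assuming a working Python environment):

```shell
# install the Phoenix telemetry server, then start it on its default port
pip install arize-phoenix
phoenix serve
```

Leave this running in a separate terminal; traces will appear in the Phoenix UI as the chatbot handles requests.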

Running the App

Just run:

qtype serve chat_with_telemetry.qtype.yaml

Then open the chat at http://localhost:8000/ui