POST /event-ingestion/api/v2/events/trace_conversation
curl --request POST \
  --url https://backend.nebuly.com/event-ingestion/api/v2/events/trace_conversation \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "conversation": {
    "interactions": [
      {
        "input": "<string>",
        "output": "<string>",
        "time_start": "<string>",
        "time_end": "<string>",
        "tags": {},
        "trace": [
          {
            "model": "<string>",
            "messages": [
              {}
            ],
            "output": "<string>",
            "input_tokens": 123,
            "output_tokens": 123,
            "source": "<string>",
            "input": "<string>",
            "outputs": [
              "<string>"
            ]
          }
        ]
      }
    ],
    "end_user": "<string>"
  }
}'

For users in the European region, the correct endpoint is https://backend.eu.nebuly.com/event-ingestion/api/v2/events/trace_conversation.
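
If your application serves both regions, a small helper can select the base URL. The sketch below is a hypothetical convenience wrapper, not part of any official client; the dictionary keys are illustrative.

# Hypothetical helper that selects the ingestion endpoint by region.
BASE_URLS = {
    "default": "https://backend.nebuly.com",
    "eu": "https://backend.eu.nebuly.com",
}

def trace_conversation_url(region: str = "default") -> str:
    return f"{BASE_URLS[region]}/event-ingestion/api/v2/events/trace_conversation"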

You can use this endpoint to send the full conversation between the user and the AI system. This approach lets you send multiple interactions (with their traces) in a single request. The endpoint is designed for sending data asynchronously in bulk once conversations are completed.
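
For reference, the same request can be made from Python. The sketch below is a minimal example mirroring the schema shown in the curl command; the NEBULY_API_KEY environment variable, the field values, and the model and source names are illustrative assumptions, not prescribed by the API.

import os
from datetime import datetime, timezone

import requests

URL = "https://backend.nebuly.com/event-ingestion/api/v2/events/trace_conversation"

# Illustrative conversation payload; timestamps are sent as ISO 8601 strings.
payload = {
    "conversation": {
        "interactions": [
            {
                "input": "What is your return policy?",
                "output": "You can return items within 30 days.",
                "time_start": datetime.now(timezone.utc).isoformat(),
                "time_end": datetime.now(timezone.utc).isoformat(),
                "tags": {},
                "trace": [
                    {
                        # Placeholder model and source names.
                        "model": "example-model",
                        "messages": [
                            {"role": "user", "content": "What is your return policy?"}
                        ],
                        "output": "You can return items within 30 days.",
                        "input_tokens": 12,
                        "output_tokens": 9,
                        "source": "example-source",
                        "input": "What is your return policy?",
                        "outputs": ["You can return items within 30 days."],
                    }
                ],
            }
        ],
        "end_user": "user-123",
    }
}

response = requests.post(
    URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['NEBULY_API_KEY']}"},
)
response.raise_for_status()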

conversation (object, required)

The conversation a user is having with an LLM-based assistant.