POST /event-ingestion/api/v2/events/trace_interaction
curl --request POST \
  --url https://backend.nebuly.com/event-ingestion/api/v2/events/trace_interaction \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "interaction": {
    "conversation_id": "<string>",
    "input": "<string>",
    "output": "<string>",
    "time_start": "<string>",
    "time_end": "<string>",
    "end_user": "<string>",
    "tags": {},
    "feature_flag": [
      "<string>"
    ]
  },
  "traces": [
    {
      "model": "<string>",
      "messages": [
        {}
      ],
      "output": "<string>",
      "input_tokens": 123,
      "output_tokens": 123,
      "source": "<string>",
      "input": "<string>",
      "outputs": [
        "<string>"
      ]
    }
  ],
  "anonymize": true
}'

You can use this endpoint to send a user interaction together with its generation trace. A generation trace is the list of chain steps your agentic AI system performed before generating the response.

For users in the European region, the correct endpoint is https://backend.eu.nebuly.com/event-ingestion/api/v2/events/trace_interaction.

To track the costs of the interaction correctly, you must provide the API calls made to the LLM as LLMTraces. Be sure to pass the input and output tokens consumed for a more accurate cost estimate.
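The curl example above can be reproduced in Python. The sketch below uses only the standard library; helper names such as `build_payload` and `send_interaction` are illustrative, not part of any Nebuly SDK, and the timestamps assume ISO 8601 strings are accepted.

```python
import json
import urllib.request
from datetime import datetime, timezone

API_URL = "https://backend.nebuly.com/event-ingestion/api/v2/events/trace_interaction"


def build_payload(user_input, output, model, input_tokens, output_tokens, end_user):
    """Assemble the trace_interaction request body (hypothetical helper)."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "interaction": {
            "input": user_input,
            "output": output,
            "time_start": now,
            "time_end": now,
            "end_user": end_user,
        },
        "traces": [
            {
                # One LLMTrace per API call made to the LLM; pass token
                # counts so cost estimation is accurate.
                "model": model,
                "messages": [{"role": "user", "content": user_input}],
                "output": output,
                "input_tokens": input_tokens,
                "output_tokens": output_tokens,
            }
        ],
        "anonymize": True,
    }


def send_interaction(payload, token):
    """POST the payload with a Bearer token (not called here)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

In production you would typically use an HTTP client such as `requests` and capture `time_start` before the LLM call and `time_end` after it, rather than using a single timestamp.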

interaction
object
required

The interaction to send to the Nebuly platform.

traces
object[]
required

The full trace of your LLM agent or chain.

anonymize
boolean
default:"true"

Boolean flag to anonymize your data.