Send Interaction
curl --request POST \
  --url https://backend.nebuly.com/event-ingestion/api/v2/events/interactions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "interaction": {
    "conversation_id": "<string>",
    "output": "<string>",
    "time_start": "<string>",
    "time_end": "<string>",
    "messages": [
      {}
    ],
    "end_user": "<string>",
    "rag_sources": [
      "<string>"
    ],
    "model": "<string>",
    "tags": {},
    "feature_flag": [
      "<string>"
    ]
  },
  "anonymize": true
}'
You can use this endpoint to send a user interaction in simple AI systems. This approach is suggested for non-agentic systems, where the AI system's response is generated in a single call to the LLM, without using any external tools or resources.
Please refer to the interaction with trace endpoint for a more comprehensive way to track interactions.
For users of the European region, the correct endpoint is https://backend.eu.nebuly.com/event-ingestion/api/v2/events/interactions.
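For reference, here is a minimal Python sketch of the same request using the requests library. The payload values, the NEBULY_API_KEY environment variable, and the model name are illustrative choices, not part of the API specification:

import os

import requests

# Use backend.eu.nebuly.com instead for European-region workspaces.
URL = "https://backend.nebuly.com/event-ingestion/api/v2/events/interactions"

payload = {
    "interaction": {
        "conversation_id": "conv-1234",
        "output": "The weather is currently rainy in Turin.",
        "time_start": "2023-12-07T15:00:00.000Z",
        "time_end": "2023-12-07T15:00:10.000Z",
        "messages": [
            {"role": "user", "content": "What's the weather like in Turin?"}
        ],
        "end_user": "hashed-user-id",
        "model": "gpt-4",
    },
    "anonymize": True,
}

response = requests.post(
    URL,
    json=payload,  # also sets the Content-Type: application/json header
    headers={"Authorization": f"Bearer {os.environ['NEBULY_API_KEY']}"},
)
response.raise_for_status()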
The interaction to send to the Nebuly platform.
The conversation id of the interaction. This parameter can be used to group interactions under a specific conversation.
The LLM output in the interaction (the text shown to the user as assistant response).
The start time of the call to the LLM. You can approximate this to the time when the user sends the interaction. The accepted format is ISO 8601.
Example: 2023-12-07T15:00:00.000Z
The end time of the call to the LLM. This is when the user receives the full answer from the model. The accepted format is ISO 8601.
Example: 2023-12-07T15:00:10.000Z
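If you build the payload in code, both timestamps can be produced with the standard library; a small Python sketch (the surrounding call is illustrative):

from datetime import datetime, timezone

# Record the start when the user submits the request...
time_start = datetime.now(timezone.utc).isoformat()

# ... call your LLM here ...

# ...and the end once the full answer has arrived. Both values are
# valid ISO 8601 strings, e.g. "2023-12-07T15:00:00.123456+00:00".
time_end = datetime.now(timezone.utc).isoformat()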
A list of messages from the conversation so far, following the same format used by OpenAI Chat Completion endpoints.
Each message has two required fields:
- role: the role of who is sending the message. Possible values are: system, user, assistant, tool.
- content: the content of the message.
The messages in the list should be ordered according to the conversation’s sequence, from the oldest message to the most recent.
Example:
"messages": [
{
"role": "system",
"content": "This is a system prompt"
},
{
"role": "user",
"content": "What's the weather like in Turin?"
},
{
"role": "assistant",
"content": "The weather is currently rainy in Turin."
},
{
"role": "user",
"content": "What's the weather like in Rome?"
}
]
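Because the list must stay in chronological order, a common pattern is to append each completed turn to a running history and send it as-is in the next payload. A hypothetical Python sketch:

messages = [
    {"role": "system", "content": "This is a system prompt"},
]

def record_turn(user_input: str, assistant_output: str) -> None:
    # Keep the history ordered oldest-to-newest so it can be sent
    # unchanged as the "messages" field of the next interaction.
    messages.append({"role": "user", "content": user_input})
    messages.append({"role": "assistant", "content": assistant_output})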
An id or username uniquely identifying the end-user. We recommend hashing their username or email address to avoid sending us any identifying information.
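For example, one simple way to derive such an identifier in Python (the normalization and the choice of SHA-256 are illustrative; any stable hash works):

import hashlib

def hash_end_user(email: str) -> str:
    # Only this digest leaves your system; the raw address does not.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

end_user = hash_end_user("jane.doe@example.com")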
The RAG sources used to produce the output. Note that this is an array of strings, so only the name of each RAG source should be given. If you are also interested in tracking the input and output of a RAG source, please refer to the interaction with trace endpoint.
The LLM model you are using. Please note that this is needed if you want to visualize the cost of your requests. Currently we support costs only for OpenAI models; support for other providers is coming soon.
Tag user interactions by adding key-value pairs using this parameter. Each key represents the tag name, and the corresponding value is the tag value.
For example, to tag an interaction with the model version used to reply to the user input, provide it in this parameter, e.g. {"version": "v1.0.0"}. You have the flexibility to define custom tags, making them available as potential filters on the Nebuly platform.
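For example, the tags object in the request body could look like this (tag names and values are illustrative):

"tags": {
  "version": "v1.0.0",
  "environment": "production"
}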
This field is used for the A/B testing feature. Please refer to its documentation for further details.
Boolean flag to anonymize your data.