Send Interaction
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
interaction: The interaction to send to the Nebuly platform.
input: The user input in the interaction.
output: The LLM output in the interaction (the text shown to the user as the assistant response).
time_start: The start time of the call to the LLM. You can approximate this to the time when the user sends the interaction. The accepted format is ISO 8601. Example: 2023-12-07T15:00:00.000Z
time_end: The end time of the call to the LLM. This is when the user receives the full answer from the model. The accepted format is ISO 8601. Example: 2023-12-07T15:00:10.000Z
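As an illustration, here is one way to produce timestamps in this format with Python's standard library (a minimal sketch; the helper name is our own, not part of any Nebuly SDK):
from datetime import datetime, timezone

def iso_utc_now() -> str:
    # ISO 8601 UTC with millisecond precision, e.g. 2023-12-07T15:00:00.000Z
    now = datetime.now(timezone.utc)
    return now.strftime("%Y-%m-%dT%H:%M:%S.") + f"{now.microsecond // 1000:03d}Z"

time_start = iso_utc_now()
# ... call your LLM and collect the full response ...
time_end = iso_utc_now()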
history: An array of [input, output] pairs representing the entire conversation so far. It should be structured as in the example below:
history = [
    ["input_1", "output_1"],
    ["input_2", "output_2"]
]
In this format, input_* and output_* stand for the input and output of each interaction: input_1 and output_1 belong to the first interaction, input_2 and output_2 to the second, and so on.
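When serialized into the JSON request body, each pair becomes a two-element array. A two-turn conversation might look like this (the questions and answers are illustrative):
"history": [
    ["What is the capital of France?", "The capital of France is Paris."],
    ["And of Italy?", "The capital of Italy is Rome."]
]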
end_user: An id or username uniquely identifying the end user. We recommend hashing their username or email address to avoid sending us any identifying information.
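For instance, a stable non-identifying id can be derived with a one-way hash. A minimal Python sketch (any deterministic hash works; the normalization shown is our assumption):
import hashlib

def end_user_id(email: str) -> str:
    # Normalize, then hash one-way so no identifying information leaves your system
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

end_user = end_user_id("jane.doe@example.com")  # same address always maps to the same id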
rag_sources: The RAG sources used to produce the output. Note that this is an array of strings, so only the name of each RAG source should be given. If you also want to track the input and output of the RAG sources, please refer to the interaction with trace endpoint.
model: The LLM model you are using. Note that this is needed if you want to visualize the cost of your requests. Costs are currently supported only for OpenAI models; support for other providers is coming soon.
system_prompt: The system prompt you are currently using to instruct your model. It is used to compute a more precise value for the interaction cost.
tags: Tag user interactions by adding key-value pairs using this parameter. Each key is the tag name, and the corresponding value is the tag value. For example, to tag an interaction with the model version used to reply to the user input, pass {"version": "v1.0.0"}. You can define custom tags, making them available as filters on the Nebuly platform.
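In the request body this might look as follows (the tag names are illustrative; any custom keys are allowed):
"tags": {
    "version": "v1.0.0",
    "persona": "support-bot"
}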
feature_flag: This field is used for the A/B testing feature. Please refer to its documentation for further details.
anonymize: Boolean flag to anonymize your data.
curl --request POST \
  --url https://backend.nebuly.com/event-ingestion/api/v1/events/interactions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "interaction": {
      "input": "<string>",
      "output": "<string>",
      "time_start": "<string>",
      "time_end": "<string>",
      "history": [
        [
          "<string>"
        ]
      ],
      "end_user": "<string>",
      "rag_sources": [
        "<string>"
      ],
      "model": "<string>",
      "system_prompt": "<string>",
      "tags": {},
      "feature_flag": [
        "<string>"
      ]
    },
    "anonymize": true
  }'
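Equivalently, a minimal Python sketch of the same request using the requests library (the token, conversation content, and timestamps are placeholder values; the payload mirrors the schema above):
import requests

url = "https://backend.nebuly.com/event-ingestion/api/v1/events/interactions"
headers = {
    "Authorization": "Bearer <token>",  # your auth token
    "Content-Type": "application/json",
}
payload = {
    "interaction": {
        "input": "What is the capital of France?",
        "output": "The capital of France is Paris.",
        "time_start": "2023-12-07T15:00:00.000Z",
        "time_end": "2023-12-07T15:00:10.000Z",
        "history": [],  # empty for the first turn of a conversation
        "end_user": "<hashed-user-id>",
        "rag_sources": ["geography-kb"],
        "model": "gpt-4",
        "system_prompt": "You are a helpful assistant.",
        "tags": {"version": "v1.0.0"},
    },
    "anonymize": True,
}
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()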