The conversation id of the interaction. This parameter can be used to force interactions to be grouped under a specific conversation.
The user input in the interaction.
The LLM output in the interaction (the text shown to the user as assistant response).
The start time of the call to the LLM. You can approximate this to the time when the user sends the interaction. The accepted format is ISO 8601. Example: 2023-12-07T15:00:00.000Z
The end time of the call to the LLM. This is when the user receives the full answer from the model. The accepted format is ISO 8601. Example: 2023-12-07T15:00:00.000Z
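Timestamps in the accepted ISO 8601 format can be produced with the standard library; the helper name below is illustrative, not part of any SDK:

```python
from datetime import datetime, timezone

def iso8601_now() -> str:
    """Current UTC time in the accepted format, e.g. 2023-12-07T15:00:00.000Z."""
    now = datetime.now(timezone.utc)
    # Truncate microseconds to milliseconds and append the UTC "Z" suffix.
    return now.strftime("%Y-%m-%dT%H:%M:%S.") + f"{now.microsecond // 1000:03d}Z"

time_start = iso8601_now()
# ... call the LLM ...
time_end = iso8601_now()
```

Capturing `time_start` just before the LLM call and `time_end` when the full answer arrives matches the semantics described above.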
An id or username uniquely identifying the end-user. We recommend hashing their username or email address, in order to avoid sending us any identifying information.
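The recommended hashing might be done with a standard one-way digest; the function name and the normalization step below are a sketch, not a prescribed scheme:

```python
import hashlib

def anonymized_user_id(identifier: str) -> str:
    # Normalize, then hash the username or email address so that
    # no identifying information is sent.
    normalized = identifier.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

end_user = anonymized_user_id("jane.doe@example.com")  # hypothetical address
```

The same input always yields the same digest, so the hashed value still uniquely identifies the end-user across interactions.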
Tag user interactions by adding key-value pairs using this parameter. Each key represents the tag name, and the corresponding value is the tag value. For example, if you want to tag an interaction with the model version used to reply to user input, provide it as an argument for nebuly_tags, e.g. {"version" => "v1.0.0"}. You have the flexibility to define custom tags, making them available as potential filters on the Nebuly platform.
A list of feedback actions provided by the end-user during or after the interaction.
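The custom tags described above might be assembled as a plain mapping (Python shown; the "channel" key is purely illustrative, only "version" appears in the text):

```python
# Custom tags as key-value pairs: each key is a tag name,
# each value is the corresponding tag value.
nebuly_tags = {
    "version": "v1.0.0",  # model version used to reply to the user input
    "channel": "web",     # hypothetical extra tag, usable as a filter
}
```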
As optional extra parameters, it is possible to include:
- slug (string, required): the type of action. Accepted values include: thumbs_up, thumbs_down, copy_input, copy_output, paste, comment, regenerate, edit, rating.
- text (string, optional): textual feedback associated with the action (if applicable).
- value (integer, optional): only used for the rating action to capture a numerical score.
"feedback_actions": [
{
"slug": "thumbs_up",
"text": "Very helpful response!"
},
{
"slug": "comment",
"text": "Can you explain more about this?"
},
{
"slug": "rating",
"value": 4,
"text": "Pretty good!"
},
{
"slug": "regenerate"
}
]
The full trace of your LLM agent or chain.
The LLM used.
A list of messages from the conversation so far, following the same format used by OpenAI Chat Completion endpoints. Each message has two required fields:
- role: the role of who is sending the message. Possible values are: system, user, assistant, tool.
- content: the content of the message.
"messages": [
{
"role": "system",
"content": "This is a system prompt"
},
{
"role": "user",
"content": "What's the weather like in Turin?"
},
{
"role": "assistant",
"content": "The weather is currently rainy in Turin."
},
{
"role": "user",
"content": "What's the weather like in Rome?"
}
]
The LLM output message.
The number of input tokens.
The number of output tokens.
Boolean flag to anonymize your data.
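Putting the parameters together, an interaction payload might look like the sketch below. Only nebuly_tags, feedback_actions, messages, role, content, and slug are named in the text above; every other key is a hypothetical stand-in for the corresponding parameter, not a confirmed field name:

```python
# A sketch of a full interaction payload assembled from the parameters
# described above; keys other than nebuly_tags and feedback_actions are
# hypothetical stand-ins.
interaction = {
    "conversation_id": "conv-123",                # groups interactions per conversation
    "input": "What's the weather like in Rome?",  # the user input
    "output": "It is currently sunny in Rome.",   # the LLM output shown to the user
    "time_start": "2023-12-07T15:00:00.000Z",     # ISO 8601 start of the LLM call
    "time_end": "2023-12-07T15:00:02.000Z",       # ISO 8601 end of the LLM call
    "end_user": "hashed-user-id",                 # hashed username or email address
    "nebuly_tags": {"version": "v1.0.0"},
    "feedback_actions": [
        {"slug": "thumbs_up", "text": "Very helpful response!"},
    ],
    "anonymize": True,                            # boolean flag to anonymize your data
}
```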