Send conversation with interaction traces
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
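As a sketch, sending this header with Python's requests library might look like the following; the endpoint URL is a placeholder for illustration, not the documented path:

import requests

NEBULY_API_KEY = "YOUR_API_KEY"  # your auth token

# Placeholder URL for illustration only; use the actual endpoint for this route.
URL = "https://api.nebuly.com/v1/interactions"

headers = {
    "Authorization": f"Bearer {NEBULY_API_KEY}",
    "Content-Type": "application/json",
}

payload = {}  # the request body described in the fields below

response = requests.post(URL, headers=headers, json=payload)
response.raise_for_status()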
The conversation a user is having with an LLM-based assistant.
The interaction to send to the Nebuly platform.
An id or username uniquely identifying the end-user. We recommend hashing their username or email address to avoid sending us any identifying information.
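For example, a SHA-256 digest of the email address produces a stable, non-identifying id (a sketch using Python's standard library; any one-way hash works):

import hashlib

def anonymous_user_id(email: str) -> str:
    # One-way hash: identifies the user consistently without exposing the address.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

end_user = anonymous_user_id("jane.doe@example.com")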
The user input in the interaction.
The LLM output in the interaction (the text shown to the user as the assistant's response).
The start time of the call to the LLM. You can approximate this to the time when the user sends the interaction. The accepted format is ISO 8601. Example: 2023-12-07T15:00:00.000Z
The end time of the call to the LLM. This is when the user receives the full answer from the model. The accepted format is ISO 8601. Example: 2023-12-07T15:00:00.000Z
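Both timestamps can be produced with Python's standard library; the replace call swaps the +00:00 offset for the trailing Z shown in the examples:

from datetime import datetime, timezone

def iso8601_utc_now() -> str:
    # Millisecond precision, UTC, trailing "Z": e.g. 2023-12-07T15:00:00.000Z
    return (
        datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z")
    )

time_start = iso8601_utc_now()
# ... call the LLM and wait for the full answer ...
time_end = iso8601_utc_now()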
Tag user interactions by adding key-value pairs using this parameter. Each key is the tag name, and the corresponding value is the tag value.
For example, to tag an interaction with the model version used to reply to the user input, pass it in nebuly_tags, e.g. {"version" => "v1.0.0"}. You can define custom tags, which then become available as filters on the Nebuly platform.
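The Ruby-style example above corresponds to a plain dictionary in Python; the tag names below are illustrative:

# Illustrative tags; any custom key-value pairs work.
nebuly_tags = {
    "version": "v1.0.0",
    "environment": "production",
}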
The full trace of your LLM agent or chain. There are two kinds of steps that can compose a trace:
The intermediate call to an LLM.
The LLM used.
A list of messages from the conversation so far, following the same format used by OpenAI Chat Completion endpoints.
Each message has two required fields:
- role: the role of who is sending the message. Possible values are: system, user, assistant, tool.
- content: the content of the message.
The messages in the list should be ordered according to the conversation’s sequence, from the oldest message to the most recent. Example:
"messages": [
{
"role": "system",
"content: "This is a system prompt"
},
{
"role": "user",
"content": "user input 1"
},
{
"role": "assistant",
"content": "assistant response 1"
},
{
"role": "user",
"content": "user input"
}
]
The LLM output message.
The number of input tokens.
The number of output tokens.
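Putting the fields together, a request body could look like the sketch below. Apart from nebuly_tags, the key names are assumptions inferred from the descriptions on this page, not confirmed parameter names; check the SDK of your choice for the exact schema.

# Sketch of a full request body. Key names other than nebuly_tags are
# assumptions inferred from the field descriptions above, not confirmed
# API parameter names.
payload = {
    "interaction": {
        "end_user": "sha256-hashed-id",
        "input": "user input",
        "output": "assistant response 1",
        "time_start": "2023-12-07T15:00:00.000Z",
        "time_end": "2023-12-07T15:00:02.000Z",
        "nebuly_tags": {"version": "v1.0.0"},
    },
    "trace": [
        {
            # An intermediate call to an LLM.
            "model": "gpt-4",  # the LLM used (illustrative)
            "messages": [
                {"role": "system", "content": "This is a system prompt"},
                {"role": "user", "content": "user input 1"},
            ],
            "output": "assistant response 1",  # the LLM output message
            "input_tokens": 42,   # number of input tokens (illustrative)
            "output_tokens": 12,  # number of output tokens (illustrative)
        }
    ],
}
# Send with requests.post(URL, headers=headers, json=payload) as in the
# authentication sketch above.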