In this guide, we will show how to send users’ interactions with an OpenAI model to Nebuly’s platform.
Before getting started, you will need two API keys:
- Your OpenAI API key
- Your Nebuly authentication key
To get your Nebuly authentication key:
1. Head to ⚙️ Settings and navigate to "Project settings";
2. Create a new project (if you don't have one already) and give it a name (e.g. Internal chatbot);
3. Copy the Nebuly `default_key` that has been assigned to the project.
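Rather than hardcoding API keys, you may prefer to read them from environment variables. A minimal sketch, assuming the variables are named `OPENAI_API_KEY` and `NEBULY_API_KEY` (the names are our assumption, not a Nebuly requirement):

```ruby
# Minimal sketch: read the API keys from environment variables instead of
# hardcoding them. The variable names are assumptions; match your deployment.
OPENAI_API_KEY = ENV.fetch("OPENAI_API_KEY")
NEBULY_API_KEY = ENV.fetch("NEBULY_API_KEY")
```

The end-to-end example below uses placeholder strings instead, so you can see exactly where each key goes: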
```ruby
require 'net/http'
require 'uri'
require 'json'
require 'date'

OPENAI_API_KEY = "<OPENAI_API_KEY>"
NEBULY_API_KEY = "<NEBULY_API_KEY>"

system_prompt = "You are a helpful assistant"
messages = [
  { role: "system", content: system_prompt },
  { role: "user", content: "Who is the president of the United States of America?" },
]

# Record the start of the request, as an ISO 8601 timestamp in UTC.
time_start = Time.now.utc.to_datetime.iso8601

# 1. Call the OpenAI Chat Completions endpoint.
uri = URI.parse("https://api.openai.com/v1/chat/completions")
request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{OPENAI_API_KEY}"
request.body = JSON.dump({
  "model" => "gpt-3.5-turbo",
  "messages" => messages,
})

req_options = { use_ssl: uri.scheme == "https" }
response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
  http.request(request)
end

# Extract the reply and the model name from the completion.
completion = JSON.parse(response.body)
output = completion['choices'].first['message']['content']
model = completion['model']

# Record the end of the request.
time_end = Time.now.utc.to_datetime.iso8601

# 2. Send the interaction to Nebuly's event-ingestion endpoint.
uri = URI.parse("https://backend.nebuly.com/event-ingestion/api/v2/events/interactions")
request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{NEBULY_API_KEY}"
request.body = JSON.dump({
  "interaction" => {
    "messages" => messages,
    "output" => output,
    "time_start" => time_start,
    "time_end" => time_end,
    "end_user" => "<YOUR_USER_ID>",
    "model" => model,
    "tags" => { "version" => "v1.0.0" },
  },
  "anonymize" => true,
})

req_options = { use_ssl: uri.scheme == "https" }
response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
  http.request(request)
end

puts JSON.parse(response.body)
```
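The ingestion endpoint replies with a JSON body, which the example above simply prints. If you prefer to fail loudly on errors, you can check the HTTP status first; a minimal sketch (the raise-on-failure policy is our assumption, not part of Nebuly's API):

```ruby
# Minimal sketch: treat non-2xx responses from the ingestion endpoint as
# failures instead of silently printing them (our assumption; adapt as needed).
unless response.is_a?(Net::HTTPSuccess)
  raise "Nebuly ingestion failed: #{response.code} #{response.message}"
end
```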
Below you can find a detailed explanation of the parameters:
`messages` (array, required)
A list of messages from the conversation so far, following the same format used by OpenAI Chat Completion endpoints.
Each message has two required fields:
- `role`: the role of the message sender. Possible values are `system`, `user`, `assistant`, and `tool`.
- `content`: the content of the message.
The messages in the list should be ordered according to the conversation’s sequence, from the oldest message to the most recent.
Example:
```ruby
messages = [
  { "role" => "system", "content" => "This is a system prompt" },
  { "role" => "user", "content" => "What's the weather like in Turin?" },
  { "role" => "assistant", "content" => "The weather is currently rainy in Turin." },
  { "role" => "user", "content" => "What's the weather like in Rome?" }
]
```
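For multi-turn conversations, keep appending to the same array before logging the next interaction; a minimal sketch (the follow-up question is purely illustrative):

```ruby
# Minimal sketch: extend the conversation for the next turn by appending the
# assistant's last reply and the new user message (illustrative content).
messages << { "role" => "assistant", "content" => output }
messages << { "role" => "user", "content" => "And what about tomorrow?" }
```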
`output` (string, required)
The output produced by the LLM.
`time_start` (string, required)
The time when processing of the end-user request started, as an ISO 8601 string.
`time_end` (string, required)
The time when processing of the end-user request ended, as an ISO 8601 string.
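Both timestamps can be produced with the pattern used in the full example above; a minimal sketch:

```ruby
require 'date'

# Capture ISO 8601 timestamps in UTC immediately before and after the LLM
# call, e.g. "2024-05-01T12:00:00+00:00" (the date shown is illustrative).
time_start = Time.now.utc.to_datetime.iso8601
# ... call the LLM here ...
time_end = Time.now.utc.to_datetime.iso8601
```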
`end_user` (string, required)
An ID or username uniquely identifying the end-user. We recommend hashing the username or email address so that no identifying information is sent to us.
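A minimal sketch of that recommendation, using Ruby's standard library (the email address is purely illustrative):

```ruby
require 'digest'

# Minimal sketch: derive a stable, non-identifying user ID by hashing the
# user's email address with SHA-256 (illustrative address).
end_user = Digest::SHA256.hexdigest("jane.doe@example.com")
```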
`model` (string, required)
The LLM you are using. Note that this is required if you want to visualize the cost of your requests. Costs are currently supported only for OpenAI models; support for other providers is coming soon.
`tags` (hash)
Use this parameter to tag user interactions with key-value pairs: each key is the tag name, and the corresponding value is the tag value.
For example, to tag an interaction with the model version used to reply to the user, pass it in the `tags` field, e.g. `{ "version" => "v1.0.0" }`. You can define custom tags freely, and they become available as filters on the Nebuly platform.
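A minimal sketch with multiple custom tags (the keys other than "version" are our own illustrations, not predefined by Nebuly):

```ruby
# Minimal sketch: tags are arbitrary key-value pairs; "feature" and "locale"
# are illustrative names we made up, not fields predefined by Nebuly.
tags = {
  "version" => "v1.0.0",
  "feature" => "internal-chatbot",
  "locale"  => "en-US"
}
```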