In this guide, we will show how to send users’ interactions with an OpenAI model to Nebuly’s platform.

Before getting started, you will need two API keys:

  • Your OpenAI API key
  • Your Nebuly API key

The following Ruby script sends a chat completion request to OpenAI and then forwards the resulting interaction to Nebuly:
require 'net/http'
require 'uri'
require 'json'
require 'date'

OPENAI_API_KEY = "<OPENAI_API_KEY>"
NEBULY_API_KEY = "<NEBULY_API_KEY>"

system_prompt = "You are a helpful assistant"
messages = [
    { role: "system", content: system_prompt },
    { role: "user", content: "Who is the president of the United States of America?" },
]
time_start = Time.now.utc.to_datetime.iso8601

# Call the OpenAI Chat Completions API
uri = URI.parse("https://api.openai.com/v1/chat/completions")
request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{OPENAI_API_KEY}"
request.body = JSON.dump({
  "model" => "gpt-3.5-turbo",
  "messages" => messages
})

req_options = {
  use_ssl: uri.scheme == "https",
}

response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
  http.request(request)
end

completion = JSON.parse(response.body)
output = completion['choices'].first['message']['content']
model = completion['model']

time_end = Time.now.utc.to_datetime.iso8601

# Send the captured interaction to Nebuly
uri = URI.parse("https://backend.nebuly.com/event-ingestion/api/v1/events/interactions")
request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{NEBULY_API_KEY}"
request.body = JSON.dump({
  "interaction" => {
    "input" => messages.last[:content],
    "output" => output,
    "time_start" => time_start,
    "time_end" => time_end,
    "history" => [],
    "end_user" => "<YOUR_USER_ID>",
    "model" => model,
    "system_prompt" => system_prompt
  },
  "anonymize" => true,
})

req_options = {
  use_ssl: uri.scheme == "https",
}

response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
  http.request(request)
end

puts JSON.parse(response.body)
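Note that Net::HTTP does not raise on non-2xx responses, so a failed request would silently produce an error body. A small helper (a sketch, not part of the original example; the name `post_json` is ours) can wrap both calls and surface failures:

```ruby
require 'net/http'
require 'uri'
require 'json'

# POST a JSON body with a Bearer token and raise on non-2xx responses.
def post_json(url, body, api_key)
  uri = URI.parse(url)
  request = Net::HTTP::Post.new(uri)
  request.content_type = "application/json"
  request["Authorization"] = "Bearer #{api_key}"
  request.body = JSON.dump(body)

  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(request)
  end

  # Net::HTTPResponse#value raises unless the status is 2xx.
  response.value
  JSON.parse(response.body)
end
```

With this helper, both the OpenAI and Nebuly requests above collapse to a single `post_json(url, body, key)` call each.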

Below you can find a detailed explanation of the parameters:

input (string, required)
Input prompt from the user.

output (string, required)
Output from the LLM.

time_start (string, required)
When processing of the end-user request started.

time_end (string, required)
When processing of the end-user request ended.
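Both timestamps are plain ISO 8601 strings. As a quick sanity check, mirroring how the script above produces them:

```ruby
require 'date'

# Same expression used in the script: an ISO 8601 UTC timestamp.
time_start = Time.now.utc.to_datetime.iso8601
# e.g. "2024-05-01T12:34:56+00:00"
```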

end_user (string, required)
An ID or username that uniquely identifies the end user. We recommend hashing the username or email address so that no identifying information is sent to us.
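For example, hashing the email with Ruby's standard `digest` library (the address below is a placeholder) yields a stable, non-identifying user ID:

```ruby
require 'digest'

# Hypothetical end-user email; hash it so no identifying
# information leaves your systems. Normalizing first keeps
# the ID stable across requests.
email = "jane.doe@example.com"
end_user = Digest::SHA256.hexdigest(email.downcase.strip)

puts end_user  # 64-character hex digest
```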

model (string, required)
The LLM you are using. Note that this is required if you want to visualize the cost of your requests. Costs are currently supported only for OpenAI models; support for other providers is coming soon.

history (array)
History is an array of [input, output] pairs representing the entire conversation. It should be structured as in the example below:

history = [
    ["input_1", "output_1"],
    ["input_2", "output_2"]
]

In this format, input_* and output_* stand for the input and output of each interaction: input_1 and output_1 belong to the first interaction, input_2 and output_2 to the second, and so on.
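If you keep the conversation as an OpenAI-style messages array, the pairs can be derived from it. The snippet below is a sketch under the assumption that user and assistant messages strictly alternate (the conversation content is illustrative):

```ruby
# Hypothetical prior conversation in OpenAI message format.
messages = [
  { role: "user",      content: "What is Ruby?" },
  { role: "assistant", content: "Ruby is a programming language." },
  { role: "user",      content: "Who created it?" },
  { role: "assistant", content: "Yukihiro Matsumoto." }
]

# Group alternating user/assistant messages into [input, output] pairs.
history = messages
  .each_slice(2)
  .map { |user, assistant| [user[:content], assistant[:content]] }
```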

nebuly_tags (hash)
Tag user interactions by adding key-value pairs with this parameter. Each key is the tag name, and the corresponding value is the tag value.

For example, if you want to tag an interaction with the model version used to reply to the user input, pass it in nebuly_tags, e.g. {"version" => "v1.0.0"}. You can define custom tags freely; they become available as filters on the Nebuly platform.
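A sketch of a tagged request body, assuming nebuly_tags sits alongside the other interaction parameters as in the list above (the tag names and values here are illustrative):

```ruby
require 'json'

# Interaction payload with custom tags attached.
body = {
  "interaction" => {
    "input"       => "Who is the president of the United States of America?",
    "output"      => "...",
    # ...other required fields as in the script above...
    "nebuly_tags" => {
      "version" => "v1.0.0",   # hypothetical tag: app release
      "plan"    => "free-tier" # hypothetical tag: user plan
    }
  },
  "anonymize" => true
}

puts JSON.dump(body)
```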

You can find all the API details in the API documentation.