In this guide, we will show how to send users’ interactions with an OpenAI model to Nebuly’s platform.
Before getting started, you will need two API keys:
- Your OpenAI API key
- Your Nebuly API key
```ruby
require 'net/http'
require 'uri'
require 'json'
require 'date'

OPENAI_API_KEY = "<OPENAI_API_KEY>"
NEBULY_API_KEY = "<NEBULY_API_KEY>"

system_prompt = "You are a helpful assistant"
messages = [
  { role: "system", content: system_prompt },
  { role: "user", content: "Who is the president of the United States of America?" }
]

# Record when the end-user request started (ISO 8601, UTC).
time_start = Time.now.utc.to_datetime.iso8601

# 1. Call the OpenAI Chat Completions API.
uri = URI.parse("https://api.openai.com/v1/chat/completions")
request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{OPENAI_API_KEY}"
request.body = JSON.dump({
  "model" => "gpt-3.5-turbo",
  "messages" => messages
})

req_options = { use_ssl: uri.scheme == "https" }
response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
  http.request(request)
end

completion = JSON.parse(response.body)
output = completion['choices'].first['message']['content']
model = completion['model']

# Record when the end-user request finished.
time_end = Time.now.utc.to_datetime.iso8601

# 2. Send the interaction to Nebuly.
uri = URI.parse("https://backend.nebuly.com/event-ingestion/api/v1/events/interactions")
request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{NEBULY_API_KEY}"
request.body = JSON.dump({
  "interaction" => {
    "input" => messages.last[:content],
    "output" => output,
    "time_start" => time_start,
    "time_end" => time_end,
    "history" => [],
    "end_user" => "<YOUR_USER_ID>",
    "model" => model,
    "system_prompt" => system_prompt
  },
  "anonymize" => true
})

req_options = { use_ssl: uri.scheme == "https" }
response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
  http.request(request)
end

puts JSON.parse(response.body)
```
Below you can find a detailed explanation of the parameters:
- `input`: The input prompt from the user.
- `output`: The output from the LLM.
- `time_start`: When the end-user request process started.
- `time_end`: When the end-user request process ended.
- `end_user`: An ID or username uniquely identifying the end-user. We recommend hashing their username or email address, in order to avoid sending us any identifying information.
- `model`: The LLM model you are using. Please note that this is needed if you want to visualize the cost of your requests. Currently, costs are supported only for OpenAI models; support for other providers is coming soon.
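One simple way to follow the `end_user` hashing recommendation is to send a digest of the user's email instead of the email itself. A minimal sketch using Ruby's standard `Digest` library (the email address here is a made-up placeholder):

```ruby
require 'digest'

# Hypothetical end-user email; hash it so no identifying
# information leaves your system.
email = "jane.doe@example.com"
end_user_id = Digest::SHA256.hexdigest(email)

puts end_user_id  # a 64-character hex digest
```

The same input always yields the same digest, so interactions from one user still group together on the platform without exposing who the user is.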
- `history`: An array of tuples of input and output, representing the entire conversation. It should be structured as in the example below:

```ruby
history = [
  ["input_1", "output_1"],
  ["input_2", "output_2"]
]
```
In this format, `input_*` and `output_*` stand for the input and output of each interaction. So, `input_1` and `output_1` are the input and output of the first interaction, `input_2` and `output_2` are for the second interaction, and so on.
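If you already keep the conversation in an OpenAI-style messages array, you can derive these history pairs from it. A minimal sketch, assuming user and assistant turns alternate after the system prompt (the message contents are placeholders):

```ruby
# An OpenAI-style conversation; only the non-system turns
# contribute to the history pairs.
messages = [
  { role: "system", content: "You are a helpful assistant" },
  { role: "user", content: "input_1" },
  { role: "assistant", content: "output_1" },
  { role: "user", content: "input_2" },
  { role: "assistant", content: "output_2" }
]

# Drop the system prompt, then pair each user turn with the
# assistant turn that follows it.
turns = messages.reject { |m| m[:role] == "system" }
history = turns.each_slice(2).map do |user, assistant|
  [user[:content], assistant[:content]]
end

# history => [["input_1", "output_1"], ["input_2", "output_2"]]
```

In a real request you would pass only the turns that happened before the current interaction, keeping the latest user input in the `input` field.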
You can use this parameter to group your end-users into different group profiles. A common use case for this parameter is to group users by tier or organization.
You can find all the API details in the API documentation.