In this guide, we will show how to send users’ interactions with an OpenAI model to Nebuly’s platform.
Before getting started, you will need two API keys:
Your OpenAI API key
Your Nebuly authentication key
Head to ⚙️ settings and navigate to “project settings”;
Create a new project (if you don’t have one already) and give it a name (e.g. Internal chatbot);
Copy the Nebuly default_key that was assigned to the project.
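The code below reads the OpenAI key from an environment variable. A minimal setup sketch (the variable names are illustrative; replace the placeholders with your real keys):

```shell
# Illustrative setup: export both keys as environment variables.
# OPENAI_API_KEY is read by the OpenAI SDK; NEBULY_API_KEY is an assumed
# name for storing the Nebuly default_key copied from project settings.
export OPENAI_API_KEY="<your-openai-key>"
export NEBULY_API_KEY="<your-nebuly-default_key>"
```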
```typescript
import { NebulySdk } from "@nebuly-ai/nebuly-js-sdk";
import { OpenAI } from "openai";

const openai = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});

async function main() {
  const nebulySdk = new NebulySdk('NEBULY_API_KEY');

  const modelInputs: OpenAI.Chat.ChatCompletionCreateParams = {
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
  };

  // Record timestamps around the request so Nebuly can track latency.
  const startTime = new Date();
  const chatCompletion = await openai.chat.completions.create(modelInputs);
  const endTime = new Date();

  nebulySdk.sendOpenAIInteraction(
    modelInputs['messages'],
    chatCompletion.choices[0].message.content as string,
    modelInputs['model'] as string,
    startTime,
    endTime,
    '<YOUR_USER_ID>'
  );
}

main();
```
Below you can find a detailed explanation of the parameters:
An ID or username uniquely identifying the end user. We recommend hashing usernames or email addresses so that no personally identifying information is sent to Nebuly.
The LLM model you are using. Please note that this is required if you want to visualize the cost of your requests. Currently, costs are supported only for OpenAI models; support for other providers is coming soon.
History is an array of tuples of input and output, representing the entire conversation. It should be structured as the example below:
```typescript
history = [
  ["input_1", "output_1"],
  ["input_2", "output_2"],
]
```
In this format, input_* and output_* stand for the input and output of each interaction. So, input_1 and output_1 are the input and output of the first interaction, input_2 and output_2 are for the second interaction, and so on.
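In TypeScript, the history above can be sketched as an array of two-element string tuples (the contents below are illustrative, not part of the SDK):

```typescript
// Conversation history: each entry is one [input, output] pair.
const history: [string, string][] = [
  ["What is Nebuly?", "Nebuly is a user-analytics platform for LLM apps."],
  ["How do I send data?", "Use the SDK's send methods."],
];

// history[0] holds the first interaction's input and output.
const [firstInput, firstOutput] = history[0];
```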
Tag user interactions by adding key-value pairs using this parameter. Each key represents the tag name, and the corresponding value is the tag value.
For example, if you want to tag an interaction with the model version used to reply to the user, provide it as an argument for tags, e.g. {version: "v1.0.0"}. You can define custom tags, which then become available as filters on the Nebuly platform.
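A minimal sketch of a tags object, assuming tags are a flat record of string keys and values (the second key is an invented example):

```typescript
// Tags are key-value pairs; each key is the tag name, each value the tag value.
const tags: Record<string, string> = {
  version: "v1.0.0",      // model version, as in the docs example above
  environment: "staging", // illustrative custom tag
};
```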