It takes less than two minutes to start tracking interactions with Nebuly. In this guide, we provide some guidance on what to track and how to track it.
We recommend starting with a single interaction to get an immediate feel for the product.
Step 1: Create a project and API Key
To create your first Nebuly API key, follow these steps:
Go to ⚙️ settings and navigate to “project settings”;
Create a new project (if you don’t have one already) and give it a name (e.g. Internal chatbot);
Add an API Key and assign a name to the key.
Now you can start using the SDK and track your first interaction.
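If you haven't installed the SDK yet, you can install it from PyPI first (assuming the package is published under the name `nebuly`, matching the `import nebuly` used in the examples below):

```shell
# Install the Nebuly SDK (package name assumed to be `nebuly`)
pip install nebuly
```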
Step 2: Track your first interaction
Use your LLMs as usual; you only need to add:
Nebuly’s API key;
user_id within the model call.
Python example (OpenAI)

```python
import nebuly
import openai

nebuly.init(api_key="<nebuly_api_key>")
openai.api_key = "<your_openai_api_key>"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello, I need help with my computer"},
    ],
    user_id="test_user",
)
```
Python example (Anthropic)

```python
import nebuly
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

nebuly.init(api_key="<nebuly_api_key>")

anthropic = Anthropic(api_key="<anthropic_api_key>")
completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} How many toes do dogs have?{AI_PROMPT}",
    user_id="test_user",
)
```
Python example (Cohere)

```python
import nebuly
import cohere

nebuly.init(api_key="<nebuly_api_key>")

client = cohere.Client("<cohere_api_key>")
response = client.chat(
    message="Hello, I need help with my computer",
    chat_history=[],
    model="command",
    user_id="test_user",
)
```
Python example (Hugging Face)

```python
import nebuly
from huggingface_hub import InferenceClient

nebuly.init(api_key="<nebuly_api_key>")

# Define the model name
model_name = "mistralai/Mistral-7B-Instruct-v0.1"

# Define the input data
input_data = "Hello, how are you doing today?"

# Load the model
client = InferenceClient(model_name, token="<your_api_key>")
output_data = client.text_generation(
    input_data,
    user_id="test_user",
)
```
Python example (LangChain)

```python
import nebuly
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

nebuly.init(api_key="<nebuly_api_key>")

prompt = PromptTemplate.from_template(
    "You are a helpful assistant. You must answer the following question.\n{question}"
)
llm = OpenAI(
    openai_api_key="<openai_api_key>",
    model="gpt-3.5-turbo-instruct",
)
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run(
    question="What is the capital of France?",
    user_id="test_user",
)
```
For use cases that don't go through a third-party SDK, or when a high degree of customization is needed, we also expose a raw endpoint you can use to send interactions.
Below is an example of integrating an OpenAI call with the custom endpoint.