This functionality is currently not supported in async mode.

When the overall task is complex, your application might involve a chain of distinct models. The Nebuly SDK offers a convenient way to encapsulate all these models within a single workflow.

This is possible using the new_interaction context manager, which lets you group a long, intricate sequence of model calls into a single input-output interaction:

with new_interaction(user_id="test_user") as interaction:
    # Put all your LLM calls inside this context manager
    ...

Within the context manager, you can use three optional methods as needed:

  • set_input: explicitly defines the input of the model chain.
  • set_history: specifies the previous conversation history.
  • set_output: explicitly defines the output of the model chain.

All these methods are optional; if they are not used, the data is inferred automatically.
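For instance, here is a minimal sketch that sets all three fields explicitly; the history format mirrors the role/content messages used in the full example below:

from nebuly.contextmanager import new_interaction

with new_interaction(user_id="test_user") as interaction:
    # Explicitly set the input of the whole model chain
    interaction.set_input("What color is the sky?")
    # Provide previous conversation turns as role/content messages
    interaction.set_history([
        {"role": "user", "content": "Hi, I have a question about nature."},
        {"role": "assistant", "content": "Sure, ask away!"},
    ])
    # ... your LLM calls go here ...
    # Explicitly set the final output of the model chain
    interaction.set_output("The sky is blue.")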

Example

Let’s illustrate this with an example in which we use two models to assess the sentiment and topic of a LinkedIn post:

import nebuly
from nebuly.contextmanager import new_interaction

nebuly.init(api_key="<YOUR_NEBULY_API_KEY>")

import cohere
import openai

openai.api_key = "<YOUR_OPENAI_API_KEY>"
co = cohere.Client("<YOUR_COHERE_API_KEY>")

linkedin_post = "Sample linkedin post about the new product launch"
user_input = f"Detect the sentiment and the topic of this linkedin post: {linkedin_post}"

with new_interaction(user_id="<YOUR_USER_ID>") as interaction:

    interaction.set_input(user_input)
    
    # If needed, you can also set a history
    # interaction.set_history([{"role": "user/assistant", "content": "sample history content"}])

    # First model to detect the sentiment
    sentiment_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user", 
                "content": f"Detect the sentiment of this linkedin post: {linkedin_post}"
            }
        ]
    )
    sentiment = sentiment_completion.choices[0].message["content"]

    # Second model to detect the topic
    topic_completion = co.generate(
        prompt=(
            f'Detect the topic of this linkedin post: {linkedin_post}. '
            f'The sentiment computed on the post is: {sentiment}'
        ),
    )
    topic = topic_completion.generations[0].text

    interaction.set_output(f"Sentiment: {sentiment}, topic: {topic}")

In this example, the SDK collects the entire workflow into a single input-output interaction and sends it to the server. You can find a detailed explanation of the allowed nebuly additional keyword arguments below:

user_id
string
required

An ID or username that uniquely identifies the end user. We recommend hashing the username or email address to avoid sending us any identifying information.
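For example, a simple way to produce an anonymized ID in Python (a sketch; any stable hashing scheme works):

import hashlib

def anonymized_user_id(email: str) -> str:
    # Hash the email address so no identifying information is sent
    return hashlib.sha256(email.encode("utf-8")).hexdigest()

user_id = anonymized_user_id("jane.doe@example.com")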

nebuly_tags
dict

Tag user interactions by adding key-value pairs using this parameter. Each key represents the tag name, and the corresponding value is the tag value.

For example, if you want to tag an interaction with the model version used to reply to user input, provide it as an argument for nebuly_tags, e.g. {"version": "v1.0.0"}. You have the flexibility to define custom tags, making them available as potential filters on the Nebuly platform.
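For instance, a minimal sketch, assuming nebuly_tags is passed to new_interaction alongside user_id:

from nebuly.contextmanager import new_interaction

# Tag the whole interaction with the model version used to reply
with new_interaction(
    user_id="<YOUR_USER_ID>",
    nebuly_tags={"version": "v1.0.0"},
) as interaction:
    ...  # your LLM calls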

nebuly_api_key
string

You can use this field to temporarily override the Nebuly API key for the selected model call. The interaction will be stored in the project associated with the provided API key.
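For instance, a sketch of overriding the key for a single interaction, under the same assumption that new_interaction accepts these keyword arguments:

from nebuly.contextmanager import new_interaction

# Store this interaction in the project tied to the provided key
with new_interaction(
    user_id="<YOUR_USER_ID>",
    nebuly_api_key="<OTHER_PROJECT_API_KEY>",
) as interaction:
    ...  # your LLM calls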