AI21 API Starter Sample for Gen AI Beginners

If you frequently use AI assistants and are amazed by the near-magical abilities of LLMs, you can take it a step further by exploring how to use them programmatically.

Here's a beginner-friendly sample in Python that utilizes the AI21 API to generate responses from its Jamba Instruct model. AI21 Labs offers a Free Trial with $10 credits for 3 months. No credit card is needed.

You don’t even need a local development setup to get started. You can use Google Colaboratory (Colab), a hosted Jupyter Notebook service that requires no setup and offers free access to computing resources, including GPUs and TPUs.

At a high level, we'll follow these steps:

  1. Install AI21 Python SDK
  2. Obtain your API key by logging in to AI21 Studio and visiting the API keys page
  3. Initialize the AI21 client using your API key
  4. Send your query to the AI21 Jamba Instruct model
  5. Print the generated response

To install the ai21 package for working with AI21 Studio models, use pip (in Colab, the leading ! runs the command in a shell): !pip install ai21

Copy the following Python code and make sure to replace the placeholder API key with your own:

"""
AI21 API Example: Get an answer to a question
"""
# Import the pieces needed to work with the AI21 API:
# - AI21Client: the client object used to call the AI21 service
# - ChatMessage: represents one message in a chat conversation with the model
# - os: used to read and set environment variables
from ai21 import AI21Client
from ai21.models.chat import ChatMessage
import os
# Store the AI21 API key in an environment variable so it isn't
# hard-coded throughout the script. The client below is given the key
# explicitly, so it does not depend on any particular variable name.
os.environ["AI21_STUDIO_API_KEY"] = "XXX"  # Replace with your actual key
# Create an AI21Client instance, the main object used to communicate
# with the AI21 API. It authenticates with the key set above.
client = AI21Client(api_key=os.environ["AI21_STUDIO_API_KEY"])
# Constants for configuration, adjust as needed
MAX_TOKENS = 30
TEMPERATURE = 0.7
def get_answer(prompt, max_tokens=MAX_TOKENS, temperature=TEMPERATURE):
    """
    Ask the AI21 Jamba Instruct model a question and return its answer.

    Parameters:
    - prompt (str): The question to ask the AI.
    - max_tokens (int): Limits the length of the generated response.
    - temperature (float): Controls the randomness of the output
      (higher values mean more creative responses).

    Returns:
    - str: The answer from the AI.
    """
    # Send the prompt to the AI21 model:
    # - model: the AI21 model to use ("jamba-instruct" is an
    #   instruction-following model)
    # - messages: the conversation so far, as ChatMessage objects;
    #   here it is a single user message containing the prompt
    response = client.chat.completions.create(
        model="jamba-instruct",
        messages=[ChatMessage(role="user", content=prompt)],
        max_tokens=max_tokens,
        temperature=temperature,
    )
    # Extract and return the answer text from the response object.
    return response.choices[0].message.content
# Define a question, ask the model, and print the answer.
question = "Who was the first emperor of Rome?"
answer = get_answer(question)
print(answer)
To improve the code further, you can add error handling to manage potential issues, especially when reading the API key and making the API call.
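As a minimal sketch of that improvement, a wrapper like the hypothetical `get_answer_safe` below first checks that the key is set, then catches any exception raised by the API call. (The broad `except Exception` is deliberate for a beginner example; the SDK you install will have more specific exception classes worth catching instead.)

```python
import os


def get_answer_safe(prompt, max_tokens=30, temperature=0.7):
    """Like get_answer, but returns an error message instead of crashing."""
    # Guard 1: fail gracefully if the API key was never set.
    api_key = os.environ.get("AI21_STUDIO_API_KEY")  # None if unset, no KeyError
    if not api_key:
        return "Error: the AI21_STUDIO_API_KEY environment variable is not set."

    try:
        # The SDK is imported here so the key check above works even if
        # the ai21 package is missing or misconfigured.
        from ai21 import AI21Client
        from ai21.models.chat import ChatMessage

        client = AI21Client(api_key=api_key)
        response = client.chat.completions.create(
            model="jamba-instruct",
            messages=[ChatMessage(role="user", content=prompt)],
            max_tokens=max_tokens,
            temperature=temperature,
        )
        return response.choices[0].message.content
    except Exception as exc:  # Guard 2: network failures, invalid key, rate limits...
        return f"Error calling the AI21 API: {exc}"
```

With this version, a missing key or a failed request produces a readable message rather than a traceback, which is friendlier when you are experimenting in Colab.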
