
Getting Started with Claude 2 API – KDnuggets

Image by Author

Claude 2, the latest version of Anthropic’s conversational AI assistant, comes with significant improvements in performance, response length, and availability compared to its previous iteration. The model can be accessed through Anthropic’s API and a new public beta website at claude.ai.

Claude 2 is known for being easy to chat with, explaining its reasoning clearly, avoiding harmful outputs, and having a robust memory, along with enhanced reasoning capabilities.

Claude 2 showed a significant improvement in the multiple-choice section of the Bar exam, scoring 76.5% compared to Claude 1.3’s 73.0%. Moreover, Claude 2 outperforms over 90% of human test takers in the reading and writing sections of the GRE. In coding evaluations such as HumanEval, Claude 2 achieved 71.2% accuracy, a considerable increase from Claude 1.3’s 56.0%.

The Claude 2 API is being offered to Anthropic’s thousands of business customers at the same price as Claude 1.3. You can use it through the web API as well as the Python and TypeScript clients. This tutorial will guide you through the setup and usage of the Claude 2 Python API and help you learn about the various functionalities it offers.

Before we jump into accessing the API, we first need to apply for API Early Access. Fill out the form and wait for confirmation, and make sure you use your business email address (I used my @kdnuggets.com address).

After receiving the confirmation email, you will be provided with access to the console. From there, you can generate API keys by going to Account Settings.

Install the Anthropic Python client using pip. Make sure you are using the latest Python version.

pip install anthropic

 

Set up the Anthropic client using the API key.

import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["API_KEY"])

 

Instead of providing the API key when creating the client object, you can set the ANTHROPIC_API_KEY environment variable to your key, and the client will pick it up automatically.

Here is the basic sync version of generating responses using a prompt.

  1. Import all necessary modules.
  2. Initialize the client using the API key.
  3. To generate a response, provide a model name, the maximum number of tokens to sample, and a prompt.
  4. Your prompt usually contains HUMAN_PROMPT ('\n\nHuman:') and AI_PROMPT ('\n\nAssistant:').
  5. Print the response.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
import os

anthropic = Anthropic(
    api_key=os.environ["ANTHROPIC_API_KEY"],
)
completion = anthropic.completions.create(
    model="claude-2",
    max_tokens_to_sample=300,
    prompt=f"{HUMAN_PROMPT} How do I find soul mate?{AI_PROMPT}",
)
print(completion.completion)

 

Output:

As we can see, we got quite good results. I think it is even better than GPT-4. 

Here are some tips for finding your soulmate:

- Focus on becoming your best self. Pursue your passions and interests, grow as a person, and work on developing yourself into someone you admire. When you are living your best life, you will attract the right person for you.
- Put yourself out there and meet new people. Expand your social circles by trying new activities, joining clubs, volunteering, or using dating apps. The more people you meet, the more likely you are to encounter someone special.........

 

You can also call the Claude 2 API using asynchronous requests.

Synchronous APIs execute requests sequentially, blocking until a response is received before invoking the next call. Asynchronous APIs allow multiple concurrent requests without blocking, handling responses as they complete through callbacks, promises, or events, which gives them greater efficiency and scalability.

  1. Import AsyncAnthropic instead of Anthropic.
  2. Define a function with async syntax.
  3. Use await with each API call.
from anthropic import AsyncAnthropic, HUMAN_PROMPT, AI_PROMPT

anthropic = AsyncAnthropic()

async def main():
    completion = await anthropic.completions.create(
        model="claude-2",
        max_tokens_to_sample=300,
        prompt=f"{HUMAN_PROMPT} What percentage of nitrogen is present in the air?{AI_PROMPT}",
    )
    print(completion.completion)

await main()

 

Output: 

We got accurate results. 

About 78% of the air is nitrogen. Specifically:
- Nitrogen makes up approximately 78.09% of the air by volume.
- Oxygen makes up approximately 20.95% of air.
- The remaining 0.96% is made up of other gases like argon, carbon dioxide, neon, helium, and hydrogen.

 

Note: If you are using the async function in a Jupyter Notebook, use await main(); otherwise, use asyncio.run(main()).
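This distinction is independent of the Anthropic SDK. Here is a minimal stdlib-only sketch, with a stand-in coroutine in place of the real API call, showing the script-style pattern:

```python
import asyncio

async def main():
    # Stand-in for the Claude call; any coroutine behaves the same way.
    await asyncio.sleep(0)
    return "done"

# In a plain Python script, asyncio.run() starts an event loop and
# runs the coroutine to completion.
result = asyncio.run(main())
print(result)

# In Jupyter, an event loop is already running, so you would write
# `result = await main()` directly in a cell instead.
```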

Streaming has become increasingly popular for large language models. Instead of waiting for the complete response, you can start processing the output as soon as it becomes available. This approach helps reduce the perceived latency by returning the output of the Language Model token by token, as opposed to all at once.
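Conceptually, consuming a streamed response is just iterating a generator that yields chunks as they arrive. Here is a toy stdlib sketch (fake_stream is a made-up stand-in for the real stream, not part of the Anthropic SDK):

```python
def fake_stream(text):
    # Yields one "token" at a time, the way a streamed response delivers chunks.
    for token in text.split():
        yield token + " "

received = ""
for chunk in fake_stream("streaming reduces perceived latency"):
    received += chunk  # process each piece as soon as it arrives

print(received.strip())
```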

You just have to set the stream argument to True in the completions function. Claude 2 uses server-sent events (SSE) to support response streaming.

stream = anthropic.completions.create(
    prompt=f"{HUMAN_PROMPT} Could you please write a Python code to analyze a loan dataset?{AI_PROMPT}",
    max_tokens_to_sample=300,
    model="claude-2",
    stream=True,
)
for completion in stream:
    print(completion.completion, end="", flush=True)

 

Output:

 


Billing is the most important aspect of integrating the API into your application. It will help you plan your budget and charge your clients. All of the LLM APIs charge based on tokens. You can check the table below to understand the pricing structure.

 

Image from Anthropic
 

An easy way to count the number of tokens is by passing a prompt or response to the count_tokens function, which returns the token count.

client = Anthropic()
client.count_tokens('What percentage of nitrogen is present in the air?')
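Combining count_tokens with per-token prices gives a rough cost estimate per request. A sketch in plain Python; the rates below are placeholders only, so substitute the actual numbers from Anthropic’s pricing table:

```python
# Placeholder rates in dollars per token -- replace with the real values
# from Anthropic's pricing table.
PROMPT_RATE = 11.02 / 1_000_000
COMPLETION_RATE = 32.68 / 1_000_000

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Rough dollar cost of one request from its token counts."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

print(f"${estimate_cost(12, 300):.6f}")
```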

 

Apart from basic response generation, you can use the API features below to fully integrate Claude into your application.

  • Using types: The requests and responses use TypedDicts and Pydantic models respectively for type checking and autocomplete.
  • Handling errors: Errors raised include APIConnectionError for connection issues and APIStatusError for HTTP errors.
  • Default Headers: anthropic-version header is automatically added. This can be customized.
  • Logging: Logging can be enabled by setting the ANTHROPIC_LOG environment variable.
  • Configuring the HTTP client: The HTTPX client can be customized for proxies, transports, etc.
  • Managing HTTP resources: The client can be manually closed or used in a context manager.
  • Versioning: Follows Semantic Versioning conventions but some backwards-incompatible changes may be released as minor versions.
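As one example of the error-handling point above, here is a hedged sketch: the ask_claude wrapper is my own illustration, while APIConnectionError and APIStatusError are the exception classes exposed by the SDK.

```python
import anthropic
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

def ask_claude(client: Anthropic, question: str) -> str:
    """Send one prompt to Claude 2 and return the completion text,
    translating SDK errors into readable messages."""
    try:
        completion = client.completions.create(
            model="claude-2",
            max_tokens_to_sample=300,
            prompt=f"{HUMAN_PROMPT} {question}{AI_PROMPT}",
        )
        return completion.completion
    except anthropic.APIConnectionError as err:
        raise RuntimeError("Could not reach the API server") from err
    except anthropic.APIStatusError as err:
        raise RuntimeError(f"API returned status {err.status_code}") from err
```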

The Anthropic Python API provides easy access to Claude 2, a state-of-the-art conversational AI model, enabling developers to integrate Claude’s advanced natural language capabilities into their applications. The API offers synchronous and asynchronous calls, streaming, billing based on token usage, and other features to fully leverage Claude 2’s improvements over previous versions.

Claude 2 is my favorite model so far, and I think building applications with the Anthropic API will help you build a product that outshines others.

Let me know if you would like to read a more advanced tutorial. Perhaps I can create an application using Anthropic API.
 
 

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master’s degree in Technology Management and a bachelor’s degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.
