You are currently on a page documenting the use of OpenAI text completion models. The latest and most popular OpenAI models are chat completion models. Unless you are specifically using gpt-3.5-turbo-instruct, you are probably looking for the ChatOpenAI chat model page instead.

OpenAI
Overview
Integration details
Class | Package | Local | Serializable | JS support |
---|---|---|---|---|
OpenAI | langchain-openai | ❌ | beta | ✅ |
Setup
To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the langchain-openai integration package.
Credentials
Head to platform.openai.com to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable.

Installation
The LangChain OpenAI integration lives in the langchain-openai package:
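A typical install command (upgrading to the latest release):

```shell
pip install -U langchain-openai
```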
Instantiation
Now we can instantiate our model object and generate text completions.

Invocation
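A minimal sketch of instantiating the completion model and invoking it with a raw text prompt. The temperature and max_tokens values, the getpass prompt text, and the example prompt string are illustrative choices, not requirements:

```python
import getpass
import os

from langchain_openai import OpenAI

# Make sure the API key is available; prompt for it if the env var is unset.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

# gpt-3.5-turbo-instruct is the text completion model this page covers.
llm = OpenAI(
    model="gpt-3.5-turbo-instruct",
    temperature=0,
    max_tokens=256,
)

# invoke() takes a plain string prompt and returns the completion as a string.
completion = llm.invoke("Tell me a one-sentence fact about the moon.")
print(completion)
```

Unlike chat models, which take a list of messages, this class accepts a single string prompt and returns a single string completion.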
Chaining
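Completion models compose with prompt templates through the runnable `|` operator; a sketch, assuming OPENAI_API_KEY is already set (the translation prompt is only an illustrative example):

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI

prompt = PromptTemplate.from_template(
    "Translate the following sentence into German:\n\n{sentence}"
)
llm = OpenAI(model="gpt-3.5-turbo-instruct")

# The | operator pipes the formatted prompt string into the model.
chain = prompt | llm

result = chain.invoke({"sentence": "I love programming."})
print(result)
```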
Using a proxy
If you are behind an explicit proxy, you can specify the http_client to pass requests through the proxy.
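A sketch of routing requests through a proxy by passing a preconfigured httpx client. The proxy URL is a placeholder, and depending on your httpx version the keyword may be `proxies` rather than `proxy`:

```python
import httpx
from langchain_openai import OpenAI

# Placeholder proxy address; replace with your own proxy's URL.
http_client = httpx.Client(proxy="http://proxy.example.com:8080")

# The http_client is handed to the underlying OpenAI SDK client,
# so all API traffic goes through the proxy.
llm = OpenAI(
    model="gpt-3.5-turbo-instruct",
    http_client=http_client,
)
```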
API reference

For detailed documentation of all OpenAI LLM features and configurations, head to the API reference: python.langchain.com/api_reference/openai/llms/langchain_openai.llms.base.OpenAI.html