Class: ChatOpenAI
Wrapper around OpenAI large language models that use the Chat endpoint.
To use, you should have the openai package installed, with the OPENAI_API_KEY environment variable set.
Remarks
Any parameters that are valid to pass to openai.createChatCompletion can be passed through modelKwargs, even if they are not explicitly available on this class.
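For example, a field such as user, which openai.createChatCompletion accepts but this class does not expose directly, can be forwarded through modelKwargs. A minimal sketch (the user value is illustrative):

```typescript
import { ChatOpenAI } from "langchain/chat_models";

// Forward `user` (a valid createChatCompletion field) via modelKwargs,
// since it is not a named property on ChatOpenAI.
const chat = new ChatOpenAI({
  temperature: 0,
  modelKwargs: { user: "example-user-id" },
});
```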
Hierarchy
BaseChatModel
↳ ChatOpenAI
Implements
OpenAIInput
Constructors
constructor
• new ChatOpenAI(fields?, configuration?)
Parameters
Name | Type |
---|---|
fields? | Partial<OpenAIInput> & BaseLanguageModelParams & { cache?: boolean; concurrency?: number; openAIApiKey?: string } |
configuration? | ConfigurationParameters |
Overrides
Defined in
langchain/src/chat_models/openai.ts:165
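A sketch of typical construction; all fields shown are optional, and openAIApiKey may be omitted when the OPENAI_API_KEY environment variable is set. The second argument is passed through to the OpenAI SDK (the basePath value here is illustrative):

```typescript
import { ChatOpenAI } from "langchain/chat_models";

const chat = new ChatOpenAI(
  {
    modelName: "gpt-3.5-turbo",
    temperature: 0.7,
    maxTokens: 256,
    concurrency: 2, // limit parallel requests via the async caller
  },
  // ConfigurationParameters for the OpenAI SDK, e.g. a custom basePath.
  { basePath: "https://api.openai.com/v1" }
);
```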
Properties
callbackManager
• callbackManager: CallbackManager
Inherited from
Defined in
langchain/src/base_language/index.ts:34
caller
• Protected caller: AsyncCaller
The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.
Inherited from
Defined in
langchain/src/base_language/index.ts:40
frequencyPenalty
• frequencyPenalty: number = 0
Penalizes repeated tokens according to frequency
Implementation of
Defined in
langchain/src/chat_models/openai.ts:141
logitBias
• Optional logitBias: Record<string, number>
Dictionary used to adjust the probability of specific tokens being generated
Implementation of
Defined in
langchain/src/chat_models/openai.ts:147
maxTokens
• Optional maxTokens: number
Maximum number of tokens to generate in the completion. If not specified, defaults to the maximum number of tokens allowed by the model.
Implementation of
Defined in
langchain/src/chat_models/openai.ts:159
modelKwargs
• Optional modelKwargs: Kwargs
Holds any additional parameters that are valid to pass to openai.createChatCompletion that are not explicitly specified on this class.
Implementation of
Defined in
langchain/src/chat_models/openai.ts:151
modelName
• modelName: string = "gpt-3.5-turbo"
Model name to use
Implementation of
Defined in
langchain/src/chat_models/openai.ts:149
n
• n: number = 1
Number of chat completions to generate for each prompt
Implementation of
Defined in
langchain/src/chat_models/openai.ts:145
presencePenalty
• presencePenalty: number = 0
Penalizes tokens that have already appeared, regardless of frequency
Implementation of
Defined in
langchain/src/chat_models/openai.ts:143
stop
• Optional stop: string[]
List of stop words to use when generating
Implementation of
Defined in
langchain/src/chat_models/openai.ts:153
streaming
• streaming: boolean = false
Whether to stream the results. Enabling streaming disables tokenUsage reporting
Implementation of
Defined in
langchain/src/chat_models/openai.ts:157
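When streaming is enabled, tokens can be consumed as they arrive through the callback manager. A sketch, assuming the CallbackManager.fromHandlers helper exported from langchain/callbacks:

```typescript
import { ChatOpenAI } from "langchain/chat_models";
import { CallbackManager } from "langchain/callbacks";

const chat = new ChatOpenAI({
  streaming: true, // note: disables tokenUsage reporting
  callbackManager: CallbackManager.fromHandlers({
    async handleLLMNewToken(token: string) {
      process.stdout.write(token); // print each token as it streams in
    },
  }),
});
```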
temperature
• temperature: number = 1
Sampling temperature to use, between 0 and 2, defaults to 1
Implementation of
Defined in
langchain/src/chat_models/openai.ts:137
timeout
• Optional timeout: number
Timeout (in milliseconds) to use when making requests to OpenAI.
Implementation of
Defined in
langchain/src/chat_models/openai.ts:155
topP
• topP: number = 1
Total probability mass of tokens to consider at each step, between 0 and 1, defaults to 1
Implementation of
Defined in
langchain/src/chat_models/openai.ts:139
verbose
• verbose: boolean
Whether to print out response text.
Inherited from
Defined in
langchain/src/base_language/index.ts:32
Methods
_combineLLMOutput
▸ _combineLLMOutput(...llmOutputs): OpenAILLMOutput
Parameters
Name | Type |
---|---|
...llmOutputs | OpenAILLMOutput[] |
Returns
OpenAILLMOutput
Overrides
BaseChatModel._combineLLMOutput
Defined in
langchain/src/chat_models/openai.ts:452
_generate
▸ _generate(messages, stop?): Promise<ChatResult>
Call out to OpenAI's chat completion endpoint with the provided messages.
Example
```typescript
import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage } from "langchain/schema";

const chat = new ChatOpenAI();
const response = await chat.generate([[new HumanChatMessage("Tell me a joke.")]]);
```
Parameters
Name | Type | Description |
---|---|---|
messages | BaseChatMessage[] | The messages to pass into the model. |
stop? | string[] | Optional list of stop words to use when generating. |
Returns
Promise<ChatResult>
The full LLM output.
Overrides
Defined in
langchain/src/chat_models/openai.ts:258
_identifyingParams
▸ _identifyingParams(): Object
Get the identifying parameters of the LLM.
Returns
Object
Name | Type |
---|---|
model_name | string |
Overrides
BaseChatModel._identifyingParams
Defined in
langchain/src/chat_models/openai.ts:228
_llmType
▸ _llmType(): string
Returns
string
Overrides
Defined in
langchain/src/chat_models/openai.ts:448
_modelType
▸ _modelType(): string
Returns
string
Inherited from
Defined in
langchain/src/chat_models/base.ts:76
call
▸ call(messages, stop?): Promise<BaseChatMessage>
Parameters
Name | Type |
---|---|
messages | BaseChatMessage[] |
stop? | string[] |
Returns
Promise<BaseChatMessage>
Inherited from
Defined in
langchain/src/chat_models/base.ts:97
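call is the convenience entry point for a single conversation: it takes one list of messages and resolves to the model's reply message. A minimal sketch:

```typescript
import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage, SystemChatMessage } from "langchain/schema";

const chat = new ChatOpenAI();
const reply = await chat.call([
  new SystemChatMessage("You are a terse assistant."),
  new HumanChatMessage("What is the capital of France?"),
]);
console.log(reply.text);
```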
callPrompt
▸ callPrompt(promptValue, stop?): Promise<BaseChatMessage>
Parameters
Name | Type |
---|---|
promptValue | BasePromptValue |
stop? | string[] |
Returns
Promise<BaseChatMessage>
Inherited from
Defined in
langchain/src/chat_models/base.ts:106
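callPrompt accepts a BasePromptValue instead of raw messages, so it pairs naturally with chat prompt templates. A sketch, assuming the prompt helpers exported from langchain/prompts:

```typescript
import { ChatOpenAI } from "langchain/chat_models";
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";

const prompt = ChatPromptTemplate.fromPromptMessages([
  HumanMessagePromptTemplate.fromTemplate("Summarize in one line: {text}"),
]);
const chat = new ChatOpenAI();
const reply = await chat.callPrompt(
  await prompt.formatPromptValue({ text: "LangChain wraps chat model APIs." })
);
console.log(reply.text);
```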
generate
▸ generate(messages, stop?): Promise<LLMResult>
Parameters
Name | Type |
---|---|
messages | BaseChatMessage[][] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/chat_models/base.ts:39
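Unlike call, generate takes a batch: an array of conversations, each itself an array of messages, and returns an LLMResult with one generation list per conversation. A minimal sketch:

```typescript
import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage } from "langchain/schema";

const chat = new ChatOpenAI();
const result = await chat.generate([
  [new HumanChatMessage("Translate 'hello' to French.")],
  [new HumanChatMessage("Translate 'goodbye' to French.")],
]);
console.log(result.generations.length); // 2, one entry per conversation
console.log(result.llmOutput); // includes tokenUsage when not streaming
```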
generatePrompt
▸ generatePrompt(promptValues, stop?): Promise<LLMResult>
Parameters
Name | Type |
---|---|
promptValues | BasePromptValue[] |
stop? | string[] |
Returns
Promise<LLMResult>
Inherited from
Defined in
langchain/src/chat_models/base.ts:82
getNumTokens
▸ getNumTokens(text): Promise<number>
Parameters
Name | Type |
---|---|
text | string |
Returns
Promise<number>
Inherited from
Defined in
langchain/src/base_language/index.ts:62
getNumTokensFromMessages
▸ getNumTokensFromMessages(messages): Promise<{ countPerMessage: number[]; totalCount: number }>
Parameters
Name | Type |
---|---|
messages | BaseChatMessage[] |
Returns
Promise<{ countPerMessage: number[]; totalCount: number }>
Defined in
langchain/src/chat_models/openai.ts:394
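This is useful for checking a conversation against the model's context window before sending it. A minimal sketch:

```typescript
import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage, SystemChatMessage } from "langchain/schema";

const chat = new ChatOpenAI();
const { totalCount, countPerMessage } = await chat.getNumTokensFromMessages([
  new SystemChatMessage("You are a helpful assistant."),
  new HumanChatMessage("How many tokens is this conversation?"),
]);
console.log(totalCount, countPerMessage);
```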
identifyingParams
▸ identifyingParams(): Object
Get the identifying parameters for the model
Returns
Object
Name | Type |
---|---|
model_name | string |
Defined in
langchain/src/chat_models/openai.ts:239
invocationParams
▸ invocationParams(): Omit<CreateChatCompletionRequest, "messages"> & Kwargs
Get the parameters used to invoke the model.
Returns
Omit<CreateChatCompletionRequest, "messages"> & Kwargs
Defined in
langchain/src/chat_models/openai.ts:212
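A sketch of inspecting the request parameters before any call is made; the logged shape shown in the comment is illustrative:

```typescript
import { ChatOpenAI } from "langchain/chat_models";

const chat = new ChatOpenAI({ temperature: 0.5, n: 2 });
// Everything sent to createChatCompletion except the messages themselves.
console.log(chat.invocationParams());
// e.g. { model: "gpt-3.5-turbo", temperature: 0.5, n: 2, ... }
```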
serialize
▸ serialize(): SerializedLLM
Return a json-like object representing this LLM.
Returns
SerializedLLM
Inherited from
Defined in
langchain/src/base_language/index.ts:92
deserialize
▸ Static deserialize(data): Promise<BaseLanguageModel>
Load an LLM from a json-like object describing it.
Parameters
Name | Type |
---|---|
data | SerializedLLM |
Returns
Promise<BaseLanguageModel>
Inherited from
Defined in
langchain/src/base_language/index.ts:103
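Together, serialize and deserialize allow a configured model to be stored and restored. A minimal sketch (the API key still has to be available, e.g. via OPENAI_API_KEY, when the model is restored):

```typescript
import { ChatOpenAI } from "langchain/chat_models";

const chat = new ChatOpenAI({ temperature: 0 });
const data = chat.serialize();
// ... persist `data` as JSON, then later:
const restored = await ChatOpenAI.deserialize(data);
```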