
Class: ChatOpenAI

chat_models.ChatOpenAI

Wrapper around OpenAI large language models that use the Chat endpoint.

To use this class, you should have the openai package installed and the OPENAI_API_KEY environment variable set.

Remarks

Any parameters that are valid to be passed to openai.createChatCompletion can be passed through modelKwargs, even if not explicitly available on this class.
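As a rough sketch of the pass-through behavior described above (the merge logic and the use of the `user` request parameter here are illustrative, not the class's actual implementation):

```typescript
// Sketch: extra parameters supplied via modelKwargs are merged into the
// request body alongside the explicitly supported fields. Explicit fields
// take precedence over duplicates in modelKwargs.
interface RequestParams {
  model: string;
  temperature: number;
  [key: string]: unknown;
}

function buildRequest(
  explicit: { model: string; temperature: number },
  modelKwargs: Record<string, unknown> = {}
): RequestParams {
  return { ...modelKwargs, ...explicit };
}

const params = buildRequest(
  { model: "gpt-3.5-turbo", temperature: 0.7 },
  { user: "example-user-id" } // passed through despite not being a class field
);
// params now carries `user` alongside model and temperature
```
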

Hierarchy

Implements

Constructors

constructor

new ChatOpenAI(fields?, configuration?)

Parameters

Name | Type
fields? | Partial&lt;OpenAIInput&gt; & BaseLanguageModelParams & { cache?: boolean; concurrency?: number; openAIApiKey?: string }
configuration? | ConfigurationParameters

Overrides

BaseChatModel.constructor

Defined in

langchain/src/chat_models/openai.ts:165

Properties

callbackManager

callbackManager: CallbackManager

Inherited from

BaseChatModel.callbackManager

Defined in

langchain/src/base_language/index.ts:34


caller

Protected caller: AsyncCaller

The async caller should be used by subclasses to make any async calls, which will thus benefit from the concurrency and retry logic.

Inherited from

BaseChatModel.caller

Defined in

langchain/src/base_language/index.ts:40


frequencyPenalty

frequencyPenalty: number = 0

Penalizes repeated tokens according to frequency

Implementation of

OpenAIInput.frequencyPenalty

Defined in

langchain/src/chat_models/openai.ts:141


logitBias

Optional logitBias: Record<string, number>

Dictionary used to adjust the probability of specific tokens being generated

Implementation of

OpenAIInput.logitBias

Defined in

langchain/src/chat_models/openai.ts:147
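Conceptually, a logit bias is added to a token's logit before sampling, making that token more or less likely; a large negative value effectively bans it. A self-contained sketch of that idea (the token ids and update rule are illustrative, not OpenAI's internals):

```typescript
// Sketch: apply a logitBias-style map of token id -> bias delta to a set
// of raw logits before sampling would take place.
function applyLogitBias(
  logits: Map<number, number>,
  bias: Record<string, number>
): Map<number, number> {
  const out = new Map(logits);
  for (const [tokenId, delta] of Object.entries(bias)) {
    const id = Number(tokenId);
    if (out.has(id)) {
      out.set(id, (out.get(id) as number) + delta);
    }
  }
  return out;
}

const biased = applyLogitBias(
  new Map([
    [50256, 0], // hypothetical token id
    [1234, 1.5],
  ]),
  { "50256": -100 } // -100 effectively bans the token
);
// token 50256 is now strongly suppressed; token 1234 is untouched
```
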


maxTokens

Optional maxTokens: number

Maximum number of tokens to generate in the completion. If not specified, defaults to the maximum number of tokens allowed by the model.

Implementation of

OpenAIInput.maxTokens

Defined in

langchain/src/chat_models/openai.ts:159


modelKwargs

Optional modelKwargs: Kwargs

Holds any additional parameters that are valid to pass to openai.createChatCompletion but are not explicitly specified on this class.

Implementation of

OpenAIInput.modelKwargs

Defined in

langchain/src/chat_models/openai.ts:151


modelName

modelName: string = "gpt-3.5-turbo"

Model name to use

Implementation of

OpenAIInput.modelName

Defined in

langchain/src/chat_models/openai.ts:149


n

n: number = 1

Number of chat completions to generate for each prompt

Implementation of

OpenAIInput.n

Defined in

langchain/src/chat_models/openai.ts:145


presencePenalty

presencePenalty: number = 0

Penalizes tokens that have already appeared at least once, regardless of frequency

Implementation of

OpenAIInput.presencePenalty

Defined in

langchain/src/chat_models/openai.ts:143


stop

Optional stop: string[]

List of stop sequences to use when generating

Implementation of

OpenAIInput.stop

Defined in

langchain/src/chat_models/openai.ts:153


streaming

streaming: boolean = false

Whether to stream the results. Enabling streaming disables tokenUsage reporting in the output.

Implementation of

OpenAIInput.streaming

Defined in

langchain/src/chat_models/openai.ts:157
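Streaming delivers tokens incrementally through a callback rather than as one final response, which is why an aggregate tokenUsage cannot be reported. A minimal sketch of that callback pattern (the handler signature and delivery loop are stand-ins, not the actual client code):

```typescript
// Sketch: tokens arrive one at a time via a handler; the caller can both
// react to each token and accumulate the full response.
type TokenHandler = (token: string) => void;

function streamTokens(tokens: string[], onToken: TokenHandler): string {
  let full = "";
  for (const token of tokens) {
    onToken(token); // delivered incrementally, like a new-token callback
    full += token;
  }
  return full;
}

const received: string[] = [];
const result = streamTokens(["Hel", "lo", "!"], (t) => received.push(t));
// `received` holds each chunk; `result` is the reassembled text
```
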


temperature

temperature: number = 1

Sampling temperature to use, between 0 and 2, defaults to 1

Implementation of

OpenAIInput.temperature

Defined in

langchain/src/chat_models/openai.ts:137


timeout

Optional timeout: number

Timeout to use when making requests to OpenAI.

Implementation of

OpenAIInput.timeout

Defined in

langchain/src/chat_models/openai.ts:155


topP

topP: number = 1

Total probability mass of tokens to consider at each step, between 0 and 1, defaults to 1

Implementation of

OpenAIInput.topP

Defined in

langchain/src/chat_models/openai.ts:139


verbose

verbose: boolean

Whether to print out response text.

Inherited from

BaseChatModel.verbose

Defined in

langchain/src/base_language/index.ts:32

Methods

_combineLLMOutput

_combineLLMOutput(...llmOutputs): OpenAILLMOutput

Parameters

Name | Type
...llmOutputs | OpenAILLMOutput[]

Returns

OpenAILLMOutput

Overrides

BaseChatModel._combineLLMOutput

Defined in

langchain/src/chat_models/openai.ts:452


_generate

_generate(messages, stop?): Promise<ChatResult>

Calls out to OpenAI's chat completion endpoint with the given messages.

Example

import { ChatOpenAI } from "langchain/chat_models";
import { HumanChatMessage } from "langchain/schema";

const chat = new ChatOpenAI();
const response = await chat.call([new HumanChatMessage("Tell me a joke.")]);

Parameters

Name | Type | Description
messages | BaseChatMessage[] | The messages to pass into the model.
stop? | string[] | Optional list of stop words to use when generating.

Returns

Promise<ChatResult>

The full LLM output.

Overrides

BaseChatModel._generate

Defined in

langchain/src/chat_models/openai.ts:258


_identifyingParams

_identifyingParams(): Object

Get the identifying parameters of the LLM.

Returns

Object

Name | Type
model_name | string

Overrides

BaseChatModel._identifyingParams

Defined in

langchain/src/chat_models/openai.ts:228


_llmType

_llmType(): string

Returns

string

Overrides

BaseChatModel._llmType

Defined in

langchain/src/chat_models/openai.ts:448


_modelType

_modelType(): string

Returns

string

Inherited from

BaseChatModel._modelType

Defined in

langchain/src/chat_models/base.ts:76


call

call(messages, stop?): Promise<BaseChatMessage>

Parameters

Name | Type
messages | BaseChatMessage[]
stop? | string[]

Returns

Promise<BaseChatMessage>

Inherited from

BaseChatModel.call

Defined in

langchain/src/chat_models/base.ts:97


callPrompt

callPrompt(promptValue, stop?): Promise<BaseChatMessage>

Parameters

Name | Type
promptValue | BasePromptValue
stop? | string[]

Returns

Promise<BaseChatMessage>

Inherited from

BaseChatModel.callPrompt

Defined in

langchain/src/chat_models/base.ts:106


generate

generate(messages, stop?): Promise<LLMResult>

Parameters

Name | Type
messages | BaseChatMessage[][]
stop? | string[]

Returns

Promise<LLMResult>

Inherited from

BaseChatModel.generate

Defined in

langchain/src/chat_models/base.ts:39
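generate accepts a batch of conversations (BaseChatMessage[][]) and produces one generation per conversation, whereas call handles a single conversation. A self-contained sketch of that relationship, using an echo function as a stand-in for the actual API round-trip:

```typescript
// Sketch: `generate` maps a per-conversation call over a batch of
// conversations. The echo model below stands in for the real endpoint.
type Message = { role: string; text: string };

function callModel(messages: Message[]): Message {
  // Stand-in: echoes the last message back as an AI message.
  const last = messages[messages.length - 1];
  return { role: "ai", text: `echo: ${last.text}` };
}

function generateBatch(batches: Message[][]): Message[] {
  return batches.map(callModel); // one generation per conversation
}

const out = generateBatch([
  [{ role: "human", text: "Hi" }],
  [{ role: "human", text: "Tell me a joke" }],
]);
// `out` holds one response per input conversation
```
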


generatePrompt

generatePrompt(promptValues, stop?): Promise<LLMResult>

Parameters

Name | Type
promptValues | BasePromptValue[]
stop? | string[]

Returns

Promise<LLMResult>

Inherited from

BaseChatModel.generatePrompt

Defined in

langchain/src/chat_models/base.ts:82


getNumTokens

getNumTokens(text): Promise<number>

Parameters

Name | Type
text | string

Returns

Promise<number>

Inherited from

BaseChatModel.getNumTokens

Defined in

langchain/src/base_language/index.ts:62


getNumTokensFromMessages

getNumTokensFromMessages(messages): Promise<{ countPerMessage: number[] ; totalCount: number }>

Parameters

Name | Type
messages | BaseChatMessage[]

Returns

Promise<{ countPerMessage: number[] ; totalCount: number }>

Defined in

langchain/src/chat_models/openai.ts:394
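The return value pairs a per-message token count with a total. A sketch of that shape, using a whitespace tokenizer and an assumed fixed per-message overhead as stand-ins for the model's real tokenizer and framing costs:

```typescript
// Sketch of the { countPerMessage, totalCount } shape: each message costs
// its content tokens plus a fixed framing overhead. The constant and the
// tokenizer below are illustrative assumptions, not the real accounting.
const TOKENS_PER_MESSAGE = 4; // assumed per-message framing cost

function countTokens(text: string): number {
  // Stand-in tokenizer: whitespace splitting instead of the real BPE.
  return text.split(/\s+/).filter(Boolean).length;
}

function numTokensFromMessages(messages: { text: string }[]) {
  const countPerMessage = messages.map(
    (m) => countTokens(m.text) + TOKENS_PER_MESSAGE
  );
  const totalCount = countPerMessage.reduce((a, b) => a + b, 0);
  return { countPerMessage, totalCount };
}

const { countPerMessage, totalCount } = numTokensFromMessages([
  { text: "Hello there" },    // 2 content tokens + 4 overhead
  { text: "Tell me a joke" }, // 4 content tokens + 4 overhead
]);
```
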


identifyingParams

identifyingParams(): Object

Get the identifying parameters for the model

Returns

Object

Name | Type
model_name | string

Defined in

langchain/src/chat_models/openai.ts:239


invocationParams

invocationParams(): Omit<CreateChatCompletionRequest, "messages"> & Kwargs

Get the parameters used to invoke the model

Returns

Omit<CreateChatCompletionRequest, "messages"> & Kwargs

Defined in

langchain/src/chat_models/openai.ts:212


serialize

serialize(): SerializedLLM

Returns a JSON-like object representing this LLM.

Returns

SerializedLLM

Inherited from

BaseChatModel.serialize

Defined in

langchain/src/base_language/index.ts:92


deserialize

Static deserialize(data): Promise<BaseLanguageModel>

Load an LLM from a JSON-like object describing it.

Parameters

Name | Type
data | SerializedLLM

Returns

Promise<BaseLanguageModel>

Inherited from

BaseChatModel.deserialize

Defined in

langchain/src/base_language/index.ts:103