Interface: OpenAIInput

Input to the OpenAI class.

Hierarchy

ModelParams

↳ OpenAIInput

Implemented by

OpenAI
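
For orientation, a minimal construction sketch (hedged: the import path differs across LangChain versions, and "text-davinci-003" is only an example model name):

```typescript
import { OpenAI } from "langchain/llms";

// Every field below is a property of OpenAIInput documented on this page;
// the values are illustrative, not recommendations.
const model = new OpenAI({
  modelName: "text-davinci-003", // model name to use
  temperature: 0.7,              // sampling temperature
  maxTokens: 256,                // -1 = as many as the context window allows
  n: 1,                          // completions generated per prompt
  bestOf: 1,                     // server-side candidates to choose from
  batchSize: 20,                 // documents per underlying API request
});

const res = await model.call("Say hello in French.");
console.log(res);
```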

Properties

batchSize

batchSize: number

Batch size to use when passing multiple documents to a single generate call.

Defined in

langchain/src/llms/openai.ts:65


bestOf

bestOf: number

Generates bestOf completions server-side and returns the "best" (the one with the highest log probability per token).

Inherited from

ModelParams.bestOf

Defined in

langchain/src/llms/openai.ts:41


frequencyPenalty

frequencyPenalty: number

Penalizes repeated tokens according to frequency.

Inherited from

ModelParams.frequencyPenalty

Defined in

langchain/src/llms/openai.ts:32


logitBias

Optional logitBias: Record<string, number>

Dictionary used to adjust the probability of specific tokens being generated.

Inherited from

ModelParams.logitBias

Defined in

langchain/src/llms/openai.ts:44
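
As a sketch of how logitBias is shaped: keys are token IDs of the model's tokenizer (as strings), and values from -100 to 100 are added to the logits. The ID below is the GPT end-of-text token; any other ID would need a tokenizer lookup:

```typescript
const biased = new OpenAI({
  // Keys are tokenizer token IDs as strings; values range from -100
  // (effectively ban the token) to 100 (effectively force it).
  logitBias: {
    "50256": -100, // strongly discourage the <|endoftext|> token
  },
});
```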


maxTokens

maxTokens: number

Maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximum context size.

Inherited from

ModelParams.maxTokens

Defined in

langchain/src/llms/openai.ts:26


modelKwargs

Optional modelKwargs: Kwargs

Holds any additional parameters that are valid to pass to openai.createCompletion and are not explicitly specified on this interface.

Defined in

langchain/src/llms/openai.ts:62
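
A sketch of forwarding a createCompletion parameter that this interface does not expose directly; the user field is one the completions API accepts for end-user identification:

```typescript
const withKwargs = new OpenAI({
  modelName: "text-davinci-003",
  // Merged into the payload sent to openai.createCompletion,
  // alongside the explicitly typed fields on this interface.
  modelKwargs: {
    user: "user-1234", // end-user identifier for abuse monitoring
  },
});
```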


modelName

modelName: string

Model name to use.

Defined in

langchain/src/llms/openai.ts:56


n

n: number

Number of completions to generate for each prompt.

Inherited from

ModelParams.n

Defined in

langchain/src/llms/openai.ts:38


presencePenalty

presencePenalty: number

Penalizes tokens that have already appeared at all, regardless of frequency.

Inherited from

ModelParams.presencePenalty

Defined in

langchain/src/llms/openai.ts:35
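
The two penalties differ in how they accumulate: frequencyPenalty grows with each repetition of a token, while presencePenalty is a flat penalty applied once a token has appeared at all. A sketch using both:

```typescript
const lessRepetitive = new OpenAI({
  frequencyPenalty: 0.5, // scales with how often each token has appeared
  presencePenalty: 0.5,  // flat penalty once a token has appeared at all
});
```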


stop

Optional stop: string[]

List of stop words to use when generating.

Defined in

langchain/src/llms/openai.ts:68
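
Stop sequences end the completion as soon as one is generated; the matched sequence itself is not included in the output. A sketch:

```typescript
const qa = new OpenAI({
  temperature: 0,
  // Generation halts at the first occurrence of either sequence.
  stop: ["\n\n", "Q:"],
});
```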


streaming

streaming: boolean

Whether to stream the results. Enabling streaming disables tokenUsage reporting.

Inherited from

ModelParams.streaming

Defined in

langchain/src/llms/openai.ts:47
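
In LangChain versions of this vintage, streamed tokens were delivered through callback handlers; a sketch assuming CallbackManager.fromHandlers is available from langchain/callbacks (the callbacks API has been reworked since, so treat this as illustrative):

```typescript
import { OpenAI } from "langchain/llms";
import { CallbackManager } from "langchain/callbacks";

const streamer = new OpenAI({
  streaming: true, // tokenUsage will not be reported in this mode
  callbackManager: CallbackManager.fromHandlers({
    async handleLLMNewToken(token: string) {
      process.stdout.write(token); // emit each token as it arrives
    },
  }),
});

await streamer.call("Write a haiku about the sea.");
```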


temperature

temperature: number

Sampling temperature to use.

Inherited from

ModelParams.temperature

Defined in

langchain/src/llms/openai.ts:20


timeout

Optional timeout: number

Timeout to use when making requests to OpenAI.

Defined in

langchain/src/llms/openai.ts:73
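
A sketch assuming the timeout is expressed in milliseconds (typical for JavaScript HTTP clients; verify against your version):

```typescript
const bounded = new OpenAI({
  timeout: 10_000, // assumed milliseconds: give up after ~10 seconds
});
```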


topP

topP: number

Total probability mass of tokens to consider at each step.

Inherited from

ModelParams.topP

Defined in

langchain/src/llms/openai.ts:29
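
A sketch of nucleus sampling via topP; OpenAI's API docs recommend altering temperature or topP, but generally not both:

```typescript
const nucleus = new OpenAI({
  // Only the smallest set of tokens whose cumulative probability
  // reaches 0.9 is considered at each step.
  topP: 0.9,
});
```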