Class Model

Namespace: OpenAI.Models
Assembly: OpenAI-DotNet.dll
public sealed class Model
Inheritance
Model

Constructors

Model(string, string)

Constructor.

public Model(string id, string ownedBy = null)

Parameters

id string

Model id.

ownedBy string

Optional, the id of the organization that owns the model.
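
Examples

A minimal sketch of constructing a Model by id, for example to reference a fine-tuned model that has no static property (the id and owner shown are hypothetical):

using OpenAI.Models;

// Wrap a custom or fine-tuned model id so it can be passed wherever a Model is expected.
var customModel = new Model("ft:gpt-4o-mini:my-org::abc123", "my-org");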

Properties

Babbage

Replacement for the GPT-3 ada and babbage base models.

public static Model Babbage { get; }

Property Value

Model

ChatGPT4o

ChatGPT-4o points to the GPT-4o snapshot currently used in ChatGPT. GPT-4o is our versatile, high-intelligence flagship model. It accepts both text and image inputs, and produces text outputs. It is the best model for most tasks, and is our most capable model outside of our o-series models.

public static Model ChatGPT4o { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 16,384 tokens

CreatedAt

[JsonIgnore]
public DateTime CreatedAt { get; }

Property Value

DateTime

CreatedAtUnixTimeSeconds

[JsonInclude]
[JsonPropertyName("created")]
public int CreatedAtUnixTimeSeconds { get; }

Property Value

int
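
Examples

A sketch of how the two creation-time properties relate, assuming CreatedAt is the DateTime form of the raw "created" Unix timestamp:

using System;
using OpenAI.Models;

// 'model' here would be a Model instance returned by the API;
// locally constructed models do not have the "created" value set.
static void PrintCreated(Model model)
{
    // Convert the raw Unix seconds manually and compare with the convenience property.
    var fromUnix = DateTimeOffset.FromUnixTimeSeconds(model.CreatedAtUnixTimeSeconds).DateTime;
    Console.WriteLine($"{model.Id} created {model.CreatedAt:u} ({fromUnix:u} from unix seconds)");
}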

DallE_2

DALL·E is an AI system that creates realistic images and art from a natural language description. Older than DALL·E 3, DALL·E 2 offers more control in prompting and more requests at once.

public static Model DallE_2 { get; }

Property Value

Model

DallE_3

DALL·E is an AI system that creates realistic images and art from a natural language description. DALL·E 3 currently supports the ability, given a prompt, to create a new image with a specific size.

public static Model DallE_3 { get; }

Property Value

Model

Davinci

Replacement for the GPT-3 curie and davinci base models.

public static Model Davinci { get; }

Property Value

Model

Embedding_3_Large

Most capable embedding model for both English and non-English tasks.

public static Model Embedding_3_Large { get; }

Property Value

Model

Remarks

Output Dimension: 3,072

Embedding_3_Small

A highly efficient model which provides a significant upgrade over its predecessor, the text-embedding-ada-002 model.

public static Model Embedding_3_Small { get; }

Property Value

Model

Remarks

Output Dimension: 1,536

Embedding_Ada_002

The default model for EmbeddingsEndpoint.

public static Model Embedding_Ada_002 { get; }

Property Value

Model

Remarks

Output Dimension: 1,536
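
Examples

A minimal sketch of passing one of these embedding models to the EmbeddingsEndpoint (the OpenAIClient and CreateEmbeddingAsync usage is assumed from the library's embeddings API and may differ between versions):

using OpenAI;
using OpenAI.Models;

var api = new OpenAIClient(); // API key resolved from environment/configuration by default (assumption)
// Embed a short text with the newer small embedding model (1,536-dimensional output);
// per the remarks above, omitting the model falls back to the endpoint default, Embedding_Ada_002.
var response = await api.EmbeddingsEndpoint.CreateEmbeddingAsync("The quick brown fox", Model.Embedding_3_Small);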

GPT3_5_Turbo

GPT-3.5 Turbo models can understand and generate natural language or code and have been optimized for chat using the Chat Completions API but work well for non-chat tasks as well. As of July 2024, use gpt-4o-mini in place of GPT-3.5 Turbo, as it is cheaper, more capable, multimodal, and just as fast. GPT-3.5 Turbo is still available for use in the API.

public static Model GPT3_5_Turbo { get; }

Property Value

Model

Remarks

  • Context Window: 16,385 tokens
  • Max Output Tokens: 4,096 tokens

GPT3_5_Turbo_16K

Same capabilities as the base gpt-3.5-turbo model but with 4x the context length. Tokens are 2x the price of gpt-3.5-turbo. Will be updated with our latest model iteration.

public static Model GPT3_5_Turbo_16K { get; }

Property Value

Model

GPT4

GPT-4 is an older version of a high-intelligence GPT model, usable in Chat Completions.

public static Model GPT4 { get; }

Property Value

Model

Remarks

  • Context Window: 8,192 tokens
  • Max Output Tokens: 8,192 tokens

GPT4_1

GPT-4.1 is our flagship model for complex tasks. It is well suited for problem solving across domains.

public static Model GPT4_1 { get; }

Property Value

Model

Remarks

  • Context Window: 1,047,576 tokens
  • Max Output Tokens: 32,768 tokens

GPT4_1_Mini

GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.

public static Model GPT4_1_Mini { get; }

Property Value

Model

Remarks

  • Context Window: 1,047,576 tokens
  • Max Output Tokens: 32,768 tokens

GPT4_1_Nano

GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.

public static Model GPT4_1_Nano { get; }

Property Value

Model

Remarks

  • Context Window: 1,047,576 tokens
  • Max Output Tokens: 32,768 tokens

GPT4_32K

Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration. Tokens are 2x the price of gpt-4.

public static Model GPT4_32K { get; }

Property Value

Model

GPT4_5

public static Model GPT4_5 { get; }

Property Value

Model

GPT4_Turbo

GPT-4 Turbo is the next generation of GPT-4, an older high-intelligence GPT model. It was designed to be a cheaper, better version of GPT-4. Today, we recommend using a newer model like GPT-4o.

public static Model GPT4_Turbo { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 4,096 tokens

GPT4o

GPT-4o (“o” for “omni”) is our versatile, high-intelligence flagship model. It accepts both text and image inputs, and produces text outputs (including Structured Outputs). It is the best model for most tasks, and is our most capable model outside of our o-series models.

public static Model GPT4o { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 16,384 tokens
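
Examples

A minimal sketch of selecting this model for a chat completion (the OpenAIClient, ChatRequest, and ChatEndpoint usage is assumed from the library's chat API and may differ between versions):

using System;
using System.Collections.Generic;
using OpenAI;
using OpenAI.Chat;
using OpenAI.Models;

var api = new OpenAIClient(); // API key resolved from environment/configuration by default (assumption)
var messages = new List<Message>
{
    new(Role.System, "You are a helpful assistant."),
    new(Role.User, "Hello!")
};
// Pass the static Model.GPT4o property as the model for the request.
var request = new ChatRequest(messages, model: Model.GPT4o);
var response = await api.ChatEndpoint.GetCompletionAsync(request);
Console.WriteLine(response.FirstChoice.Message);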

GPT4oAudio

This is a preview release of the GPT-4o Audio models. These models accept audio inputs and produce audio outputs, and can be used in the Chat Completions REST API.

public static Model GPT4oAudio { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 16,384 tokens

GPT4oAudioMini

This is a preview release of the smaller GPT-4o Audio mini model. It accepts audio inputs and produces audio outputs via the Chat Completions REST API.

public static Model GPT4oAudioMini { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 16,384 tokens

GPT4oMini

GPT-4o mini (“o” for “omni”) is a fast, affordable small model for focused tasks. It accepts both text and image inputs, and produces text outputs (including Structured Outputs). It is ideal for fine-tuning, and model outputs from a larger model like GPT-4o can be distilled to GPT-4o-mini to produce similar results at lower cost and latency.

public static Model GPT4oMini { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 16,384 tokens

GPT4oRealtime

This is a preview release of the GPT-4o Realtime model, capable of responding to audio and text inputs in realtime over WebRTC or a WebSocket interface.

public static Model GPT4oRealtime { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 4,096 tokens

GPT4oRealtimeMini

This is a preview release of the GPT-4o-mini Realtime model, capable of responding to audio and text inputs in realtime over WebRTC or a WebSocket interface.

public static Model GPT4oRealtimeMini { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 4,096 tokens

GPT_Image_1

GPT Image 1 is our new state-of-the-art image generation model. It is a natively multimodal language model that accepts both text and image inputs, and produces image outputs.

public static Model GPT_Image_1 { get; }

Property Value

Model

Id

[JsonInclude]
[JsonPropertyName("id")]
public string Id { get; }

Property Value

string

Moderation_Latest

[Obsolete("use OmniModerationLatest")]
public static Model Moderation_Latest { get; }

Property Value

Model

Moderation_Stable

[Obsolete("use OmniModerationLatest")]
public static Model Moderation_Stable { get; }

Property Value

Model

O1

The o1 series of models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user.

public static Model O1 { get; }

Property Value

Model

Remarks

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens

O1Mini

The o1 reasoning model is designed to solve hard problems across domains. o1-mini is a faster and more affordable reasoning model, but we recommend using the newer o3-mini model that features higher intelligence at the same latency and price as o1-mini.

[Obsolete("Deprecated")]
public static Model O1Mini { get; }

Property Value

Model

Remarks

  • Context Window: 128,000 tokens
  • Max Output Tokens: 65,536 tokens

O1Pro

The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide consistently better answers. o1-pro is available in the Responses API only to enable support for multi-turn model interactions before responding to API requests, and other advanced API features in the future.

public static Model O1Pro { get; }

Property Value

Model

Remarks

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens

O3

o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following. Use it to think through multi-step problems that involve analysis across text, code, and images.

public static Model O3 { get; }

Property Value

Model

Remarks

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens

O3Mini

o3-mini is our newest small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini. o3-mini supports key developer features, like Structured Outputs, function calling, and the Batch API.

public static Model O3Mini { get; }

Property Value

Model

Remarks

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens

O4Mini

o4-mini is our latest small o-series model. It's optimized for fast, effective reasoning with exceptionally efficient performance in coding and visual tasks.

public static Model O4Mini { get; }

Property Value

Model

Remarks

  • Context Window: 200,000 tokens
  • Max Output Tokens: 100,000 tokens

Object

[JsonInclude]
[JsonPropertyName("object")]
public string Object { get; }

Property Value

string

OmniModerationLatest

public static Model OmniModerationLatest { get; }

Property Value

Model

OwnedBy

[JsonInclude]
[JsonPropertyName("owned_by")]
public string OwnedBy { get; }

Property Value

string

Parent

[JsonInclude]
[JsonPropertyName("parent")]
public string Parent { get; }

Property Value

string

Permissions

[JsonInclude]
[JsonPropertyName("permission")]
public IReadOnlyList<Permission> Permissions { get; }

Property Value

IReadOnlyList<Permission>

Root

[JsonInclude]
[JsonPropertyName("root")]
public string Root { get; }

Property Value

string

TTS_1

TTS is a model that converts text to natural sounding spoken audio. The tts-1 model is optimized for speed and real-time text-to-speech use cases. Use it with the Speech endpoint in the Audio API.

public static Model TTS_1 { get; }

Property Value

Model

TTS_1HD

TTS is a model that converts text to natural sounding spoken audio. The tts-1-hd model is optimized for high quality text-to-speech use cases. Use it with the Speech endpoint in the Audio API.

public static Model TTS_1HD { get; }

Property Value

Model

TTS_GPT_4o_Mini

GPT-4o mini TTS is a text-to-speech model built on GPT-4o mini, a fast and powerful language model. Use it to convert text to natural sounding spoken audio.

public static Model TTS_GPT_4o_Mini { get; }

Property Value

Model

Remarks

The maximum number of input tokens is 2,000.

Transcribe_GPT_4o

GPT-4o Transcribe is a speech-to-text model that uses GPT-4o to transcribe audio. It offers improvements to word error rate and better language recognition and accuracy compared to original Whisper models. Use it for more accurate transcripts.

public static Model Transcribe_GPT_4o { get; }

Property Value

Model

Transcribe_GPT_4o_Mini

GPT-4o mini Transcribe is a speech-to-text model that uses GPT-4o mini to transcribe audio. It offers improvements to word error rate and better language recognition and accuracy compared to original Whisper models. Use it for more accurate transcripts.

public static Model Transcribe_GPT_4o_Mini { get; }

Property Value

Model

Whisper1

Whisper is a general-purpose speech recognition model, trained on a large dataset of diverse audio. You can also use it as a multitask model to perform multilingual speech recognition as well as speech translation and language identification.

public static Model Whisper1 { get; }

Property Value

Model

Methods

ToString()

Returns a string that represents the current object.

public override string ToString()

Returns

string

A string that represents the current object.

Operators

implicit operator string(Model)

Allows a model to be implicitly cast to the string of its id.

public static implicit operator string(Model model)

Parameters

model Model

The Model to cast to a string.

Returns

string
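
Examples

A minimal sketch of the Model-to-string conversion:

using System;
using OpenAI.Models;

// Any Model can be used where a string model id is expected; the cast yields its Id.
string id = Model.GPT4o;
Console.WriteLine(id);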

implicit operator Model(string)

Allows a string to be implicitly cast to a Model.

public static implicit operator Model(string name)

Parameters

name string

Returns

Model
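
Examples

A minimal sketch of the string-to-Model conversion (the fine-tuned model id shown is hypothetical):

using System;
using OpenAI.Models;

// A plain string id can be used wherever a Model is expected,
// for instance to reference a fine-tuned or custom model by id.
Model custom = "ft:gpt-4o-mini:my-org::abc123";
Console.WriteLine(custom.Id);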