Class Model
Represents a language model.
https://platform.openai.com/docs/models/model-endpoint-compatability
public sealed class Model

Inheritance
object → Model
Constructors
Model(string, string)
Constructor.
public Model(string id, string ownedBy = null)

Parameters
- id string
- ownedBy string
Properties
Babbage
Replacement for the GPT-3 ada and babbage base models.
public static Model Babbage { get; }
ChatGPT4o
ChatGPT-4o points to the GPT-4o snapshot currently used in ChatGPT. GPT-4o is our versatile, high-intelligence flagship model. It accepts both text and image inputs, and produces text outputs. It is the best model for most tasks, and is our most capable model outside of our o-series models.
public static Model ChatGPT4o { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
Codex_Mini_Latest
codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI.
public static Model Codex_Mini_Latest { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
CreatedAt
[JsonIgnore]
public DateTime CreatedAt { get; }
CreatedAtUnixTimeSeconds
[JsonInclude]
[JsonPropertyName("created")]
public int CreatedAtUnixTimeSeconds { get; }
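The two properties are two views of the same timestamp: `CreatedAt` is presumably derived from the raw Unix `created` value exposed by `CreatedAtUnixTimeSeconds`. A minimal sketch of that conversion in plain .NET (the literal value is illustrative):

```csharp
using System;

class CreatedAtSketch
{
    static void Main()
    {
        // e.g. a "created" value as returned by the API (seconds since the Unix epoch)
        int createdAtUnixTimeSeconds = 1686588896;

        // Convert Unix seconds to a UTC DateTime
        DateTime createdAt = DateTimeOffset
            .FromUnixTimeSeconds(createdAtUnixTimeSeconds)
            .UtcDateTime;

        Console.WriteLine(createdAt.ToString("yyyy-MM-dd")); // 2023-06-12
    }
}
```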
DallE_2
DALL·E is an AI system that creates realistic images and art from a natural language description. Older than DALL·E 3, DALL·E 2 offers more control in prompting and more requests at once.
public static Model DallE_2 { get; }
DallE_3
DALL·E is an AI system that creates realistic images and art from a natural language description. DALL·E 3 currently supports the ability, given a prompt, to create a new image with a specific size.
public static Model DallE_3 { get; }
Davinci
Replacement for the GPT-3 curie and davinci base models.
public static Model Davinci { get; }
Embedding_3_Large
Most capable embedding model for both English and non-English tasks.
public static Model Embedding_3_Large { get; }
Remarks
Output Dimension: 3,072
Embedding_3_Small
A highly efficient model which provides a significant upgrade over its predecessor, the text-embedding-ada-002 model.
public static Model Embedding_3_Small { get; }
Remarks
Output Dimension: 1,536
Embedding_Ada_002
The default model for EmbeddingsEndpoint.
public static Model Embedding_Ada_002 { get; }
Remarks
Output Dimension: 1,536
GPT3_5_Turbo
GPT-3.5 Turbo models can understand and generate natural language or code and have been optimized for chat using the Chat Completions API but work well for non-chat tasks as well. As of July 2024, use gpt-4o-mini in place of GPT-3.5 Turbo, as it is cheaper, more capable, multimodal, and just as fast. GPT-3.5 Turbo is still available for use in the API.
public static Model GPT3_5_Turbo { get; }
Remarks
- Context Window: 16,385 tokens
- Max Output Tokens: 4,096 tokens
GPT3_5_Turbo_16K
Same capabilities as the base gpt-3.5-turbo model but with 4x the context length. Tokens are priced at 2x the gpt-3.5-turbo rate. Will be updated with our latest model iteration.
public static Model GPT3_5_Turbo_16K { get; }
GPT4
GPT-4 is an older version of a high-intelligence GPT model, usable in Chat Completions.
public static Model GPT4 { get; }
Remarks
- Context Window: 8,192 tokens
- Max Output Tokens: 8,192 tokens
GPT4_1
GPT-4.1 is our flagship model for complex tasks. It is well suited for problem solving across domains.
public static Model GPT4_1 { get; }
Remarks
- Context Window: 1,047,576 tokens
- Max Output Tokens: 32,768 tokens
GPT4_1_Mini
GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.
public static Model GPT4_1_Mini { get; }
Remarks
- Context Window: 1,047,576 tokens
- Max Output Tokens: 32,768 tokens
GPT4_1_Nano
GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.
public static Model GPT4_1_Nano { get; }
Remarks
- Context Window: 1,047,576 tokens
- Max Output Tokens: 32,768 tokens
GPT4_32K
Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration. Tokens are priced at 2x the gpt-4 rate.
public static Model GPT4_32K { get; }
GPT4_5
Deprecated - a research preview of GPT-4.5. We recommend using gpt-4.1 or o3 models instead for most use cases.
[Obsolete("Deprecated")]
public static Model GPT4_5 { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
GPT4_Turbo
GPT-4 Turbo is the next generation of GPT-4, an older high-intelligence GPT model. It was designed to be a cheaper, better version of GPT-4. Today, we recommend using a newer model like GPT-4o.
public static Model GPT4_Turbo { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 4,096 tokens
GPT4o
GPT-4o (“o” for “omni”) is our versatile, high-intelligence flagship model. It accepts both text and image inputs, and produces text outputs (including Structured Outputs). It is the best model for most tasks, and is our most capable model outside of our o-series models.
public static Model GPT4o { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
GPT4oAudio
This is a preview release of the GPT-4o Audio models. These models accept audio inputs and outputs, and can be used in the Chat Completions REST API.
public static Model GPT4oAudio { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
GPT4oAudioMini
This is a preview release of the smaller GPT-4o Audio mini model. It accepts audio inputs and produces audio outputs via the Chat Completions REST API.
public static Model GPT4oAudioMini { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
GPT4oMini
GPT-4o mini (“o” for “omni”) is a fast, affordable small model for focused tasks. It accepts both text and image inputs, and produces text outputs (including Structured Outputs). It is ideal for fine-tuning, and model outputs from a larger model like GPT-4o can be distilled to GPT-4o-mini to produce similar results at lower cost and latency.
public static Model GPT4oMini { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 max output tokens
GPT4oRealtime
This is a preview release of the GPT-4o Realtime model, capable of responding to audio and text inputs in realtime over WebRTC or a WebSocket interface.
public static Model GPT4oRealtime { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 4,096 tokens
GPT4oRealtimeMini
This is a preview release of the GPT-4o-mini Realtime model, capable of responding to audio and text inputs in realtime over WebRTC or a WebSocket interface.
public static Model GPT4oRealtimeMini { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 4,096 tokens
GPT5
GPT-5 is our flagship model for coding, reasoning, and agentic tasks across domains.
public static Model GPT5 { get; }
Remarks
- Context Window: 400,000 tokens
- Max Output Tokens: 128,000 tokens
GPT5_Chat
GPT-5 Chat points to the GPT-5 snapshot currently used in ChatGPT. We recommend GPT-5 for most API usage, but feel free to use this GPT-5 Chat model to test our latest improvements for chat use cases.
public static Model GPT5_Chat { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
GPT5_Codex
GPT-5-Codex is a version of GPT-5 optimized for agentic coding tasks in Codex or similar environments. It's available in the Responses API only and the underlying model snapshot will be regularly updated.
public static Model GPT5_Codex { get; }
Remarks
- Context Window: 400,000 tokens
- Max Output Tokens: 128,000 tokens
GPT5_Mini
GPT-5 mini is a faster, more cost-efficient version of GPT-5. It's great for well-defined tasks and precise prompts.
public static Model GPT5_Mini { get; }
Remarks
- Context Window: 400,000 tokens
- Max Output Tokens: 128,000 tokens
GPT5_Nano
GPT-5 Nano is our fastest, cheapest version of GPT-5. It's great for summarization and classification tasks.
public static Model GPT5_Nano { get; }
Remarks
- Context Window: 400,000 tokens
- Max Output Tokens: 128,000 tokens
GPT_Audio
The gpt-audio model is our first generally available audio model. It accepts audio inputs and outputs, and can be used in the Chat Completions REST API.
public static Model GPT_Audio { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 16,384 tokens
GPT_Image_1
GPT Image 1 is our new state-of-the-art image generation model. It is a natively multimodal language model that accepts both text and image inputs, and produces image outputs.
public static Model GPT_Image_1 { get; }
GPT_OSS_120B
gpt-oss-120b is our most powerful open-weight model, which fits into a single H100 GPU (117B parameters with 5.1B active parameters).
public static Model GPT_OSS_120B { get; }
Remarks
- Context Window: 131,072 tokens
- Max Output Tokens: 131,072 tokens
GPT_OSS_20B
gpt-oss-20b is our medium-sized open-weight model for low latency, local, or specialized use-cases (21B parameters with 3.6B active parameters).
public static Model GPT_OSS_20B { get; }
Remarks
- Context Window: 131,072 tokens
- Max Output Tokens: 131,072 tokens
GPT_Realtime
This is our first general-availability realtime model, capable of responding to audio and text inputs in realtime over WebRTC, WebSocket, or SIP connections.
public static Model GPT_Realtime { get; }
Remarks
- Context Window: 32,000 tokens
- Max Output Tokens: 4,096 tokens
Id
[JsonInclude]
[JsonPropertyName("id")]
public string Id { get; }
Moderation_Latest
[Obsolete("use OmniModerationLatest")]
public static Model Moderation_Latest { get; }
Moderation_Stable
[Obsolete("use OmniModerationLatest")]
public static Model Moderation_Stable { get; }
O1
The o1 series of models are trained with reinforcement learning to perform complex reasoning. o1 models think before they answer, producing a long internal chain of thought before responding to the user.
public static Model O1 { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
O1Mini
The o1 reasoning model is designed to solve hard problems across domains. o1-mini is a faster and more affordable reasoning model, but we recommend using the newer o3-mini model that features higher intelligence at the same latency and price as o1-mini.
[Obsolete("Deprecated")]
public static Model O1Mini { get; }
Remarks
- Context Window: 128,000 tokens
- Max Output Tokens: 65,536 tokens
O1Pro
The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide consistently better answers. o1-pro is available in the Responses API only to enable support for multi-turn model interactions before responding to API requests, and other advanced API features in the future.
public static Model O1Pro { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
O3
o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following. Use it to think through multi-step problems that involve analysis across text, code, and images.
public static Model O3 { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
O3Mini
o3-mini is our newest small reasoning model, providing high intelligence at the same cost and latency targets as o1-mini. o3-mini supports key developer features, like Structured Outputs, function calling, and the Batch API.
public static Model O3Mini { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
O3Pro
The o-series of models are trained with reinforcement learning to think before they answer and perform complex reasoning.
The o3-pro model uses more compute to think harder and provide consistently better answers.
o3-pro is available in the Responses API only to enable support for multi-turn model interactions before responding to API requests,
and other advanced API features in the future. Since o3-pro is designed to tackle tough problems, some requests may take several minutes to finish.
To avoid timeouts, try using background mode.
public static Model O3Pro { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
O4Mini
o4-mini is our latest small o-series model. It's optimized for fast, effective reasoning with exceptionally efficient performance in coding and visual tasks.
public static Model O4Mini { get; }
Remarks
- Context Window: 200,000 tokens
- Max Output Tokens: 100,000 tokens
Object
[JsonInclude]
[JsonPropertyName("object")]
public string Object { get; }
OmniModerationLatest
public static Model OmniModerationLatest { get; }
OwnedBy
[JsonInclude]
[JsonPropertyName("owned_by")]
public string OwnedBy { get; }
Parent
[JsonInclude]
[JsonPropertyName("parent")]
public string Parent { get; }
Permissions
[JsonInclude]
[JsonPropertyName("permission")]
public IReadOnlyList<Permission> Permissions { get; }
Root
[JsonInclude]
[JsonPropertyName("root")]
public string Root { get; }
TTS_1
TTS is a model that converts text to natural sounding spoken text. The tts-1 model is optimized for real-time text-to-speech use cases. Use it with the Speech endpoint in the Audio API.
public static Model TTS_1 { get; }
TTS_1HD
TTS is a model that converts text to natural sounding spoken text. The tts-1-hd model is optimized for high quality text-to-speech use cases. Use it with the Speech endpoint in the Audio API.
public static Model TTS_1HD { get; }
TTS_GPT_4o_Mini
GPT-4o mini TTS is a text-to-speech model built on GPT-4o mini, a fast and powerful language model. Use it to convert text to natural sounding spoken text.
public static Model TTS_GPT_4o_Mini { get; }
Remarks
The maximum number of input tokens is 2000.
Transcribe_GPT_4o
GPT-4o Transcribe is a speech-to-text model that uses GPT-4o to transcribe audio. It offers improvements to word error rate and better language recognition and accuracy compared to original Whisper models. Use it for more accurate transcripts.
public static Model Transcribe_GPT_4o { get; }
Transcribe_GPT_4o_Mini
GPT-4o mini Transcribe is a speech-to-text model that uses GPT-4o mini to transcribe audio. It offers improvements to word error rate and better language recognition and accuracy compared to original Whisper models. Use it for more accurate transcripts.
public static Model Transcribe_GPT_4o_Mini { get; }
Whisper1
Whisper is a general-purpose speech recognition model, trained on a large dataset of diverse audio. You can also use it as a multitask model to perform multilingual speech recognition as well as speech translation and language identification.
public static Model Whisper1 { get; }
Methods
ToString()
Returns a string that represents the current object.
public override string ToString()

Returns
- string
- A string that represents the current object.
Operators
implicit operator string(Model)
Allows a model to be implicitly cast to the string of its id.
public static implicit operator string(Model model)

Parameters
- model Model

Returns
- string
implicit operator Model(string)
Allows a string to be implicitly cast as a Model.
public static implicit operator Model(string name)

Parameters
- name string
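Together, the two operators let model constants and raw id strings interoperate wherever either is expected. A self-contained sketch of the pattern, using a simplified stand-in class rather than the library's actual implementation:

```csharp
using System;

// Simplified stand-in for Model, illustrating the two implicit conversions.
public sealed class ModelSketch
{
    public string Id { get; }
    public ModelSketch(string id) => Id = id;

    // implicit operator string(Model): a model casts to the string of its id.
    public static implicit operator string(ModelSketch model) => model?.Id;

    // implicit operator Model(string): an id string casts to a model.
    public static implicit operator ModelSketch(string name) => new ModelSketch(name);

    public override string ToString() => Id;
}

class OperatorDemo
{
    static void Main()
    {
        ModelSketch model = "gpt-4o"; // string -> model
        string id = model;            // model -> string
        Console.WriteLine(id);        // prints "gpt-4o"
    }
}
```

Because both conversions are implicit, API methods that take a `Model` parameter can accept a plain id string, and a `Model` constant can be passed anywhere a string id is required.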