Class CreateAssistantRequest
- Namespace: OpenAI.Assistants
- Assembly: OpenAI-DotNet.dll
public sealed class CreateAssistantRequest
- Inheritance
- object ← CreateAssistantRequest
Constructors
CreateAssistantRequest(AssistantResponse, string, string, string, string, IEnumerable<Tool>, ToolResources, IReadOnlyDictionary<string, string>, double?, double?, JsonSchema, ChatResponseFormat?)
[Obsolete("use new .ctr")]
public CreateAssistantRequest(AssistantResponse assistant, string model, string name, string description, string instructions, IEnumerable<Tool> tools, ToolResources toolResources, IReadOnlyDictionary<string, string> metadata, double? temperature, double? topP, JsonSchema jsonSchema, ChatResponseFormat? responseFormat = null)
Parameters
assistant
AssistantResponse
model
string
name
string
description
string
instructions
string
tools
IEnumerable<Tool>
toolResources
ToolResources
metadata
IReadOnlyDictionary<string, string>
temperature
double?
topP
double?
jsonSchema
JsonSchema
responseFormat
ChatResponseFormat?
CreateAssistantRequest(AssistantResponse, string, string, string, string, IEnumerable<Tool>, ToolResources, IReadOnlyDictionary<string, string>, double?, double?, ReasoningEffort, JsonSchema, ChatResponseFormat?)
Constructor
public CreateAssistantRequest(AssistantResponse assistant, string model = null, string name = null, string description = null, string instructions = null, IEnumerable<Tool> tools = null, ToolResources toolResources = null, IReadOnlyDictionary<string, string> metadata = null, double? temperature = null, double? topP = null, ReasoningEffort reasoningEffort = (ReasoningEffort)0, JsonSchema jsonSchema = null, ChatResponseFormat? responseFormat = null)
Parameters
assistant
AssistantResponse
model
string
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
name
string
The name of the assistant. The maximum length is 256 characters.
description
string
The description of the assistant. The maximum length is 512 characters.
instructions
string
The system instructions that the assistant uses. The maximum length is 256,000 characters.
tools
IEnumerable<Tool>
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types 'code_interpreter', 'retrieval', or 'function'.
toolResources
ToolResources
A set of resources that are used by Assistants and Threads. The resources are specific to the type of tool. For example, the CodeInterpreter requires a list of file ids, while the FileSearch requires a list of vector store ids.
metadata
IReadOnlyDictionary<string, string>
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
temperature
double?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
topP
double?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
reasoningEffort
ReasoningEffort
Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
jsonSchema
JsonSchema
The JsonSchema to use for structured JSON outputs.
https://platform.openai.com/docs/guides/structured-outputs
https://json-schema.org/overview/what-is-jsonschema
responseFormat
ChatResponseFormat?
Specifies the format that the model must output. Setting to Json enables JSON mode, which guarantees the message the model generates is valid JSON.
Important: When using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
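A minimal usage sketch for this overload, which seeds the request from an assistant that was retrieved elsewhere. The model id is a placeholder, and the assumption that parameters left at their defaults keep the values copied from the existing assistant is illustrative, not part of the generated signature.
using OpenAI.Assistants;

public static class AssistantRequestSketches
{
    // Seed a new request from an existing assistant and override a few fields.
    // Assumption: omitted parameters keep the values copied from `assistant`.
    public static CreateAssistantRequest WithTerseInstructions(AssistantResponse assistant)
        => new CreateAssistantRequest(
            assistant,
            model: "gpt-4o", // placeholder model id
            instructions: "You are a terse, helpful assistant.",
            temperature: 0.2);
}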
CreateAssistantRequest(string, string, string, string, IEnumerable<Tool>, ToolResources, IReadOnlyDictionary<string, string>, double?, double?, ReasoningEffort, JsonSchema, ChatResponseFormat)
Constructor.
public CreateAssistantRequest(string model = null, string name = null, string description = null, string instructions = null, IEnumerable<Tool> tools = null, ToolResources toolResources = null, IReadOnlyDictionary<string, string> metadata = null, double? temperature = null, double? topP = null, ReasoningEffort reasoningEffort = (ReasoningEffort)0, JsonSchema jsonSchema = null, ChatResponseFormat responseFormat = ChatResponseFormat.Text)
Parameters
model
string
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
name
string
The name of the assistant. The maximum length is 256 characters.
description
string
The description of the assistant. The maximum length is 512 characters.
instructions
string
The system instructions that the assistant uses. The maximum length is 256,000 characters.
tools
IEnumerable<Tool>
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types 'code_interpreter', 'retrieval', or 'function'.
toolResources
ToolResources
A set of resources that are used by Assistants and Threads. The resources are specific to the type of tool. For example, the CodeInterpreter requires a list of file ids, while the FileSearch requires a list of vector store ids.
metadata
IReadOnlyDictionary<string, string>
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
temperature
double?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
topP
double?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
reasoningEffort
ReasoningEffort
Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
jsonSchema
JsonSchema
The JsonSchema to use for structured JSON outputs.
https://platform.openai.com/docs/guides/structured-outputs
https://json-schema.org/overview/what-is-jsonschema
responseFormat
ChatResponseFormat
Specifies the format that the model must output. Setting to Json or JsonSchema enables JSON mode, which guarantees the message the model generates is valid JSON.
Important: When using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
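A sketch of creating an assistant with this constructor. The model id is a placeholder, Tool.CodeInterpreter is the assumed helper for the built-in code_interpreter tool, and the OpenAIClient / AssistantsEndpoint.CreateAssistantAsync usage follows the library's samples and may differ by version.
using System.Collections.Generic;
using System.Threading.Tasks;
using OpenAI;
using OpenAI.Assistants;

public static class CreateAssistantSketch
{
    public static async Task<AssistantResponse> CreateMathTutorAsync(OpenAIClient api)
    {
        var request = new CreateAssistantRequest(
            model: "gpt-4o", // placeholder model id
            name: "Math Tutor",
            description: "Step-by-step math help.",
            instructions: "You are a personal math tutor. Write and run code to answer math questions.",
            tools: new[] { Tool.CodeInterpreter }, // assumed helper for the code_interpreter tool
            metadata: new Dictionary<string, string> { ["team"] = "education" },
            temperature: 0.7);

        // Assumed endpoint method, as used in the library samples.
        return await api.AssistantsEndpoint.CreateAssistantAsync(request);
    }
}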
Properties
Description
The description of the assistant. The maximum length is 512 characters.
[JsonPropertyName("description")]
public string Description { get; }
Property Value
string
Instructions
The system instructions that the assistant uses. The maximum length is 256,000 characters.
[JsonPropertyName("instructions")]
public string Instructions { get; }
Property Value
string
Metadata
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
[JsonPropertyName("metadata")]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
public IReadOnlyDictionary<string, string> Metadata { get; }
Property Value
IReadOnlyDictionary<string, string>
Model
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
[JsonPropertyName("model")]
public string Model { get; }
Property Value
string
Name
The name of the assistant. The maximum length is 256 characters.
[JsonPropertyName("name")]
public string Name { get; }
Property Value
string
ReasoningEffort
Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
[JsonInclude]
[JsonPropertyName("reasoning_effort")]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingDefault)]
public ReasoningEffort ReasoningEffort { get; }
Property Value
ReasoningEffort
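A sketch of setting reasoning effort at construction time, assuming a reasoning-capable model (the id is a placeholder) and that the ReasoningEffort enum exposes members matching the documented values (low, medium, high).
using OpenAI;
using OpenAI.Assistants;

public static class ReasoningEffortSketch
{
    public static CreateAssistantRequest LowEffortReasoner()
        => new CreateAssistantRequest(
            model: "o3-mini", // placeholder reasoning-model id
            name: "Quick Reasoner",
            reasoningEffort: ReasoningEffort.Low); // assumed enum member name
}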
ResponseFormat
[JsonIgnore]
public ChatResponseFormat ResponseFormat { get; }
Property Value
ChatResponseFormat
ResponseFormatObject
Specifies the format that the model must output. Setting to Json or JsonSchema enables JSON mode, which guarantees the message the model generates is valid JSON.
[JsonPropertyName("response_format")]
[JsonConverter(typeof(ResponseFormatConverter))]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingDefault)]
public ResponseFormatObject ResponseFormatObject { get; }
Property Value
ResponseFormatObject
Remarks
Important: When using JSON mode you must still instruct the model to produce JSON yourself via some conversation message, for example via your system message. If you don't do this, the model may generate an unending stream of whitespace until the generation reaches the token limit, which may take a lot of time and give the appearance of a "stuck" request. Also note that the message content may be partial (i.e. cut off) if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
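A sketch of enabling JSON mode, with the instructions explicitly asking for JSON as the remarks require; the model id is a placeholder.
using OpenAI;
using OpenAI.Assistants;

public static class JsonModeSketch
{
    public static CreateAssistantRequest JsonOnlyAssistant()
        => new CreateAssistantRequest(
            model: "gpt-4o", // placeholder model id
            name: "Inventory Bot",
            // JSON mode requires telling the model to produce JSON in the instructions.
            instructions: "Answer only with a single JSON object describing the requested item.",
            responseFormat: ChatResponseFormat.Json);
}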
Temperature
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
[JsonPropertyName("temperature")]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
public double? Temperature { get; }
Property Value
double?
ToolResources
A set of resources that are used by Assistants and Threads. The resources are specific to the type of tool. For example, the CodeInterpreter requres a list of file ids, While the FileSearch requires a list vector store ids.
[JsonPropertyName("tool_resources")]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
public ToolResources ToolResources { get; }
Property Value
ToolResources
Tools
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types 'code_interpreter', 'retrieval', or 'function'.
[JsonPropertyName("tools")]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
public IReadOnlyList<Tool> Tools { get; }
Property Value
IReadOnlyList<Tool>
TopP
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
[JsonPropertyName("top_p")]
[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
public double? TopP { get; }
Property Value
double?
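A sketch of using nucleus sampling instead of temperature, per the recommendation to alter one or the other but not both; the model id is a placeholder.
using OpenAI.Assistants;

public static class TopPSketch
{
    public static CreateAssistantRequest FocusedAssistant()
        => new CreateAssistantRequest(
            model: "gpt-4o", // placeholder model id
            name: "Focused Writer",
            topP: 0.1); // consider only the tokens in the top 10% probability mass
}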