Class: Anthropic::Models::Beta::MessageCreateParams
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - Anthropic::Models::Beta::MessageCreateParams
- Extended by: Internal::Type::RequestParameters::Converter
- Includes: Internal::Type::RequestParameters
- Defined in: lib/anthropic/models/beta/message_create_params.rb
Overview
Defined Under Namespace
Modules: Container, ServiceTier, System
Instance Attribute Summary
- #betas ⇒ Array<String, Symbol, Anthropic::Models::AnthropicBeta>?
  Optional header to specify the beta version(s) you want to use.
- #container ⇒ Anthropic::Models::Beta::BetaContainerParams, ...
  Container identifier for reuse across requests.
- #context_management ⇒ Anthropic::Models::Beta::BetaContextManagementConfig?
  Context management configuration.
- #max_tokens ⇒ Integer
  The maximum number of tokens to generate before stopping.
- #mcp_servers ⇒ Array<Anthropic::Models::Beta::BetaRequestMCPServerURLDefinition>?
  MCP servers to be utilized in this request.
- #messages ⇒ Array<Anthropic::Models::Beta::BetaMessageParam>
  Input messages.
- #metadata ⇒ Anthropic::Models::Beta::BetaMetadata?
  An object describing metadata about the request.
- #model ⇒ Symbol, ...
  The model that will complete your prompt. See [models](docs.anthropic.com/en/docs/models-overview) for additional details and options.
- #output_config ⇒ Anthropic::Models::Beta::BetaOutputConfig?
  Configuration options for the model’s output.
- #output_format ⇒ Anthropic::Models::Beta::BetaJSONOutputFormat?
  A schema to specify Claude’s output format in responses.
- #service_tier ⇒ Symbol, ...
  Determines whether to use priority capacity (if available) or standard capacity for this request.
- #stop_sequences ⇒ Array<String>?
  Custom text sequences that will cause the model to stop generating.
- #system_ ⇒ String, ...
  System prompt.
- #temperature ⇒ Float?
  Amount of randomness injected into the response.
- #thinking ⇒ Anthropic::Models::Beta::BetaThinkingConfigEnabled, ...
  Configuration for enabling Claude’s extended thinking.
- #tool_choice ⇒ Anthropic::Models::Beta::BetaToolChoiceAuto, ...
  How the model should use the provided tools.
- #tools ⇒ Array<Anthropic::Models::Beta::BetaTool, Anthropic::Models::Beta::BetaToolBash20241022, Anthropic::Models::Beta::BetaToolBash20250124, Anthropic::Models::Beta::BetaCodeExecutionTool20250522, Anthropic::Models::Beta::BetaCodeExecutionTool20250825, Anthropic::Models::Beta::BetaToolComputerUse20241022, Anthropic::Models::Beta::BetaMemoryTool20250818, Anthropic::Models::Beta::BetaToolComputerUse20250124, Anthropic::Models::Beta::BetaToolTextEditor20241022, Anthropic::Models::Beta::BetaToolComputerUse20251124, Anthropic::Models::Beta::BetaToolTextEditor20250124, Anthropic::Models::Beta::BetaToolTextEditor20250429, Anthropic::Models::Beta::BetaToolTextEditor20250728, Anthropic::Models::Beta::BetaWebSearchTool20250305, Anthropic::Models::Beta::BetaWebFetchTool20250910, Anthropic::Models::Beta::BetaToolSearchToolBm25_20251119, Anthropic::Models::Beta::BetaToolSearchToolRegex20251119, Anthropic::Models::Beta::BetaMCPToolset>?
  Definitions of tools that the model may use.
- #top_k ⇒ Integer?
  Only sample from the top K options for each subsequent token.
- #top_p ⇒ Float?
  Use nucleus sampling.
Attributes included from Internal::Type::RequestParameters
Class Method Summary
Instance Method Summary
- #initialize(max_tokens:, messages:, model:, container: nil, context_management: nil, mcp_servers: nil, metadata: nil, output_config: nil, output_format: nil, service_tier: nil, stop_sequences: nil, system_: nil, temperature: nil, thinking: nil, tool_choice: nil, tools: nil, top_k: nil, top_p: nil, betas: nil, request_options: {}) ⇒ Object (constructor)
  Some parameter documentation has been truncated; see MessageCreateParams for more details.
Methods included from Internal::Type::RequestParameters::Converter
Methods included from Internal::Type::RequestParameters
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(max_tokens:, messages:, model:, container: nil, context_management: nil, mcp_servers: nil, metadata: nil, output_config: nil, output_format: nil, service_tier: nil, stop_sequences: nil, system_: nil, temperature: nil, thinking: nil, tool_choice: nil, tools: nil, top_k: nil, top_p: nil, betas: nil, request_options: {}) ⇒ Object
Some parameter documentation has been truncated; see Anthropic::Models::Beta::MessageCreateParams for more details.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 324
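As a quick orientation, here is a minimal sketch of building these parameters and sending them with the beta Messages API. It assumes an `Anthropic::Client` configured with an API key, that plain hashes are coerced into the corresponding `Beta::*` models by the SDK, and that the model ID shown is only an illustrative placeholder.

```ruby
require "anthropic"

client = Anthropic::Client.new(api_key: ENV["ANTHROPIC_API_KEY"])

# The same keywords accepted by #initialize can be passed to the create call.
message = client.beta.messages.create(
  max_tokens: 1024,
  model: "claude-sonnet-4-5", # illustrative placeholder; see the models docs
  messages: [{role: "user", content: "Hello, Claude"}]
)

# Alternatively, build a typed params object first and inspect it before sending.
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 1024,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [{role: "user", content: "Hello, Claude"}]
)
puts params.to_h
```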
Instance Attribute Details
#betas ⇒ Array<String, Symbol, Anthropic::Models::AnthropicBeta>?
Optional header to specify the beta version(s) you want to use.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 322
optional :betas, -> { Anthropic::Internal::Type::ArrayOf[union: Anthropic::AnthropicBeta] }
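For example (a sketch; `example-beta-2025-01-01` is a hypothetical placeholder, not a real beta identifier):

```ruby
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 1024,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [{role: "user", content: "Hello, Claude"}],
  betas: ["example-beta-2025-01-01"] # hypothetical beta header value
)
```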
#container ⇒ Anthropic::Models::Beta::BetaContainerParams, ...
Container identifier for reuse across requests.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 107
optional :container, union: -> { Anthropic::Beta::MessageCreateParams::Container }, nil?: true
#context_management ⇒ Anthropic::Models::Beta::BetaContextManagementConfig?
Context management configuration.
This allows you to control how Claude manages context across multiple requests, such as whether to clear function results or not.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 116
optional :context_management, -> { Anthropic::Beta::BetaContextManagementConfig }, nil?: true
#max_tokens ⇒ Integer
The maximum number of tokens to generate before stopping.
Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.
Different models have different maximum values for this parameter. See [models](docs.claude.com/en/docs/models-overview) for details.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 23
required :max_tokens, Integer
#mcp_servers ⇒ Array<Anthropic::Models::Beta::BetaRequestMCPServerURLDefinition>?
MCP servers to be utilized in this request.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 122
optional :mcp_servers, -> { Anthropic::Internal::Type::ArrayOf[Anthropic::Beta::BetaRequestMCPServerURLDefinition] }
#messages ⇒ Array<Anthropic::Models::Beta::BetaMessageParam>
Input messages.
Our models are trained to operate on alternating `user` and `assistant` conversational turns. When creating a new `Message`, you specify the prior conversational turns with the `messages` parameter, and the model then generates the next `Message` in the conversation. Consecutive `user` or `assistant` turns in your request will be combined into a single turn.
Each input message must be an object with a `role` and `content`. You can specify a single `user`-role message, or you can include multiple `user` and `assistant` messages.
If the final message uses the `assistant` role, the response content will continue immediately from the content in that message. This can be used to constrain part of the model’s response.
Example with a single `user` message:
```json
[{ "role": "user", "content": "Hello, Claude" }]
```
Example with multiple conversational turns:
```json
[
  { "role": "user", "content": "Hello there." },
  { "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
  { "role": "user", "content": "Can you explain LLMs in plain English?" }
]
```
Example with a partially-filled response from Claude:
```json
[
  { "role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun" },
  { "role": "assistant", "content": "The best answer is (" }
]
```
Each input message `content` may be either a single `string` or an array of content blocks, where each block has a specific `type`. Using a `string` for `content` is shorthand for an array of one content block of type `"text"`. The following input messages are equivalent:
```json
{ "role": "user", "content": "Hello, Claude" }
```
```json
{ "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
```
See [input examples](docs.claude.com/en/api/messages-examples).
Note that if you want to include a [system prompt](docs.claude.com/en/docs/system-prompts), you can use the top-level `system` parameter — there is no `"system"` role for input messages in the Messages API.
There is a limit of 100,000 messages in a single request.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 93
required :messages, -> { Anthropic::Internal::Type::ArrayOf[Anthropic::Beta::BetaMessageParam] }
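A rough Ruby counterpart of the multi-turn JSON example above (a sketch; it assumes plain hashes are coerced into BetaMessageParam values, and the model ID is a placeholder):

```ruby
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 1024,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [
    {role: "user", content: "Hello there."},
    {role: "assistant", content: "Hi, I'm Claude. How can I help you?"},
    {role: "user", content: "Can you explain LLMs in plain English?"}
  ]
)
```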
#metadata ⇒ Anthropic::Models::Beta::BetaMetadata?
An object describing metadata about the request.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 129
optional :metadata, -> { Anthropic::Beta::BetaMetadata }
#model ⇒ Symbol, ...
The model that will complete your prompt. See [models](docs.anthropic.com/en/docs/models-overview) for additional details and options.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 101
required :model, union: -> { Anthropic::Model }
#output_config ⇒ Anthropic::Models::Beta::BetaOutputConfig?
Configuration options for the model’s output. Controls aspects like how much effort the model puts into its response.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 136
optional :output_config, -> { Anthropic::Beta::BetaOutputConfig }
#output_format ⇒ Anthropic::Models::Beta::BetaJSONOutputFormat?
A schema to specify Claude’s output format in responses.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 142
optional :output_format, -> { Anthropic::Beta::BetaJSONOutputFormat }, nil?: true
#service_tier ⇒ Symbol, ...
Determines whether to use priority capacity (if available) or standard capacity for this request.
Anthropic offers different levels of service for your API requests. See [service-tiers](docs.claude.com/en/api/service-tiers) for details.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 152
optional :service_tier, enum: -> { Anthropic::Beta::MessageCreateParams::ServiceTier }
#stop_sequences ⇒ Array<String>?
Custom text sequences that will cause the model to stop generating.
Our models will normally stop when they have naturally completed their turn, which will result in a response `stop_reason` of `"end_turn"`.
If you want the model to stop generating when it encounters custom strings of text, you can use the `stop_sequences` parameter. If the model encounters one of the custom sequences, the response `stop_reason` value will be `"stop_sequence"` and the response `stop_sequence` value will contain the matched stop sequence.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 166
optional :stop_sequences, Anthropic::Internal::Type::ArrayOf[String]
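For example (a sketch; the stop sequence and model ID are only illustrative):

```ruby
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 256,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [{role: "user", content: "List three colors, then write DONE."}],
  # If the model emits "DONE", generation stops and the response reports
  # stop_reason "stop_sequence" with stop_sequence "DONE".
  stop_sequences: ["DONE"]
)
```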
#system_ ⇒ String, ...
System prompt.
A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role. See our [guide to system prompts](docs.claude.com/en/docs/system-prompts).
# File 'lib/anthropic/models/beta/message_create_params.rb', line 176
optional :system_, union: -> { Anthropic::Beta::MessageCreateParams::System }, api_name: :system
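Note that the Ruby attribute is `system_`; it is serialized to the API field `system` via the `api_name: :system` declaration above. A minimal sketch (model ID is a placeholder):

```ruby
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 1024,
  model: "claude-sonnet-4-5", # illustrative placeholder
  system_: "You are a terse assistant that answers in a single sentence.",
  messages: [{role: "user", content: "What does the temperature parameter do?"}]
)
```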
#temperature ⇒ Float?
Amount of randomness injected into the response.
Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0` for analytical / multiple choice, and closer to `1.0` for creative and generative tasks.
Note that even with `temperature` of `0.0`, the results will not be fully deterministic.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 189
optional :temperature, Float
#thinking ⇒ Anthropic::Models::Beta::BetaThinkingConfigEnabled, ...
Configuration for enabling Claude’s extended thinking.
When enabled, responses include `thinking` content blocks showing Claude’s thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your `max_tokens` limit.
See [extended thinking](docs.claude.com/en/docs/build-with-claude/extended-thinking) for details.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 203
optional :thinking, union: -> { Anthropic::Beta::BetaThinkingConfigParam }
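As a sketch, assuming the enabled variant takes a `budget_tokens` field as in the underlying API and that a plain hash is coerced to `BetaThinkingConfigEnabled` (model ID is a placeholder):

```ruby
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 4096,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [{role: "user", content: "Prove that the square root of 2 is irrational."}],
  # The thinking budget must be at least 1,024 tokens and counts toward max_tokens.
  thinking: {type: :enabled, budget_tokens: 2048}
)
```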
#tool_choice ⇒ Anthropic::Models::Beta::BetaToolChoiceAuto, ...
How the model should use the provided tools. The model can use a specific tool, any available tool, decide by itself, or not use tools at all.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 210
optional :tool_choice, union: -> { Anthropic::Beta::BetaToolChoice }
#tools ⇒ Array<Anthropic::Models::Beta::BetaTool, Anthropic::Models::Beta::BetaToolBash20241022, Anthropic::Models::Beta::BetaToolBash20250124, Anthropic::Models::Beta::BetaCodeExecutionTool20250522, Anthropic::Models::Beta::BetaCodeExecutionTool20250825, Anthropic::Models::Beta::BetaToolComputerUse20241022, Anthropic::Models::Beta::BetaMemoryTool20250818, Anthropic::Models::Beta::BetaToolComputerUse20250124, Anthropic::Models::Beta::BetaToolTextEditor20241022, Anthropic::Models::Beta::BetaToolComputerUse20251124, Anthropic::Models::Beta::BetaToolTextEditor20250124, Anthropic::Models::Beta::BetaToolTextEditor20250429, Anthropic::Models::Beta::BetaToolTextEditor20250728, Anthropic::Models::Beta::BetaWebSearchTool20250305, Anthropic::Models::Beta::BetaWebFetchTool20250910, Anthropic::Models::Beta::BetaToolSearchToolBm25_20251119, Anthropic::Models::Beta::BetaToolSearchToolRegex20251119, Anthropic::Models::Beta::BetaMCPToolset>?
Definitions of tools that the model may use.
If you include `tools` in your API request, the model may return `tool_use` content blocks that represent the model’s use of those tools. You can then run those tools using the tool input generated by the model and then optionally return results back to the model using `tool_result` content blocks.
There are two types of tools: **client tools** and **server tools**. The behavior described below applies to client tools. For [server tools](docs.claude.com/en/docs/agents-and-tools/tool-use/overview#server-tools), see their individual documentation as each has its own behavior (e.g., the [web search tool](docs.claude.com/en/docs/agents-and-tools/tool-use/web-search-tool)).
Each tool definition includes:
- `name`: Name of the tool.
- `description`: Optional, but strongly-recommended description of the tool.
- `input_schema`: [JSON schema](json-schema.org/draft/2020-12) for the tool `input` shape that the model will produce in `tool_use` output content blocks.
For example, if you defined `tools` as:
```json
[
  {
    "name": "get_stock_price",
    "description": "Get the current stock price for a given ticker symbol.",
    "input_schema": {
      "type": "object",
      "properties": {
        "ticker": {
          "type": "string",
          "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
        }
      },
      "required": ["ticker"]
    }
  }
]
```
And then asked the model “What’s the S&P 500 at today?”, the model might produce `tool_use` content blocks in the response like this:
```json
[
  {
    "type": "tool_use",
    "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "name": "get_stock_price",
    "input": { "ticker": "^GSPC" }
  }
]
```
You might then run your `get_stock_price` tool with `"^GSPC"` as an input, and return the following back to the model in a subsequent `user` message:
```json
[
  {
    "type": "tool_result",
    "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
    "content": "259.75 USD"
  }
]
```
Tools can be used for workflows that include running client-side tools and functions, or more generally whenever you want the model to produce a particular JSON structure of output.
See our [guide](docs.claude.com/en/docs/tool-use) for more details.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 290
optional :tools, -> { Anthropic::Internal::Type::ArrayOf[union: Anthropic::Beta::BetaToolUnion] }
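A rough Ruby sketch of the same `get_stock_price` definition, paired with a `tool_choice` that forces that tool (assuming plain hashes are coerced to the tool and tool-choice union types; model ID is a placeholder):

```ruby
stock_tool = {
  name: "get_stock_price",
  description: "Get the current stock price for a given ticker symbol.",
  input_schema: {
    type: "object",
    properties: {
      ticker: {
        type: "string",
        description: "The stock ticker symbol, e.g. AAPL for Apple Inc."
      }
    },
    required: ["ticker"]
  }
}

params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 1024,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [{role: "user", content: "What's the S&P 500 at today?"}],
  tools: [stock_tool],
  # Optional: force the named tool instead of letting the model decide.
  tool_choice: {type: :tool, name: "get_stock_price"}
)
```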
#top_k ⇒ Integer?
Only sample from the top K options for each subsequent token.
Used to remove “long tail” low probability responses. [Learn more technical details here](towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).
Recommended for advanced use cases only. You usually only need to use `temperature`.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 302
optional :top_k, Integer
#top_p ⇒ Float?
Use nucleus sampling.
In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by `top_p`. You should either alter `temperature` or `top_p`, but not both.
Recommended for advanced use cases only. You usually only need to use `temperature`.
# File 'lib/anthropic/models/beta/message_create_params.rb', line 316
optional :top_p, Float
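As a sketch of adjusting sampling (values are only illustrative; set `temperature` or `top_p`, not both):

```ruby
params = Anthropic::Models::Beta::MessageCreateParams.new(
  max_tokens: 512,
  model: "claude-sonnet-4-5", # illustrative placeholder
  messages: [{role: "user", content: "Write a haiku about the sea."}],
  temperature: 0.9 # leave top_p unset, since the two should not be combined
)
```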
Class Method Details
.values ⇒ Array<Symbol>
# File 'lib/anthropic/models/beta/message_create_params.rb', line 392
.variants ⇒ Array(Anthropic::Models::Beta::BetaContainerParams, String)
# File 'lib/anthropic/models/beta/message_create_params.rb', line 377