Class: Anthropic::Models::Message
- Inherits: Internal::Type::BaseModel
- Ancestors: Object < Internal::Type::BaseModel < Anthropic::Models::Message
- Defined in:
- lib/anthropic/models/message.rb
Instance Attribute Summary collapse
- #content ⇒ Array<Anthropic::Models::TextBlock, Anthropic::Models::ThinkingBlock, Anthropic::Models::RedactedThinkingBlock, Anthropic::Models::ToolUseBlock, Anthropic::Models::ServerToolUseBlock, Anthropic::Models::WebSearchToolResultBlock>
  Content generated by the model.
- #id ⇒ String
  Unique object identifier.
- #model ⇒ Symbol, ...
  The model that will complete your prompt. See models for additional details and options.
- #role ⇒ Symbol, :assistant
  Conversational role of the generated message.
- #stop_reason ⇒ Symbol, ...
  The reason that we stopped.
- #stop_sequence ⇒ String?
  Which custom stop sequence was generated, if any.
- #type ⇒ Symbol, :message
  Object type.
- #usage ⇒ Anthropic::Models::Usage
  Billing and rate-limit usage.
Instance Method Summary collapse
- #initialize(id:, content:, model:, stop_reason:, stop_sequence:, usage:, role: :assistant, type: :message) ⇒ void
  constructor
  Some parameter documentation has been truncated; see Message for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, inspect, #inspect, meta_info, new_coerce_state, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(id:, content:, model:, stop_reason:, stop_sequence:, usage:, role: :assistant, type: :message) ⇒ void
Some parameter documentation has been truncated; see Anthropic::Models::Message for more details.
# File 'lib/anthropic/models/message.rb', line 127
Instance Attribute Details
#content ⇒ Array<Anthropic::Models::TextBlock, Anthropic::Models::ThinkingBlock, Anthropic::Models::RedactedThinkingBlock, Anthropic::Models::ToolUseBlock, Anthropic::Models::ServerToolUseBlock, Anthropic::Models::WebSearchToolResultBlock>
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.
Example:
[{ "type": "text", "text": "Hi, I'm Claude." }]
If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.
For example, if the input messages were:
[
  { "role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun" },
  { "role": "assistant", "content": "The best answer is (" }
]
Then the response content might be:
[{ "type": "text", "text": "B)" }]
# File 'lib/anthropic/models/message.rb', line 52
required :content, -> { Anthropic::Internal::Type::ArrayOf[union: Anthropic::ContentBlock] }
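The prefill pattern above can be sketched as plain Ruby data in the shape the Messages API expects. This is an illustrative sketch, not SDK code: the hash keys mirror the JSON example, and only the documented roles and the continuation behavior come from this page.

```ruby
# Build a conversation that ends on a partial assistant turn. The final
# assistant message constrains the model's continuation.
messages = [
  { role: :user,
    content: "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun" },
  # Prefill: the response's #content continues directly from this text.
  { role: :assistant, content: "The best answer is (" }
]

# Per the example above, the returned #content might then be:
# [{ "type" => "text", "text" => "B)" }]
messages.last[:role] # => :assistant
```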
#id ⇒ String
Unique object identifier.
The format and length of IDs may change over time.
# File 'lib/anthropic/models/message.rb', line 15
required :id, String
#model ⇒ Symbol, ...
The model that will complete your prompt. See models for additional details and options.
# File 'lib/anthropic/models/message.rb', line 60
required :model, union: -> { Anthropic::Model }
#role ⇒ Symbol, :assistant
Conversational role of the generated message.
This will always be "assistant".
# File 'lib/anthropic/models/message.rb', line 68
required :role, const: :assistant
#stop_reason ⇒ Symbol, ...
The reason that we stopped.
This may be one of the following values:
- "end_turn": the model reached a natural stopping point
- "max_tokens": we exceeded the requested max_tokens or the model's maximum
- "stop_sequence": one of your provided custom stop_sequences was generated
- "tool_use": the model invoked one or more tools
- "pause_turn": we paused a long-running turn. You may provide the response back as-is in a subsequent request to let the model continue.
- "refusal": when streaming classifiers intervene to handle potential policy violations
In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
# File 'lib/anthropic/models/message.rb', line 88
required :stop_reason, enum: -> { Anthropic::StopReason }, nil?: true
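Handling each value of #stop_reason is often a case/when over the symbols. The helper name and return strings below are illustrative; only the enum symbols (and the nil-while-streaming behavior) come from the documentation above.

```ruby
# Hypothetical dispatch over the documented stop_reason values.
def describe_stop_reason(stop_reason)
  case stop_reason
  when :end_turn      then "model reached a natural stopping point"
  when :max_tokens    then "hit the requested or model token limit"
  when :stop_sequence then "matched a custom stop sequence"
  when :tool_use      then "model invoked one or more tools"
  when :pause_turn    then "turn paused; resend the response to continue"
  when :refusal       then "streaming classifier intervened"
  when nil            then "still streaming (message_start event)"
  else                     "unknown stop reason: #{stop_reason}"
  end
end

describe_stop_reason(:tool_use) # => "model invoked one or more tools"
```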
#stop_sequence ⇒ String?
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
# File 'lib/anthropic/models/message.rb', line 97
required :stop_sequence, String, nil?: true
#type ⇒ Symbol, :message
Object type.
For Messages, this is always "message".
# File 'lib/anthropic/models/message.rb', line 105
required :type, const: :message
#usage ⇒ Anthropic::Models::Usage
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.
For example, output_tokens will be non-zero, even for an empty string response from Claude.
Total input tokens in a request is the summation of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
# File 'lib/anthropic/models/message.rb', line 125
required :usage, -> { Anthropic::Usage }
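The total-input-token summation described above can be sketched with a plain Struct standing in for Anthropic::Models::Usage. The field names mirror the documented usage fields; the Struct itself and the sample counts are illustrative.

```ruby
# Stand-in for Anthropic::Models::Usage, for illustration only.
Usage = Struct.new(:input_tokens, :cache_creation_input_tokens,
                   :cache_read_input_tokens, :output_tokens,
                   keyword_init: true)

usage = Usage.new(input_tokens: 10, cache_creation_input_tokens: 200,
                  cache_read_input_tokens: 50, output_tokens: 36)

# Total input tokens = input_tokens + cache_creation_input_tokens
#                      + cache_read_input_tokens. The .to_i calls treat
# absent (nil) cache fields as zero.
total_input = usage.input_tokens +
              usage.cache_creation_input_tokens.to_i +
              usage.cache_read_input_tokens.to_i

total_input # => 260
```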