Class: Boxcars::Anthropic

Inherits:
Engine < Object
Includes:
UnifiedObservability
Defined in:
lib/boxcars/engine/anthropic.rb

Overview

An engine that uses Anthropic’s API.

Constant Summary

DEFAULT_PARAMS =

The default parameters to use when asking the engine.

{
  model: "claude-3-5-sonnet-20240620",
  max_tokens: 4096,
  temperature: 0.1
}.freeze
DEFAULT_NAME =

the default name of the engine

"Anthropic engine"
DEFAULT_DESCRIPTION =

the default description of the engine

"useful for when you need to use Anthropic AI to answer questions. " \
"You should ask targeted questions"

Instance Attribute Summary

Attributes inherited from Engine

#user_id

Instance Method Summary

Methods inherited from Engine

#extract_answer

Constructor Details

#initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], **kwargs) ⇒ Anthropic

An engine is the driver for a single tool to run.

Parameters:

  • name (String) (defaults to: DEFAULT_NAME)

    The name of the engine. Defaults to “Anthropic engine”.

  • description (String) (defaults to: DEFAULT_DESCRIPTION)

    A description of the engine. Defaults to: “useful for when you need to use Anthropic AI to answer questions. You should ask targeted questions”.

  • prompts (Array<String>) (defaults to: [])

    The prompts to use when asking the engine. Defaults to [].



# File 'lib/boxcars/engine/anthropic.rb', line 31

def initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], **kwargs)
  user_id = kwargs.delete(:user_id)
  @llm_params = DEFAULT_PARAMS.merge(kwargs)
  @prompts = prompts
  @batch_size = 20
  super(description:, name:, user_id:)
end
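
A minimal construction sketch; the name override, temperature, and user_id values are illustrative:

engine = Boxcars::Anthropic.new(
  name: "My Anthropic engine",  # overrides DEFAULT_NAME
  temperature: 0.0,             # merged into DEFAULT_PARAMS via **kwargs
  user_id: "user-123"           # removed from kwargs and passed to Engine#initialize
)
engine.llm_params[:temperature] # => 0.0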

Instance Attribute Details

#batch_size ⇒ Object (readonly)

Returns the value of attribute batch_size.



# File 'lib/boxcars/engine/anthropic.rb', line 11

def batch_size
  @batch_size
end

#llm_params ⇒ Object (readonly)

Returns the value of attribute llm_params.



# File 'lib/boxcars/engine/anthropic.rb', line 11

def llm_params
  @llm_params
end

#model_kwargs ⇒ Object (readonly)

Returns the value of attribute model_kwargs.



# File 'lib/boxcars/engine/anthropic.rb', line 11

def model_kwargs
  @model_kwargs
end

#prompts ⇒ Object (readonly)

Returns the value of attribute prompts.



# File 'lib/boxcars/engine/anthropic.rb', line 11

def prompts
  @prompts
end

Instance Method Details

#anthropic_client(anthropic_api_key: nil) ⇒ Object



# File 'lib/boxcars/engine/anthropic.rb', line 43

def anthropic_client(anthropic_api_key: nil)
  ::Anthropic::Client.new(access_token: anthropic_api_key)
end

#client(prompt:, inputs: {}, **kwargs) ⇒ Object

Get an answer from the engine.

Parameters:

  • prompt (String)

    The prompt to use when asking the engine.

  • anthropic_api_key (String)

    Optional api key to use when asking the engine. Defaults to Boxcars.configuration.anthropic_api_key.

  • kwargs (Hash)

    Additional parameters to pass to the engine if wanted.



# File 'lib/boxcars/engine/anthropic.rb', line 52

def client(prompt:, inputs: {}, **kwargs)
  start_time = Time.now
  response_data = { response_obj: nil, parsed_json: nil, success: false, error: nil, status_code: nil }
  current_params = llm_params.merge(kwargs)
  current_prompt_object = prompt.is_a?(Array) ? prompt.first : prompt
  api_request_params = nil

  begin
    api_key = Boxcars.configuration.anthropic_api_key(**kwargs)
    aclient = anthropic_client(anthropic_api_key: api_key)
    api_request_params = convert_to_anthropic(current_prompt_object.as_messages(inputs).merge(current_params))

    if Boxcars.configuration.log_prompts
      if api_request_params[:messages].length < 2 && api_request_params[:system] && !api_request_params[:system].empty?
        Boxcars.debug(">>>>>> Role: system <<<<<<\n#{api_request_params[:system]}")
      end
      Boxcars.debug(api_request_params[:messages].last(2).map do |p|
        ">>>>>> Role: #{p[:role]} <<<<<<\n#{p[:content]}"
      end.join("\n"), :cyan)
    end

    raw_response = aclient.messages(parameters: api_request_params)
    _process_anthropic_response(raw_response, response_data)
  rescue StandardError => e
    _handle_anthropic_error(e, response_data)
  ensure
    call_context = {
      start_time:,
      prompt_object: current_prompt_object,
      inputs:,
      api_request_params:,
      current_params:
    }
    _track_anthropic_observability(call_context, response_data)
  end

  _anthropic_handle_call_outcome(response_data:)
end
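
A hedged usage sketch, assuming Boxcars.configuration.anthropic_api_key is set and that the processed response is a Hash with a "completion" key (as #run and #generation_info expect):

prompt = Boxcars::Prompt.new(template: "What is the capital of France?")
response = engine.client(prompt:, inputs: {}, max_tokens: 512) # max_tokens overrides llm_params
response["completion"] # => the generated answer text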

#combine_assistant(params) ⇒ Object

Combine consecutive assistant entries and strip trailing whitespace from a final assistant message.



# File 'lib/boxcars/engine/anthropic.rb', line 210

def combine_assistant(params)
  params[:messages] = combine_assistant_entries(params[:messages])
  params[:messages].last[:content].rstrip! if params[:messages].last[:role] == :assistant
  params
end

#combine_assistant_entries(hashes) ⇒ Object

if we have multiple assistant entries in a row, we need to combine them



# File 'lib/boxcars/engine/anthropic.rb', line 217

def combine_assistant_entries(hashes)
  combined_hashes = []
  hashes.each do |hash|
    if combined_hashes.empty? || combined_hashes.last[:role] != :assistant || hash[:role] != :assistant
      combined_hashes << hash
    else
      combined_hashes.last[:content].concat("\n", hash[:content].rstrip)
    end
  end
  combined_hashes
end
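
For example, two consecutive assistant entries collapse into one (the message contents are hypothetical):

messages = [
  { role: :user,      content: "Add 2 and 2." },
  { role: :assistant, content: "Sure." },
  { role: :assistant, content: "2 + 2 = 4  " }
]
engine.combine_assistant_entries(messages)
# => [{ role: :user, content: "Add 2 and 2." },
#     { role: :assistant, content: "Sure.\n2 + 2 = 4" }]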

#conversation_model?(_model) ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/boxcars/engine/anthropic.rb', line 39

def conversation_model?(_model)
  true
end

#convert_to_anthropic(params) ⇒ Object

Convert generic parameters to Anthropic-specific ones.



# File 'lib/boxcars/engine/anthropic.rb', line 202

def convert_to_anthropic(params)
  params[:stop_sequences] = params.delete(:stop) if params.key?(:stop)
  params[:system] = params[:messages].shift[:content] if params.dig(:messages, 0, :role) == :system
  params[:messages].pop if params[:messages].last[:content].nil? || params[:messages].last[:content].strip.empty?
  combine_assistant(params)
end
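
A worked example with hypothetical parameters: :stop becomes :stop_sequences, a leading system message moves to :system, and a trailing blank message is dropped:

params = {
  model: "claude-3-5-sonnet-20240620",
  stop: ["Observation:"],
  messages: [
    { role: :system,    content: "You are a calculator." },
    { role: :user,      content: "What is 2 + 2?" },
    { role: :assistant, content: "" }
  ]
}
engine.convert_to_anthropic(params)
# => { model: "claude-3-5-sonnet-20240620",
#      messages: [{ role: :user, content: "What is 2 + 2?" }],
#      stop_sequences: ["Observation:"],
#      system: "You are a calculator." }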

#default_paramsObject

Get the default parameters for the engine.



# File 'lib/boxcars/engine/anthropic.rb', line 107

def default_params
  llm_params
end

#default_prefixesObject



# File 'lib/boxcars/engine/anthropic.rb', line 229

def default_prefixes
  { system: "Human: ", user: "Human: ", assistant: "Assistant: ", history: :history }
end

#engine_typeObject

the engine type



# File 'lib/boxcars/engine/anthropic.rb', line 161

def engine_type
  "claude"
end

#extract_model_version(model_string) ⇒ Object

Raises:

  • (ArgumentError)

    if no version number is found in the model string


# File 'lib/boxcars/engine/anthropic.rb', line 187

def extract_model_version(model_string)
  # Use a regular expression to find the version number
  match = model_string.match(/claude-(\d+)(?:-(\d+))?/)

  raise ArgumentError, "No version number found in model string: #{model_string}" unless match

  major = match[1].to_i
  minor = match[2].to_i

  # Combine major and minor versions
  major + (minor.to_f / 10)
end
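
For example, the return value combines the major and minor version digits extracted by the regular expression:

engine.extract_model_version("claude-3-5-sonnet-20240620") # => 3.5
engine.extract_model_version("claude-2")                   # => 2.0
engine.extract_model_version("gpt-4")                      # raises ArgumentError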

#generate(prompts:, stop: nil) ⇒ EngineResult

Call out to Anthropic’s endpoint with k unique prompts.

Parameters:

  • prompts (Array<String>)

    The prompts to pass into the model.

  • inputs (Array<String>)

    The inputs to substitute into the prompt.

  • stop (Array<String>) (defaults to: nil)

    Optional list of stop words to use when generating.

Returns:

  • (EngineResult)

    The resulting generations, one entry per prompt.


# File 'lib/boxcars/engine/anthropic.rb', line 136

def generate(prompts:, stop: nil)
  params = {}
  params[:stop] = stop if stop
  choices = []
  # Get the token usage from the response.
  # Includes prompt, completion, and total tokens used.
  prompts.each_slice(batch_size) do |sub_prompts|
    sub_prompts.each do |sprompts, inputs|
      response = client(prompt: sprompts, inputs:, **params)
      validate_response!(response)
      choices << response
    end
  end

  n = params.fetch(:n, 1)
  generations = []
  prompts.each_with_index do |_prompt, i|
  sub_choices = choices[i * n, n] # take the n choices belonging to this prompt
    generations.push(generation_info(sub_choices))
  end
  EngineResult.new(generations:, engine_output: { token_usage: {} })
end
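
A usage sketch: prompts is an array of [prompt, inputs] pairs, matching the destructuring in the loop above. The template substitution syntax shown is an assumption:

prompt = Boxcars::Prompt.new(template: "Summarize: %<text>s")
engine.generate(prompts: [[prompt, { text: "Ruby is a dynamic language." }]])
# => EngineResult wrapping one set of generations per prompt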

#generation_info(sub_choices) ⇒ Array<Generation>

Get generation information.

Parameters:

  • sub_choices (Array<Hash>)

    The choices to get generation info for.

Returns:

  • (Array<Generation>)

    The generation information.



# File 'lib/boxcars/engine/anthropic.rb', line 114

def generation_info(sub_choices)
  sub_choices.map do |choice|
    Generation.new(
      text: choice["completion"],
      generation_info: {
        finish_reason: choice.fetch("stop_reason", nil),
        logprobs: choice.fetch("logprobs", nil)
      }
    )
  end
end
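
For example, given a processed response hash shaped like the ones #client returns (the values are hypothetical):

sub_choices = [{ "completion" => "4", "stop_reason" => "end_turn" }]
engine.generation_info(sub_choices)
# => [Generation with text "4" and generation_info { finish_reason: "end_turn", logprobs: nil }]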

#get_num_tokens(text:) ⇒ Object

calculate the number of tokens used



# File 'lib/boxcars/engine/anthropic.rb', line 166

def get_num_tokens(text:)
  text.split.length # TODO: hook up to token counting gem
end

#max_tokens_for_prompt(prompt_text) ⇒ Integer

Calculate the maximum number of tokens possible to generate for a prompt.

Parameters:

  • prompt_text (String)

    The prompt text to use.

Returns:

  • (Integer)

    the number of tokens possible to generate.



# File 'lib/boxcars/engine/anthropic.rb', line 179

def max_tokens_for_prompt(prompt_text)
  num_tokens = get_num_tokens(text: prompt_text)

  # get max context size for model by name
  max_size = modelname_to_contextsize(model_name)
  max_size - num_tokens
end
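
With the whitespace-based count from #get_num_tokens and the fixed 100,000-token context size from #modelname_to_contextsize, the remaining budget is a simple difference:

engine.get_num_tokens(text: "What is the capital of France?") # => 6 (whitespace-split words)
engine.modelname_to_contextsize("claude-3-5-sonnet-20240620") # => 100000
# max_tokens_for_prompt therefore returns 100000 - 6 => 99994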

#modelname_to_contextsize(_modelname) ⇒ Object

lookup the context size for a model by name

Parameters:

  • modelname (String)

    The name of the model to lookup.



# File 'lib/boxcars/engine/anthropic.rb', line 172

def modelname_to_contextsize(_modelname)
  100000
end

#run(question) ⇒ Object

get an answer from the engine for a question.

Parameters:

  • question (String)

    The question to ask the engine.

  • kwargs (Hash)

    Additional parameters to pass to the engine if wanted.

Raises:

  • (Error)

    if there is no response from the API or the response contains an error


# File 'lib/boxcars/engine/anthropic.rb', line 94

def run(question, **)
  prompt = Prompt.new(template: question)
  response = client(prompt:, **)

  raise Error, "Anthropic: No response from API" unless response
  raise Error, "Anthropic: #{response['error']}" if response['error']

  answer = response['completion']
  Boxcars.debug("Answer: #{answer}", :cyan)
  answer
end
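
A usage sketch; the question and answer are hypothetical and a configured anthropic_api_key is required:

engine = Boxcars::Anthropic.new
engine.run("What is 2 + 2?") # => "4"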

#validate_response!(response, must_haves: %w[completion]) ⇒ Object

validate_response! method uses the base implementation with Anthropic-specific must_haves



# File 'lib/boxcars/engine/anthropic.rb', line 127

def validate_response!(response, must_haves: %w[completion])
  super
end