Class: Vapi::OpenAiModel

Inherits:
Object
Defined in:
lib/vapi_server_sdk/types/open_ai_model.rb

Constant Summary

OMIT =
Object.new
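
OMIT is a unique sentinel used to tell "argument not provided" apart from an explicit nil: any keyword left at OMIT is dropped from the serialized field set in #initialize. A standalone sketch of the pattern (illustrative only, not part of the SDK's public API):

SENTINEL = Object.new

# A caller that skips the keyword produces no key at all, while an explicit nil
# survives as a real value; this is the same behavior OMIT gives OpenAiModel#initialize.
def build_payload(temperature: SENTINEL)
  { temperature: temperature }.reject { |_k, v| v == SENTINEL }
end

build_payload                   #=> {} (key dropped entirely)
build_payload(temperature: nil) # keeps the key with an explicit nil value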

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT, fallback_models: OMIT, tool_strict_compatibility_mode: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil) ⇒ Vapi::OpenAiModel

Parameters:

  • messages (Array<Vapi::OpenAiMessage>) (defaults to: OMIT)

    This is the starting state for the conversation.

  • tools (Array<Vapi::OpenAiModelToolsItem>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.

  • tool_ids (Array<String>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.

  • knowledge_base (Vapi::CreateCustomKnowledgeBaseDto) (defaults to: OMIT)

    These are the options for the knowledge base.

  • knowledge_base_id (String) (defaults to: OMIT)

    This is the ID of the knowledge base the model will use.

  • model (Vapi::OpenAiModelModel)

    This is the OpenAI model that will be used. When using Vapi OpenAI or your own Azure Credentials, you have the option to specify the region for the selected model. This shouldn't be specified unless you have a specific reason to do so. Vapi will automatically find the fastest region that makes sense. This is helpful when you are required to comply with Data Residency rules. Learn more about Azure regions here azure.microsoft.com/en-us/explore/global-infrastructure/data-residency/. @default undefined

  • fallback_models (Array<Vapi::OpenAiModelFallbackModelsItem>) (defaults to: OMIT)

    These are the fallback models that will be used if the primary model fails. This shouldn’t be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.

  • tool_strict_compatibility_mode (Vapi::OpenAiModelToolStrictCompatibilityMode) (defaults to: OMIT)

    Azure OpenAI doesn't support `maxLength` right now (see the "Unsupported type-specific keywords" section of the Azure OpenAI structured outputs documentation); those keywords need to be stripped.

    • `strip-parameters-with-unsupported-validation` will strip parameters with unsupported validation.

    • `strip-unsupported-validation` will keep the parameters but strip unsupported validation.

    @default `strip-unsupported-validation`

  • temperature (Float) (defaults to: OMIT)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

  • max_tokens (Float) (defaults to: OMIT)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

  • emotion_recognition_enabled (Boolean) (defaults to: OMIT)

    This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user's emotion from text. @default false

  • num_fast_turns (Float) (defaults to: OMIT)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0

  • additional_properties (OpenStruct) (defaults to: nil)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 122

def initialize(model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT,
               fallback_models: OMIT, tool_strict_compatibility_mode: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil)
  @messages = messages if messages != OMIT
  @tools = tools if tools != OMIT
  @tool_ids = tool_ids if tool_ids != OMIT
  @knowledge_base = knowledge_base if knowledge_base != OMIT
  @knowledge_base_id = knowledge_base_id if knowledge_base_id != OMIT
  @model = model
  @fallback_models = fallback_models if fallback_models != OMIT
  @tool_strict_compatibility_mode = tool_strict_compatibility_mode if tool_strict_compatibility_mode != OMIT
  @temperature = temperature if temperature != OMIT
  @max_tokens = max_tokens if max_tokens != OMIT
  @emotion_recognition_enabled = emotion_recognition_enabled if emotion_recognition_enabled != OMIT
  @num_fast_turns = num_fast_turns if num_fast_turns != OMIT
  @additional_properties = additional_properties
  @_field_set = {
    "messages": messages,
    "tools": tools,
    "toolIds": tool_ids,
    "knowledgeBase": knowledge_base,
    "knowledgeBaseId": knowledge_base_id,
    "model": model,
    "fallbackModels": fallback_models,
    "toolStrictCompatibilityMode": tool_strict_compatibility_mode,
    "temperature": temperature,
    "maxTokens": max_tokens,
    "emotionRecognitionEnabled": emotion_recognition_enabled,
    "numFastTurns": num_fast_turns
  }.reject do |_k, v|
    v == OMIT
  end
end
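
A minimal construction sketch (not from the SDK docs): the model value "gpt-4o" and the other field choices here are illustrative assumptions; see Vapi::OpenAiModelModel for the accepted values.

require "vapi_server_sdk"

# Only the fields passed explicitly end up in @_field_set; everything left at
# the OMIT sentinel is excluded from the serialized payload entirely.
open_ai_model = Vapi::OpenAiModel.new(
  model: "gpt-4o",    # assumed enum value from Vapi::OpenAiModelModel
  temperature: 0.3,   # overrides the default of 0
  max_tokens: 250
)
open_ai_model.to_json # serializes only "model", "temperature", and "maxTokens"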

Instance Attribute Details

#additional_properties ⇒ OpenStruct (readonly)

Returns Additional properties unmapped to the current class definition.

Returns:

  • (OpenStruct)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 70

def additional_properties
  @additional_properties
end

#emotion_recognition_enabled ⇒ Boolean (readonly)

Returns This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user's emotion from text. @default false.

Returns:

  • (Boolean)

    This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user's emotion from text. @default false



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 62

def emotion_recognition_enabled
  @emotion_recognition_enabled
end

#fallback_models ⇒ Array<Vapi::OpenAiModelFallbackModelsItem> (readonly)

Returns These are the fallback models that will be used if the primary model fails. This shouldn’t be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.

Returns:

  • (Array<Vapi::OpenAiModelFallbackModelsItem>)

    These are the fallback models that will be used if the primary model fails. This shouldn’t be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 41

def fallback_models
  @fallback_models
end

#knowledge_base ⇒ Vapi::CreateCustomKnowledgeBaseDto (readonly)

Returns These are the options for the knowledge base.

Returns:

  • (Vapi::CreateCustomKnowledgeBaseDto)

    These are the options for the knowledge base.

# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 25

def knowledge_base
  @knowledge_base
end

#knowledge_base_id ⇒ String (readonly)

Returns This is the ID of the knowledge base the model will use.

Returns:

  • (String)

    This is the ID of the knowledge base the model will use.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 27

def knowledge_base_id
  @knowledge_base_id
end

#max_tokens ⇒ Float (readonly)

Returns This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

Returns:

  • (Float)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 56

def max_tokens
  @max_tokens
end

#messages ⇒ Array<Vapi::OpenAiMessage> (readonly)

Returns This is the starting state for the conversation.

Returns:

  • (Array<Vapi::OpenAiMessage>)

    This is the starting state for the conversation.

# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 15

def messages
  @messages
end

#model ⇒ Vapi::OpenAiModelModel (readonly)

Returns This is the OpenAI model that will be used. When using Vapi OpenAI or your own Azure Credentials, you have the option to specify the region for the selected model. This shouldn't be specified unless you have a specific reason to do so. Vapi will automatically find the fastest region that makes sense. This is helpful when you are required to comply with Data Residency rules. Learn more about Azure regions here azure.microsoft.com/en-us/explore/global-infrastructure/data-residency/. @default undefined.

Returns:

  • (Vapi::OpenAiModelModel)

    This is the OpenAI model that will be used. When using Vapi OpenAI or your own Azure Credentials, you have the option to specify the region for the selected model. This shouldn't be specified unless you have a specific reason to do so. Vapi will automatically find the fastest region that makes sense. This is helpful when you are required to comply with Data Residency rules. Learn more about Azure regions here azure.microsoft.com/en-us/explore/global-infrastructure/data-residency/. @default undefined



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 37

def model
  @model
end

#num_fast_turns ⇒ Float (readonly)

Returns This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0.

Returns:

  • (Float)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 68

def num_fast_turns
  @num_fast_turns
end

#temperature ⇒ Float (readonly)

Returns This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

Returns:

  • (Float)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 53

def temperature
  @temperature
end

#tool_ids ⇒ Array<String> (readonly)

Returns These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.

Returns:

  • (Array<String>)

    These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 23

def tool_ids
  @tool_ids
end

#tool_strict_compatibility_mode ⇒ Vapi::OpenAiModelToolStrictCompatibilityMode (readonly)

Returns Azure OpenAI doesn't support `maxLength` right now (see the "Unsupported type-specific keywords" section of the Azure OpenAI structured outputs documentation); those keywords need to be stripped.

  • `strip-parameters-with-unsupported-validation` will strip parameters with unsupported validation.

  • `strip-unsupported-validation` will keep the parameters but strip unsupported validation.

  @default `strip-unsupported-validation`.

Returns:

  • (Vapi::OpenAiModelToolStrictCompatibilityMode)

    Azure OpenAI doesn't support `maxLength` right now (see the "Unsupported type-specific keywords" section of the Azure OpenAI structured outputs documentation); those keywords need to be stripped.

    • `strip-parameters-with-unsupported-validation` will strip parameters with unsupported validation.

    • `strip-unsupported-validation` will keep the parameters but strip unsupported validation.

    @default `strip-unsupported-validation`



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 50

def tool_strict_compatibility_mode
  @tool_strict_compatibility_mode
end

#tools ⇒ Array<Vapi::OpenAiModelToolsItem> (readonly)

Returns These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.

Returns:

  • (Array<Vapi::OpenAiModelToolsItem>)

    These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 19

def tools
  @tools
end

Class Method Details

.from_json(json_object:) ⇒ Vapi::OpenAiModel

Deserialize a JSON object to an instance of OpenAiModel

Parameters:

  • json_object (String)

Returns:

  • (Vapi::OpenAiModel)

# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 159

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  messages = parsed_json["messages"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiMessage.from_json(json_object: item)
  end
  tools = parsed_json["tools"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiModelToolsItem.from_json(json_object: item)
  end
  tool_ids = parsed_json["toolIds"]
  if parsed_json["knowledgeBase"].nil?
    knowledge_base = nil
  else
    knowledge_base = parsed_json["knowledgeBase"].to_json
    knowledge_base = Vapi::CreateCustomKnowledgeBaseDto.from_json(json_object: knowledge_base)
  end
  knowledge_base_id = parsed_json["knowledgeBaseId"]
  model = parsed_json["model"]
  fallback_models = parsed_json["fallbackModels"]
  tool_strict_compatibility_mode = parsed_json["toolStrictCompatibilityMode"]
  temperature = parsed_json["temperature"]
  max_tokens = parsed_json["maxTokens"]
  emotion_recognition_enabled = parsed_json["emotionRecognitionEnabled"]
  num_fast_turns = parsed_json["numFastTurns"]
  new(
    messages: messages,
    tools: tools,
    tool_ids: tool_ids,
    knowledge_base: knowledge_base,
    knowledge_base_id: knowledge_base_id,
    model: model,
    fallback_models: fallback_models,
    tool_strict_compatibility_mode: tool_strict_compatibility_mode,
    temperature: temperature,
    max_tokens: max_tokens,
    emotion_recognition_enabled: emotion_recognition_enabled,
    num_fast_turns: num_fast_turns,
    additional_properties: struct
  )
end
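
A hedged deserialization sketch; the payload below is hypothetical and simply mirrors the camelCase keys read above.

json = '{"model":"gpt-4o","temperature":0.3,"maxTokens":250,"toolIds":["tool_123"]}'
open_ai_model = Vapi::OpenAiModel.from_json(json_object: json)
open_ai_model.temperature           #=> 0.3
open_ai_model.tool_ids              #=> ["tool_123"]
open_ai_model.additional_properties # OpenStruct holding the raw parsed payload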

.validate_raw(obj:) ⇒ Void

Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.

Parameters:

  • obj (Object)

Returns:

  • (Void)


# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 215

def self.validate_raw(obj:)
  obj.messages&.is_a?(Array) != false || raise("Passed value for field obj.messages is not the expected type, validation failed.")
  obj.tools&.is_a?(Array) != false || raise("Passed value for field obj.tools is not the expected type, validation failed.")
  obj.tool_ids&.is_a?(Array) != false || raise("Passed value for field obj.tool_ids is not the expected type, validation failed.")
  obj.knowledge_base.nil? || Vapi::CreateCustomKnowledgeBaseDto.validate_raw(obj: obj.knowledge_base)
  obj.knowledge_base_id&.is_a?(String) != false || raise("Passed value for field obj.knowledge_base_id is not the expected type, validation failed.")
  obj.model.is_a?(Vapi::OpenAiModelModel) != false || raise("Passed value for field obj.model is not the expected type, validation failed.")
  obj.fallback_models&.is_a?(Array) != false || raise("Passed value for field obj.fallback_models is not the expected type, validation failed.")
  obj.tool_strict_compatibility_mode&.is_a?(Vapi::OpenAiModelToolStrictCompatibilityMode) != false || raise("Passed value for field obj.tool_strict_compatibility_mode is not the expected type, validation failed.")
  obj.temperature&.is_a?(Float) != false || raise("Passed value for field obj.temperature is not the expected type, validation failed.")
  obj.max_tokens&.is_a?(Float) != false || raise("Passed value for field obj.max_tokens is not the expected type, validation failed.")
  obj.emotion_recognition_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.emotion_recognition_enabled is not the expected type, validation failed.")
  obj.num_fast_turns&.is_a?(Float) != false || raise("Passed value for field obj.num_fast_turns is not the expected type, validation failed.")
end
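
A sketch of calling validate_raw directly (assumed usage, not from the SDK docs). Whether a plain string satisfies the Vapi::OpenAiModelModel check depends on how that enum is defined, so the rescue covers both outcomes.

require "ostruct"

candidate = OpenStruct.new(model: "gpt-4o", temperature: 0.3) # hypothetical shape
begin
  Vapi::OpenAiModel.validate_raw(obj: candidate)
  puts "shape looks compatible with OpenAiModel"
rescue StandardError => e
  puts "rejected: #{e.message}"
end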

Instance Method Details

#to_json(*_args) ⇒ String

Serialize an instance of OpenAiModel to a JSON object

Returns:

  • (String)


# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 205

def to_json(*_args)
  @_field_set&.to_json
end
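
Because @_field_set is built in #initialize with OMIT-valued entries rejected, only explicitly provided fields appear in the output; unset fields are omitted rather than sent as null. A short sketch (the enum value is an assumption):

m = Vapi::OpenAiModel.new(model: "gpt-4o") # "gpt-4o" is an assumed enum value
m.to_json                                  #=> '{"model":"gpt-4o"}'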