Class: Vapi::CustomLlmModel
Inherits: Object
Defined in: lib/vapi_server_sdk/types/custom_llm_model.rb
Constant Summary

OMIT = Object.new
Instance Attribute Summary
-
#additional_properties ⇒ OpenStruct
readonly
Additional properties unmapped to the current class definition.
-
#emotion_recognition_enabled ⇒ Boolean
readonly
This determines whether we detect the user's emotion while they speak and send it as additional information to the model.
-
#knowledge_base ⇒ Vapi::CreateCustomKnowledgeBaseDto
readonly
These are the options for the knowledge base.
-
#knowledge_base_id ⇒ String
readonly
This is the ID of the knowledge base the model will use.
-
#max_tokens ⇒ Float
readonly
This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation.
-
#messages ⇒ Array<Vapi::OpenAiMessage>
readonly
This is the starting state for the conversation.
-
#metadata_send_mode ⇒ Vapi::CustomLlmModelMetadataSendMode
readonly
This determines whether metadata is sent in requests to the custom provider.
-
#model ⇒ String
readonly
This is the name of the model.
-
#num_fast_turns ⇒ Float
readonly
This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model.
-
#temperature ⇒ Float
readonly
This is the temperature that will be used for calls.
-
#timeout_seconds ⇒ Float
readonly
This sets the timeout for the connection to the custom provider without needing to stream any tokens back.
-
#tool_ids ⇒ Array<String>
readonly
These are the tools that the assistant can use during the call.
-
#tools ⇒ Array<Vapi::CustomLlmModelToolsItem>
readonly
These are the tools that the assistant can use during the call.
-
#url ⇒ String
readonly
This is the URL we'll use for the OpenAI client's `baseURL`.
Class Method Summary
-
.from_json(json_object:) ⇒ Vapi::CustomLlmModel
Deserialize a JSON object to an instance of CustomLlmModel.
-
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.
Instance Method Summary
- #initialize(url:, model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT, metadata_send_mode: OMIT, timeout_seconds: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil) ⇒ Vapi::CustomLlmModel constructor
-
#to_json(*_args) ⇒ String
Serialize an instance of CustomLlmModel to a JSON object.
Constructor Details
#initialize(url:, model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT, metadata_send_mode: OMIT, timeout_seconds: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil) ⇒ Vapi::CustomLlmModel
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 109

def initialize(url:, model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT,
               knowledge_base_id: OMIT, metadata_send_mode: OMIT, timeout_seconds: OMIT, temperature: OMIT,
               max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil)
  @messages = messages if messages != OMIT
  @tools = tools if tools != OMIT
  @tool_ids = tool_ids if tool_ids != OMIT
  @knowledge_base = knowledge_base if knowledge_base != OMIT
  @knowledge_base_id = knowledge_base_id if knowledge_base_id != OMIT
  @metadata_send_mode = metadata_send_mode if metadata_send_mode != OMIT
  @url = url
  @timeout_seconds = timeout_seconds if timeout_seconds != OMIT
  @model = model
  @temperature = temperature if temperature != OMIT
  @max_tokens = max_tokens if max_tokens != OMIT
  @emotion_recognition_enabled = emotion_recognition_enabled if emotion_recognition_enabled != OMIT
  @num_fast_turns = num_fast_turns if num_fast_turns != OMIT
  @additional_properties = additional_properties
  @_field_set = {
    "messages": messages,
    "tools": tools,
    "toolIds": tool_ids,
    "knowledgeBase": knowledge_base,
    "knowledgeBaseId": knowledge_base_id,
    "metadataSendMode": metadata_send_mode,
    "url": url,
    "timeoutSeconds": timeout_seconds,
    "model": model,
    "temperature": temperature,
    "maxTokens": max_tokens,
    "emotionRecognitionEnabled": emotion_recognition_enabled,
    "numFastTurns": num_fast_turns
  }.reject do |_k, v|
    v == OMIT
  end
end
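A minimal construction sketch (not part of the generated docs): only `url:` and `model:` are required, every other field defaults to OMIT and is dropped from the serialized field set. The URL and model name are placeholders, and the top-level require name `vapi_server_sdk` is assumed from the file path above.

require "vapi_server_sdk"

custom_llm = Vapi::CustomLlmModel.new(
  url: "https://openrouter.ai/api/v1",                  # required: base URL for the OpenAI-compatible client
  model: "cognitivecomputations/dolphin-mixtral-8x7b",  # required: model name at the custom provider
  temperature: 0.2,                                     # optional; unset fields stay OMIT
  max_tokens: 250
)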
Instance Attribute Details
#additional_properties ⇒ OpenStruct (readonly)
Returns Additional properties unmapped to the current class definition.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 63

def additional_properties
  @additional_properties
end
#emotion_recognition_enabled ⇒ Boolean (readonly)
Returns This determines whether we detect the user's emotion while they speak and send it as additional information to the model. Defaults to `false` because the model is usually good at understanding the user's emotion from text.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 55

def emotion_recognition_enabled
  @emotion_recognition_enabled
end
#knowledge_base ⇒ Vapi::CreateCustomKnowledgeBaseDto (readonly)
Returns These are the options for the knowledge base.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 23

def knowledge_base
  @knowledge_base
end
#knowledge_base_id ⇒ String (readonly)
Returns This is the ID of the knowledge base the model will use.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 25

def knowledge_base_id
  @knowledge_base_id
end
#max_tokens ⇒ Float (readonly)
Returns This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 49

def max_tokens
  @max_tokens
end
#messages ⇒ Array<Vapi::OpenAiMessage> (readonly)
Returns This is the starting state for the conversation.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 13

def messages
  @messages
end
#metadata_send_mode ⇒ Vapi::CustomLlmModelMetadataSendMode (readonly)
Returns This determines whether metadata is sent in requests to the custom provider.
- `off` will not send any metadata. The payload will look like `{ messages }`.
- `variable` will send `assistant.metadata` as a variable on the payload. The payload will look like `{ messages, metadata }`.
- `destructured` will send the `assistant.metadata` fields directly on the payload. The payload will look like `{ messages, ...metadata }`.
Further, `variable` and `destructured` will also send the `call`, `phoneNumber`, and `customer` objects in the payload. Default is `variable`. (A sketch of these payload shapes follows the accessor below.)
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 35

def metadata_send_mode
  @metadata_send_mode
end
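For illustration only, here is a rough Ruby sketch of the payload shapes described above. The message and metadata values are hypothetical, and the `call`, `phoneNumber`, and `customer` objects that `variable`/`destructured` also include are omitted for brevity.

messages = [{ role: "user", content: "Hello" }]  # hypothetical conversation state
metadata = { "plan" => "pro" }                   # hypothetical assistant.metadata

off_payload          = { messages: messages }                      # `off`
variable_payload     = { messages: messages, metadata: metadata }  # `variable`
destructured_payload = { messages: messages }.merge(metadata)      # `destructured`: metadata fields spread onto the payload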
#model ⇒ String (readonly)
Returns This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 43

def model
  @model
end
#num_fast_turns ⇒ Float (readonly)
Returns This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 61

def num_fast_turns
  @num_fast_turns
end
#temperature ⇒ Float (readonly)
Returns This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 46

def temperature
  @temperature
end
#timeout_seconds ⇒ Float (readonly)
Returns This sets the timeout for the connection to the custom provider without needing to stream any tokens back. Default is 20 seconds.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 41

def timeout_seconds
  @timeout_seconds
end
#tool_ids ⇒ Array<String> (readonly)
Returns These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 21

def tool_ids
  @tool_ids
end
#tools ⇒ Array<Vapi::CustomLlmModelToolsItem> (readonly)
Returns These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 17

def tools
  @tools
end
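As a small sketch of the `toolIds` side of this, existing tools can be referenced by ID at construction time. Transient `tools` entries are left out because the shape of `Vapi::CustomLlmModelToolsItem` is not shown in this section; the URL, model name, and IDs below are placeholders.

custom_llm = Vapi::CustomLlmModel.new(
  url: "https://example.com/v1",            # placeholder provider URL
  model: "my-custom-model",                 # placeholder model name
  tool_ids: ["tool_abc123", "tool_def456"]  # placeholder IDs of existing tools
)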
#url ⇒ String (readonly)
Returns This is the URL we'll use for the OpenAI client's `baseURL`. Ex. openrouter.ai/api/v1.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 38

def url
  @url
end
Class Method Details
.from_json(json_object:) ⇒ Vapi::CustomLlmModel
Deserialize a JSON object to an instance of CustomLlmModel
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 148

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  messages = parsed_json["messages"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiMessage.from_json(json_object: item)
  end
  tools = parsed_json["tools"]&.map do |item|
    item = item.to_json
    Vapi::CustomLlmModelToolsItem.from_json(json_object: item)
  end
  tool_ids = parsed_json["toolIds"]
  if parsed_json["knowledgeBase"].nil?
    knowledge_base = nil
  else
    knowledge_base = parsed_json["knowledgeBase"].to_json
    knowledge_base = Vapi::CreateCustomKnowledgeBaseDto.from_json(json_object: knowledge_base)
  end
  knowledge_base_id = parsed_json["knowledgeBaseId"]
  metadata_send_mode = parsed_json["metadataSendMode"]
  url = parsed_json["url"]
  timeout_seconds = parsed_json["timeoutSeconds"]
  model = parsed_json["model"]
  temperature = parsed_json["temperature"]
  max_tokens = parsed_json["maxTokens"]
  emotion_recognition_enabled = parsed_json["emotionRecognitionEnabled"]
  num_fast_turns = parsed_json["numFastTurns"]
  new(
    messages: messages,
    tools: tools,
    tool_ids: tool_ids,
    knowledge_base: knowledge_base,
    knowledge_base_id: knowledge_base_id,
    metadata_send_mode: metadata_send_mode,
    url: url,
    timeout_seconds: timeout_seconds,
    model: model,
    temperature: temperature,
    max_tokens: max_tokens,
    emotion_recognition_enabled: emotion_recognition_enabled,
    num_fast_turns: num_fast_turns,
    additional_properties: struct
  )
end
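A short deserialization sketch, assuming a JSON string shaped like the camelCase fields above; the field values are illustrative only.

json = '{"url":"https://openrouter.ai/api/v1","model":"my-custom-model","maxTokens":250}'
custom_llm = Vapi::CustomLlmModel.from_json(json_object: json)
custom_llm.model      #=> "my-custom-model"
custom_llm.max_tokens #=> 250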
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given
hash and check each field's type against the current object's property
definitions.
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 206

def self.validate_raw(obj:)
  obj.messages&.is_a?(Array) != false || raise("Passed value for field obj.messages is not the expected type, validation failed.")
  obj.tools&.is_a?(Array) != false || raise("Passed value for field obj.tools is not the expected type, validation failed.")
  obj.tool_ids&.is_a?(Array) != false || raise("Passed value for field obj.tool_ids is not the expected type, validation failed.")
  obj.knowledge_base.nil? || Vapi::CreateCustomKnowledgeBaseDto.validate_raw(obj: obj.knowledge_base)
  obj.knowledge_base_id&.is_a?(String) != false || raise("Passed value for field obj.knowledge_base_id is not the expected type, validation failed.")
  obj.metadata_send_mode&.is_a?(Vapi::CustomLlmModelMetadataSendMode) != false || raise("Passed value for field obj.metadata_send_mode is not the expected type, validation failed.")
  obj.url.is_a?(String) != false || raise("Passed value for field obj.url is not the expected type, validation failed.")
  obj.timeout_seconds&.is_a?(Float) != false || raise("Passed value for field obj.timeout_seconds is not the expected type, validation failed.")
  obj.model.is_a?(String) != false || raise("Passed value for field obj.model is not the expected type, validation failed.")
  obj.temperature&.is_a?(Float) != false || raise("Passed value for field obj.temperature is not the expected type, validation failed.")
  obj.max_tokens&.is_a?(Float) != false || raise("Passed value for field obj.max_tokens is not the expected type, validation failed.")
  obj.emotion_recognition_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.emotion_recognition_enabled is not the expected type, validation failed.")
  obj.num_fast_turns&.is_a?(Float) != false || raise("Passed value for field obj.num_fast_turns is not the expected type, validation failed.")
end
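A quick sketch of how validate_raw surfaces a type mismatch; the OpenStruct fields here are invented for the example.

require "ostruct"

bad = OpenStruct.new(url: "https://example.com/v1", model: 123) # model should be a String
begin
  Vapi::CustomLlmModel.validate_raw(obj: bad)
rescue StandardError => e
  puts e.message #=> "Passed value for field obj.model is not the expected type, validation failed."
end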
Instance Method Details
#to_json(*_args) ⇒ String
Serialize an instance of CustomLlmModel to a JSON object
# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 196

def to_json(*_args)
  @_field_set&.to_json
end
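A round-trip sketch: because unset fields stay OMIT and are rejected from the internal field set, only the fields provided at construction should appear in the output. The URL and model name are placeholders.

custom_llm = Vapi::CustomLlmModel.new(url: "https://example.com/v1", model: "my-custom-model")
custom_llm.to_json #=> '{"url":"https://example.com/v1","model":"my-custom-model"}'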