Class: Vapi::GoogleModel

Inherits:
Object
Defined in:
lib/vapi_server_sdk/types/google_model.rb

Constant Summary

OMIT = Object.new

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT, realtime_config: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil) ⇒ Vapi::GoogleModel

Parameters:

  • messages (Array<Vapi::OpenAiMessage>) (defaults to: OMIT)

    This is the starting state for the conversation.

  • tools (Array<Vapi::GoogleModelToolsItem>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.

  • tool_ids (Array<String>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.

  • knowledge_base (Vapi::CreateCustomKnowledgeBaseDto) (defaults to: OMIT)

    These are the options for the knowledge base.

  • knowledge_base_id (String) (defaults to: OMIT)

    This is the ID of the knowledge base the model will use.

  • model (Vapi::GoogleModelModel)

    This is the Google model that will be used.

  • realtime_config (Vapi::GoogleRealtimeConfig) (defaults to: OMIT)

    This is the session configuration for the Gemini Flash 2.0 Multimodal Live API. Only applicable if the model `gemini-2.0-flash-realtime-exp` is selected.

  • temperature (Float) (defaults to: OMIT)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

  • max_tokens (Float) (defaults to: OMIT)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

  • emotion_recognition_enabled (Boolean) (defaults to: OMIT)

    This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user's emotion from text. @default false

  • num_fast_turns (Float) (defaults to: OMIT)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0

  • additional_properties (OpenStruct) (defaults to: nil)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/google_model.rb', line 86

def initialize(model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT,
               realtime_config: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil)
  @messages = messages if messages != OMIT
  @tools = tools if tools != OMIT
  @tool_ids = tool_ids if tool_ids != OMIT
  @knowledge_base = knowledge_base if knowledge_base != OMIT
  @knowledge_base_id = knowledge_base_id if knowledge_base_id != OMIT
  @model = model
  @realtime_config = realtime_config if realtime_config != OMIT
  @temperature = temperature if temperature != OMIT
  @max_tokens = max_tokens if max_tokens != OMIT
  @emotion_recognition_enabled = emotion_recognition_enabled if emotion_recognition_enabled != OMIT
  @num_fast_turns = num_fast_turns if num_fast_turns != OMIT
  @additional_properties = additional_properties
  @_field_set = {
    "messages": messages,
    "tools": tools,
    "toolIds": tool_ids,
    "knowledgeBase": knowledge_base,
    "knowledgeBaseId": knowledge_base_id,
    "model": model,
    "realtimeConfig": realtime_config,
    "temperature": temperature,
    "maxTokens": max_tokens,
    "emotionRecognitionEnabled": emotion_recognition_enabled,
    "numFastTurns": num_fast_turns
  }.reject do |_k, v|
    v == OMIT
  end
end
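
A construction sketch (illustrative, not part of the generated docs): only model: is required, and every keyword left at OMIT is dropped from the serialized field set. The model identifier below is a hypothetical placeholder; check Vapi::GoogleModelModel for the values the SDK actually defines.

# Assumes the gem's top-level entry point is "vapi_server_sdk".
require "vapi_server_sdk"

google_model = Vapi::GoogleModel.new(
  model: "gemini-1.5-flash", # hypothetical identifier; see Vapi::GoogleModelModel
  temperature: 0.2,
  max_tokens: 250
)
# Keywords that stay at OMIT (tools, tool_ids, knowledge_base, ...) never enter
# @_field_set, so they are absent from the JSON produced by #to_json.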

Instance Attribute Details

#additional_properties ⇒ OpenStruct (readonly)

Returns Additional properties unmapped to the current class definition.

Returns:

  • (OpenStruct)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/google_model.rb', line 51

def additional_properties
  @additional_properties
end

#emotion_recognition_enabled ⇒ Boolean (readonly)

Returns This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user's emotion from text. @default false.

Returns:

  • (Boolean)

    This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user's emotion from text. @default false



# File 'lib/vapi_server_sdk/types/google_model.rb', line 43

def emotion_recognition_enabled
  @emotion_recognition_enabled
end

#knowledge_base ⇒ Vapi::CreateCustomKnowledgeBaseDto (readonly)

Returns These are the options for the knowledge base.

Returns:



# File 'lib/vapi_server_sdk/types/google_model.rb', line 24

def knowledge_base
  @knowledge_base
end

#knowledge_base_id ⇒ String (readonly)

Returns This is the ID of the knowledge base the model will use.

Returns:

  • (String)

    This is the ID of the knowledge base the model will use.



# File 'lib/vapi_server_sdk/types/google_model.rb', line 26

def knowledge_base_id
  @knowledge_base_id
end

#max_tokens ⇒ Float (readonly)

Returns This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

Returns:

  • (Float)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.



# File 'lib/vapi_server_sdk/types/google_model.rb', line 37

def max_tokens
  @max_tokens
end

#messages ⇒ Array<Vapi::OpenAiMessage> (readonly)

Returns This is the starting state for the conversation.

Returns:



# File 'lib/vapi_server_sdk/types/google_model.rb', line 14

def messages
  @messages
end

#model ⇒ Vapi::GoogleModelModel (readonly)

Returns This is the Google model that will be used.

Returns:



# File 'lib/vapi_server_sdk/types/google_model.rb', line 28

def model
  @model
end

#num_fast_turns ⇒ Float (readonly)

Returns This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0.

Returns:

  • (Float)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0



# File 'lib/vapi_server_sdk/types/google_model.rb', line 49

def num_fast_turns
  @num_fast_turns
end

#realtime_config ⇒ Vapi::GoogleRealtimeConfig (readonly)

Returns This is the session configuration for the Gemini Flash 2.0 Multimodal Live API. Only applicable if the model `gemini-2.0-flash-realtime-exp` is selected.

Returns:

  • (Vapi::GoogleRealtimeConfig)

    This is the session configuration for the Gemini Flash 2.0 Multimodal Live API. Only applicable if the model `gemini-2.0-flash-realtime-exp` is selected.



# File 'lib/vapi_server_sdk/types/google_model.rb', line 31

def realtime_config
  @realtime_config
end

#temperature ⇒ Float (readonly)

Returns This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

Returns:

  • (Float)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.



# File 'lib/vapi_server_sdk/types/google_model.rb', line 34

def temperature
  @temperature
end

#tool_ids ⇒ Array<String> (readonly)

Returns These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.

Returns:

  • (Array<String>)

    These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.



# File 'lib/vapi_server_sdk/types/google_model.rb', line 22

def tool_ids
  @tool_ids
end

#tools ⇒ Array<Vapi::GoogleModelToolsItem> (readonly)

Returns These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.

Returns:

  • (Array<Vapi::GoogleModelToolsItem>)

    These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.



# File 'lib/vapi_server_sdk/types/google_model.rb', line 18

def tools
  @tools
end

Class Method Details

.from_json(json_object:) ⇒ Vapi::GoogleModel

Deserialize a JSON object to an instance of GoogleModel

Parameters:

  • json_object (String)

Returns:



# File 'lib/vapi_server_sdk/types/google_model.rb', line 121

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  messages = parsed_json["messages"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiMessage.from_json(json_object: item)
  end
  tools = parsed_json["tools"]&.map do |item|
    item = item.to_json
    Vapi::GoogleModelToolsItem.from_json(json_object: item)
  end
  tool_ids = parsed_json["toolIds"]
  if parsed_json["knowledgeBase"].nil?
    knowledge_base = nil
  else
    knowledge_base = parsed_json["knowledgeBase"].to_json
    knowledge_base = Vapi::CreateCustomKnowledgeBaseDto.from_json(json_object: knowledge_base)
  end
  knowledge_base_id = parsed_json["knowledgeBaseId"]
  model = parsed_json["model"]
  if parsed_json["realtimeConfig"].nil?
    realtime_config = nil
  else
    realtime_config = parsed_json["realtimeConfig"].to_json
    realtime_config = Vapi::GoogleRealtimeConfig.from_json(json_object: realtime_config)
  end
  temperature = parsed_json["temperature"]
  max_tokens = parsed_json["maxTokens"]
  emotion_recognition_enabled = parsed_json["emotionRecognitionEnabled"]
  num_fast_turns = parsed_json["numFastTurns"]
  new(
    messages: messages,
    tools: tools,
    tool_ids: tool_ids,
    knowledge_base: knowledge_base,
    knowledge_base_id: knowledge_base_id,
    model: model,
    realtime_config: realtime_config,
    temperature: temperature,
    max_tokens: max_tokens,
    emotion_recognition_enabled: emotion_recognition_enabled,
    num_fast_turns: num_fast_turns,
    additional_properties: struct
  )
end
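
A deserialization sketch, assuming the camelCase keys shown in the source above; the payload is illustrative only:

payload = '{"model":"gemini-1.5-flash","temperature":0.2,"maxTokens":250}'
google_model = Vapi::GoogleModel.from_json(json_object: payload)

google_model.model      #=> "gemini-1.5-flash"
google_model.max_tokens #=> 250
google_model.tools      #=> nil (key absent, so the attribute is never assigned)

The full OpenStruct parse is also passed through as additional_properties, so keys that are not mapped to attributes remain reachable via #additional_properties.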

.validate_raw(obj:) ⇒ Void

Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.

Parameters:

  • obj (Object)

Returns:

  • (Void)


# File 'lib/vapi_server_sdk/types/google_model.rb', line 180

def self.validate_raw(obj:)
  obj.messages&.is_a?(Array) != false || raise("Passed value for field obj.messages is not the expected type, validation failed.")
  obj.tools&.is_a?(Array) != false || raise("Passed value for field obj.tools is not the expected type, validation failed.")
  obj.tool_ids&.is_a?(Array) != false || raise("Passed value for field obj.tool_ids is not the expected type, validation failed.")
  obj.knowledge_base.nil? || Vapi::CreateCustomKnowledgeBaseDto.validate_raw(obj: obj.knowledge_base)
  obj.knowledge_base_id&.is_a?(String) != false || raise("Passed value for field obj.knowledge_base_id is not the expected type, validation failed.")
  obj.model.is_a?(Vapi::GoogleModelModel) != false || raise("Passed value for field obj.model is not the expected type, validation failed.")
  obj.realtime_config.nil? || Vapi::GoogleRealtimeConfig.validate_raw(obj: obj.realtime_config)
  obj.temperature&.is_a?(Float) != false || raise("Passed value for field obj.temperature is not the expected type, validation failed.")
  obj.max_tokens&.is_a?(Float) != false || raise("Passed value for field obj.max_tokens is not the expected type, validation failed.")
  obj.emotion_recognition_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.emotion_recognition_enabled is not the expected type, validation failed.")
  obj.num_fast_turns&.is_a?(Float) != false || raise("Passed value for field obj.num_fast_turns is not the expected type, validation failed.")
end
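
A guard sketch: validate_raw raises on the first field whose type does not match, which is how callers pick a branch during union parsing. The values below are placeholders; whether a plain String satisfies the Vapi::GoogleModelModel check depends on how that type is defined.

require "ostruct"

raw = OpenStruct.new(model: "gemini-1.5-flash", temperature: 0.2)
begin
  Vapi::GoogleModel.validate_raw(obj: raw)
rescue StandardError => e
  # The raised message names the first offending field, e.g.
  # "Passed value for field obj.model is not the expected type, validation failed."
  warn e.message
end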

Instance Method Details

#to_json(*_args) ⇒ String

Serialize an instance of GoogleModel to a JSON object

Returns:

  • (String)


# File 'lib/vapi_server_sdk/types/google_model.rb', line 170

def to_json(*_args)
  @_field_set&.to_json
end
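
A round-trip sketch (the model value is a placeholder): because to_json serializes @_field_set, only keys that were explicitly provided appear in the output.

google_model = Vapi::GoogleModel.new(model: "gemini-1.5-flash", temperature: 0.2)
json = google_model.to_json
# json contains only the "model" and "temperature" keys; omitted fields such as
# "maxTokens" or "tools" are absent rather than null.

copy = Vapi::GoogleModel.from_json(json_object: json)
copy.temperature #=> 0.2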