Class: Vapi::Assistant

Inherits:
Object
Defined in:
lib/vapi_server_sdk/types/assistant.rb

Constant Summary

OMIT = Object.new

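The `OMIT` constant is a bare `Object.new` used as a sentinel. A minimal sketch (illustrative, not the SDK's source) of why that works: the instance is equal only to itself, so the constructor can tell an explicitly passed `nil` apart from an argument that was never supplied.

```ruby
# A unique sentinel: equal only to itself, so it can never collide with a
# value a caller might legitimately pass (including nil or false).
OMIT = Object.new

# Hypothetical helper showing the three distinguishable cases.
def describe(voice: OMIT)
  return "omitted" if voice == OMIT
  voice.nil? ? "explicit nil" : "set"
end

describe                  # => "omitted"
describe(voice: nil)      # => "explicit nil"
describe(voice: "jenny")  # => "set"
```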
Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(id:, org_id:, created_at:, updated_at:, transcriber: OMIT, model: OMIT, voice: OMIT, first_message: OMIT, first_message_interruptions_enabled: OMIT, first_message_mode: OMIT, voicemail_detection: OMIT, client_messages: OMIT, server_messages: OMIT, silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT, background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, observability_plan: OMIT, credentials: OMIT, hooks: OMIT, name: OMIT, voicemail_message: OMIT, end_call_message: OMIT, end_call_phrases: OMIT, compliance_plan: OMIT, metadata: OMIT, background_speech_denoising_plan: OMIT, analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, server: OMIT, keypad_input_plan: OMIT, additional_properties: nil) ⇒ Vapi::Assistant

Parameters:

  • transcriber (Vapi::AssistantTranscriber) (defaults to: OMIT)

    These are the options for the assistant’s transcriber.

  • model (Vapi::AssistantModel) (defaults to: OMIT)

    These are the options for the assistant’s LLM.

  • voice (Vapi::AssistantVoice) (defaults to: OMIT)

    These are the options for the assistant’s voice.

  • first_message (String) (defaults to: OMIT)

    This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.

  • first_message_interruptions_enabled (Boolean) (defaults to: OMIT)
  • first_message_mode (Vapi::AssistantFirstMessageMode) (defaults to: OMIT)

    This is the mode for the first message. Default is `assistant-speaks-first`. Use:

    • `assistant-speaks-first` to have the assistant speak first.

    • `assistant-waits-for-user` to have the assistant wait for the user to speak first.

    • `assistant-speaks-first-with-model-generated-message` to have the assistant speak first with a message generated by the model based on the conversation state (`assistant.model.messages` at call start, `call.messages` at squad transfer points). @default `assistant-speaks-first`

  • voicemail_detection (Vapi::AssistantVoicemailDetection) (defaults to: OMIT)

    These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using `model.tools`. This uses Twilio’s built-in detection, while the VoicemailTool relies on the model to detect whether a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio’s built-in detection is enabled while VoicemailTool is not.

  • client_messages (Array<Vapi::AssistantClientMessagesItem>) (defaults to: OMIT)

    These are the messages that will be sent to your Client SDKs. Default is update,transcript,tool-calls,user-interrupted,voice-input,workflow.node.started. You can check the shape of the messages in ClientMessage schema.

  • server_messages (Array<Vapi::AssistantServerMessagesItem>) (defaults to: OMIT)

    These are the messages that will be sent to your Server URL. Default is h-update,status-update,tool-calls,transfer-destination-request,user-interrupted. You can check the shape of the messages in ServerMessage schema.

  • silence_timeout_seconds (Float) (defaults to: OMIT)

    How many seconds of silence to wait before ending the call. Defaults to 30. @default 30

  • max_duration_seconds (Float) (defaults to: OMIT)

    This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. @default 600 (10 minutes)

  • background_sound (Vapi::AssistantBackgroundSound) (defaults to: OMIT)

    This is the background sound in the call. Default for phone calls is `office` and default for web calls is `off`. You can also provide a custom sound by providing a URL to an audio file.

  • background_denoising_enabled (Boolean) (defaults to: OMIT)

    This enables filtering of noise and background speech while the user is talking. Default is `false` while in beta. @default false

  • model_output_in_messages_enabled (Boolean) (defaults to: OMIT)

    This determines whether the model’s output is used in conversation history rather than the transcription of the assistant’s speech. Default is `false` while in beta. @default false

  • transport_configurations (Array<Vapi::TransportConfigurationTwilio>) (defaults to: OMIT)

    These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.

  • observability_plan (Vapi::LangfuseObservabilityPlan) (defaults to: OMIT)

    This is the plan for observability of assistant’s calls. Currently, only Langfuse is supported.

  • credentials (Array<Vapi::AssistantCredentialsItem>) (defaults to: OMIT)

    These are dynamic credentials that will be used for the assistant’s calls. By default, all the credentials are available for use in the call, but you can supplement them with additional credentials using this. Dynamic credentials override existing credentials.

  • hooks (Array<Vapi::AssistantHooksItem>) (defaults to: OMIT)

    This is a set of actions that will be performed on certain events.

  • name (String) (defaults to: OMIT)

    This is the name of the assistant. This is required when you want to transfer between assistants in a call.

  • voicemail_message (String) (defaults to: OMIT)

    This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.

  • end_call_message (String) (defaults to: OMIT)

    This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.

  • end_call_phrases (Array<String>) (defaults to: OMIT)

    This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.

  • compliance_plan (Vapi::CompliancePlan) (defaults to: OMIT)
  • metadata (Hash{String => Object}) (defaults to: OMIT)

    This is for metadata you want to store on the assistant.

  • background_speech_denoising_plan (Vapi::BackgroundSpeechDenoisingPlan) (defaults to: OMIT)

    This enables filtering of noise and background speech while the user is talking. Features:

    • Smart denoising using Krisp

    • Fourier denoising

    Smart denoising can be combined with or used independently of Fourier denoising. Order of precedence:

    • Smart denoising

    • Fourier denoising

  • analysis_plan (Vapi::AnalysisPlan) (defaults to: OMIT)

    This is the plan for analysis of the assistant’s calls. Stored in `call.analysis`.

  • artifact_plan (Vapi::ArtifactPlan) (defaults to: OMIT)

    This is the plan for artifacts generated during the assistant’s calls. Stored in `call.artifact`.

  • message_plan (Vapi::MessagePlan) (defaults to: OMIT)

    This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`. Note: `firstMessage`, `voicemailMessage`, and `endCallMessage` are currently at the root level. They will be moved to `messagePlan` in the future, but will remain backwards compatible.

  • start_speaking_plan (Vapi::StartSpeakingPlan) (defaults to: OMIT)

    This is the plan for when the assistant should start talking. You should configure this if you’re running into these issues:

    • The assistant is too slow to start talking after the customer is done speaking.

    • The assistant is too fast to start talking after the customer is done speaking.

    • The assistant is so fast that it’s actually interrupting the customer.

  • stop_speaking_plan (Vapi::StopSpeakingPlan) (defaults to: OMIT)

    This is the plan for when the assistant should stop talking on customer interruption. You should configure this if you’re running into these issues:

    • The assistant is too slow to recognize the customer’s interruption.

    • The assistant is too fast to recognize the customer’s interruption.

    • The assistant is getting interrupted by phrases that are just acknowledgments.

    • The assistant is getting interrupted by background noises.

    • The assistant is not properly stopping – it starts talking right after getting interrupted.

  • monitor_plan (Vapi::MonitorPlan) (defaults to: OMIT)

    This is the plan for real-time monitoring of the assistant’s calls. Usage:

    • To enable live listening of the assistant’s calls, set `monitorPlan.listenEnabled` to `true`.

    • To enable live control of the assistant’s calls, set `monitorPlan.controlEnabled` to `true`.

  • credential_ids (Array<String>) (defaults to: OMIT)

    These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.

  • server (Vapi::Server) (defaults to: OMIT)

    This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema. The order of precedence is:

    1. assistant.server.url

    2. phoneNumber.serverUrl

    3. org.serverUrl

  • keypad_input_plan (Vapi::KeypadInputPlan) (defaults to: OMIT)
  • id (String)

    This is the unique identifier for the assistant.

  • org_id (String)

    This is the unique identifier for the org that this assistant belongs to.

  • created_at (DateTime)

    This is the ISO 8601 date-time string of when the assistant was created.

  • updated_at (DateTime)

    This is the ISO 8601 date-time string of when the assistant was last updated.

  • additional_properties (OpenStruct) (defaults to: nil)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/assistant.rb', line 319

def initialize(id:, org_id:, created_at:, updated_at:, transcriber: OMIT, model: OMIT, voice: OMIT, first_message: OMIT,
               first_message_interruptions_enabled: OMIT, first_message_mode: OMIT, voicemail_detection: OMIT, client_messages: OMIT, server_messages: OMIT, silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT, background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, observability_plan: OMIT, credentials: OMIT, hooks: OMIT, name: OMIT, voicemail_message: OMIT, end_call_message: OMIT, end_call_phrases: OMIT, compliance_plan: OMIT, metadata: OMIT, background_speech_denoising_plan: OMIT, analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, server: OMIT, keypad_input_plan: OMIT, additional_properties: nil)
  @transcriber = transcriber if transcriber != OMIT
  @model = model if model != OMIT
  @voice = voice if voice != OMIT
  @first_message = first_message if first_message != OMIT
  if first_message_interruptions_enabled != OMIT
    @first_message_interruptions_enabled = first_message_interruptions_enabled
  end
  @first_message_mode = first_message_mode if first_message_mode != OMIT
  @voicemail_detection = voicemail_detection if voicemail_detection != OMIT
  @client_messages = client_messages if client_messages != OMIT
  @server_messages = server_messages if server_messages != OMIT
  @silence_timeout_seconds = silence_timeout_seconds if silence_timeout_seconds != OMIT
  @max_duration_seconds = max_duration_seconds if max_duration_seconds != OMIT
  @background_sound = background_sound if background_sound != OMIT
  @background_denoising_enabled = background_denoising_enabled if background_denoising_enabled != OMIT
  @model_output_in_messages_enabled = model_output_in_messages_enabled if model_output_in_messages_enabled != OMIT
  @transport_configurations = transport_configurations if transport_configurations != OMIT
  @observability_plan = observability_plan if observability_plan != OMIT
  @credentials = credentials if credentials != OMIT
  @hooks = hooks if hooks != OMIT
  @name = name if name != OMIT
  @voicemail_message = voicemail_message if voicemail_message != OMIT
  @end_call_message = end_call_message if end_call_message != OMIT
  @end_call_phrases = end_call_phrases if end_call_phrases != OMIT
  @compliance_plan = compliance_plan if compliance_plan != OMIT
  @metadata = metadata if metadata != OMIT
  @background_speech_denoising_plan = background_speech_denoising_plan if background_speech_denoising_plan != OMIT
  @analysis_plan = analysis_plan if analysis_plan != OMIT
  @artifact_plan = artifact_plan if artifact_plan != OMIT
  @message_plan = message_plan if message_plan != OMIT
  @start_speaking_plan = start_speaking_plan if start_speaking_plan != OMIT
  @stop_speaking_plan = stop_speaking_plan if stop_speaking_plan != OMIT
  @monitor_plan = monitor_plan if monitor_plan != OMIT
  @credential_ids = credential_ids if credential_ids != OMIT
  @server = server if server != OMIT
  @keypad_input_plan = keypad_input_plan if keypad_input_plan != OMIT
  @id = id
  @org_id = org_id
  @created_at = created_at
  @updated_at = updated_at
  @additional_properties = additional_properties
  @_field_set = {
    "transcriber": transcriber,
    "model": model,
    "voice": voice,
    "firstMessage": first_message,
    "firstMessageInterruptionsEnabled": first_message_interruptions_enabled,
    "firstMessageMode": first_message_mode,
    "voicemailDetection": voicemail_detection,
    "clientMessages": client_messages,
    "serverMessages": server_messages,
    "silenceTimeoutSeconds": silence_timeout_seconds,
    "maxDurationSeconds": max_duration_seconds,
    "backgroundSound": background_sound,
    "backgroundDenoisingEnabled": background_denoising_enabled,
    "modelOutputInMessagesEnabled": model_output_in_messages_enabled,
    "transportConfigurations": transport_configurations,
    "observabilityPlan": observability_plan,
    "credentials": credentials,
    "hooks": hooks,
    "name": name,
    "voicemailMessage": voicemail_message,
    "endCallMessage": end_call_message,
    "endCallPhrases": end_call_phrases,
    "compliancePlan": compliance_plan,
    "metadata": ,
    "backgroundSpeechDenoisingPlan": background_speech_denoising_plan,
    "analysisPlan": analysis_plan,
    "artifactPlan": artifact_plan,
    "messagePlan": message_plan,
    "startSpeakingPlan": start_speaking_plan,
    "stopSpeakingPlan": stop_speaking_plan,
    "monitorPlan": monitor_plan,
    "credentialIds": credential_ids,
    "server": server,
    "keypadInputPlan": keypad_input_plan,
    "id": id,
    "orgId": org_id,
    "createdAt": created_at,
    "updatedAt": updated_at
  }.reject do |_k, v|
    v == OMIT
  end
end
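The constructor's field-set logic can be condensed as follows (an illustrative re-implementation with invented field values, not the SDK source): required fields always appear in the serialized hash, while optional keywords still set to the sentinel are rejected before encoding.

```ruby
SENTINEL = Object.new  # stands in for the class's OMIT constant

# Illustrative reduction of the constructor body: build the camelCase field
# set, then drop every entry whose value is still the sentinel.
def assistant_field_set(id:, org_id:, name: SENTINEL, metadata: SENTINEL)
  {
    "id": id,
    "orgId": org_id,
    "name": name,
    "metadata": metadata
  }.reject { |_k, v| v == SENTINEL }
end

assistant_field_set(id: "asst_1", org_id: "org_1", name: "Support Bot")
# => { id: "asst_1", orgId: "org_1", name: "Support Bot" }
```

Note that an explicit `nil` survives this pruning: only the sentinel itself is rejected, which is why `OMIT` is an object rather than `nil`.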

Instance Attribute Details

#additional_properties ⇒ OpenStruct (readonly)

Returns Additional properties unmapped to the current class definition.

Returns:

  • (OpenStruct)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/assistant.rb', line 190

def additional_properties
  @additional_properties
end

#analysis_plan ⇒ Vapi::AnalysisPlan (readonly)

Returns This is the plan for analysis of the assistant’s calls. Stored in `call.analysis`.

Returns:

  • (Vapi::AnalysisPlan)

    This is the plan for analysis of the assistant’s calls. Stored in `call.analysis`.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 133

def analysis_plan
  @analysis_plan
end

#artifact_plan ⇒ Vapi::ArtifactPlan (readonly)

Returns This is the plan for artifacts generated during the assistant’s calls. Stored in `call.artifact`.

Returns:

  • (Vapi::ArtifactPlan)

    This is the plan for artifacts generated during the assistant’s calls. Stored in `call.artifact`.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 136

def artifact_plan
  @artifact_plan
end

#background_denoising_enabled ⇒ Boolean (readonly)

Returns This enables filtering of noise and background speech while the user is talking. Default is `false` while in beta. @default false.

Returns:

  • (Boolean)

    This enables filtering of noise and background speech while the user is talking. Default is `false` while in beta. @default false



# File 'lib/vapi_server_sdk/types/assistant.rb', line 85

def background_denoising_enabled
  @background_denoising_enabled
end

#background_sound ⇒ Vapi::AssistantBackgroundSound (readonly)

Returns This is the background sound in the call. Default for phone calls is `office` and default for web calls is `off`. You can also provide a custom sound by providing a URL to an audio file.

Returns:

  • (Vapi::AssistantBackgroundSound)

    This is the background sound in the call. Default for phone calls is `office` and default for web calls is `off`. You can also provide a custom sound by providing a URL to an audio file.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 81

def background_sound
  @background_sound
end

#background_speech_denoising_plan ⇒ Vapi::BackgroundSpeechDenoisingPlan (readonly)

Returns This enables filtering of noise and background speech while the user is talking. Features:

  • Smart denoising using Krisp

  • Fourier denoising

Smart denoising can be combined with or used independently of Fourier denoising. Order of precedence:

  • Smart denoising

  • Fourier denoising.

Returns:

  • (Vapi::BackgroundSpeechDenoisingPlan)

    This enables filtering of noise and background speech while the user is talking. Features:

    • Smart denoising using Krisp

    • Fourier denoising

    Smart denoising can be combined with or used independently of Fourier denoising. Order of precedence:

    • Smart denoising

    • Fourier denoising



# File 'lib/vapi_server_sdk/types/assistant.rb', line 131

def background_speech_denoising_plan
  @background_speech_denoising_plan
end

#client_messages ⇒ Array<Vapi::AssistantClientMessagesItem> (readonly)

Returns These are the messages that will be sent to your Client SDKs. Default is update,transcript,tool-calls,user-interrupted,voice-input,workflow.node.started. You can check the shape of the messages in ClientMessage schema.

Returns:

  • (Array<Vapi::AssistantClientMessagesItem>)

    These are the messages that will be sent to your Client SDKs. Default is update,transcript,tool-calls,user-interrupted,voice-input,workflow.node.started. You can check the shape of the messages in ClientMessage schema.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 66

def client_messages
  @client_messages
end

#compliance_plan ⇒ Vapi::CompliancePlan (readonly)



# File 'lib/vapi_server_sdk/types/assistant.rb', line 120

def compliance_plan
  @compliance_plan
end

#created_at ⇒ DateTime (readonly)

Returns This is the ISO 8601 date-time string of when the assistant was created.

Returns:

  • (DateTime)

    This is the ISO 8601 date-time string of when the assistant was created.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 186

def created_at
  @created_at
end

#credential_ids ⇒ Array<String> (readonly)

Returns These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.

Returns:

  • (Array<String>)

    These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 171

def credential_ids
  @credential_ids
end

#credentials ⇒ Array<Vapi::AssistantCredentialsItem> (readonly)

Returns These are dynamic credentials that will be used for the assistant’s calls. By default, all the credentials are available for use in the call, but you can supplement them with additional credentials using this. Dynamic credentials override existing credentials.

Returns:

  • (Array<Vapi::AssistantCredentialsItem>)

    These are dynamic credentials that will be used for the assistant’s calls. By default, all the credentials are available for use in the call, but you can supplement them with additional credentials using this. Dynamic credentials override existing credentials.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 103

def credentials
  @credentials
end

#end_call_message ⇒ String (readonly)

Returns This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.

Returns:

  • (String)

    This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 115

def end_call_message
  @end_call_message
end

#end_call_phrases ⇒ Array<String> (readonly)

Returns This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.

Returns:

  • (Array<String>)

    This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 118

def end_call_phrases
  @end_call_phrases
end

#first_message ⇒ String (readonly)

Returns This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.

Returns:

  • (String)

    This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, the assistant will wait for the user to speak and use the model to respond once they speak.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 41

def first_message
  @first_message
end

#first_message_interruptions_enabled ⇒ Boolean (readonly)

Returns:

  • (Boolean)


# File 'lib/vapi_server_sdk/types/assistant.rb', line 43

def first_message_interruptions_enabled
  @first_message_interruptions_enabled
end

#first_message_mode ⇒ Vapi::AssistantFirstMessageMode (readonly)

Returns This is the mode for the first message. Default is `assistant-speaks-first`. Use:

  • `assistant-speaks-first` to have the assistant speak first.

  • `assistant-waits-for-user` to have the assistant wait for the user to speak first.

  • `assistant-speaks-first-with-model-generated-message` to have the assistant speak first with a message generated by the model based on the conversation state (`assistant.model.messages` at call start, `call.messages` at squad transfer points). @default `assistant-speaks-first`.

Returns:

  • (Vapi::AssistantFirstMessageMode)

    This is the mode for the first message. Default is `assistant-speaks-first`. Use:

    • `assistant-speaks-first` to have the assistant speak first.

    • `assistant-waits-for-user` to have the assistant wait for the user to speak first.

    • `assistant-speaks-first-with-model-generated-message` to have the assistant speak first with a message generated by the model based on the conversation state (`assistant.model.messages` at call start, `call.messages` at squad transfer points). @default `assistant-speaks-first`



# File 'lib/vapi_server_sdk/types/assistant.rb', line 54

def first_message_mode
  @first_message_mode
end

#hooks ⇒ Array<Vapi::AssistantHooksItem> (readonly)

Returns This is a set of actions that will be performed on certain events.

Returns:



# File 'lib/vapi_server_sdk/types/assistant.rb', line 105

def hooks
  @hooks
end

#id ⇒ String (readonly)

Returns This is the unique identifier for the assistant.

Returns:

  • (String)

    This is the unique identifier for the assistant.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 182

def id
  @id
end

#keypad_input_plan ⇒ Vapi::KeypadInputPlan (readonly)



# File 'lib/vapi_server_sdk/types/assistant.rb', line 180

def keypad_input_plan
  @keypad_input_plan
end

#max_duration_seconds ⇒ Float (readonly)

Returns This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. @default 600 (10 minutes).

Returns:

  • (Float)

    This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. @default 600 (10 minutes)



# File 'lib/vapi_server_sdk/types/assistant.rb', line 77

def max_duration_seconds
  @max_duration_seconds
end

#message_plan ⇒ Vapi::MessagePlan (readonly)

Returns This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`. Note: `firstMessage`, `voicemailMessage`, and `endCallMessage` are currently at the root level. They will be moved to `messagePlan` in the future, but will remain backwards compatible.

Returns:

  • (Vapi::MessagePlan)

    This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`. Note: `firstMessage`, `voicemailMessage`, and `endCallMessage` are currently at the root level. They will be moved to `messagePlan` in the future, but will remain backwards compatible.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 142

def message_plan
  @message_plan
end

#metadata ⇒ Hash{String => Object} (readonly)

Returns This is for metadata you want to store on the assistant.

Returns:

  • (Hash{String => Object})

    This is for metadata you want to store on the assistant.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 122

def metadata
  @metadata
end

#model ⇒ Vapi::AssistantModel (readonly)

Returns These are the options for the assistant’s LLM.

Returns:



# File 'lib/vapi_server_sdk/types/assistant.rb', line 34

def model
  @model
end

#model_output_in_messages_enabled ⇒ Boolean (readonly)

Returns This determines whether the model’s output is used in conversation history rather than the transcription of the assistant’s speech. Default is `false` while in beta. @default false.

Returns:

  • (Boolean)

    This determines whether the model’s output is used in conversation history rather than the transcription of the assistant’s speech. Default is `false` while in beta. @default false



# File 'lib/vapi_server_sdk/types/assistant.rb', line 90

def model_output_in_messages_enabled
  @model_output_in_messages_enabled
end

#monitor_plan ⇒ Vapi::MonitorPlan (readonly)

Returns This is the plan for real-time monitoring of the assistant’s calls. Usage:

  • To enable live listening of the assistant’s calls, set `monitorPlan.listenEnabled` to `true`.

  • To enable live control of the assistant’s calls, set `monitorPlan.controlEnabled` to `true`.

Returns:

  • (Vapi::MonitorPlan)

    This is the plan for real-time monitoring of the assistant’s calls. Usage:

    • To enable live listening of the assistant’s calls, set `monitorPlan.listenEnabled` to `true`.

    • To enable live control of the assistant’s calls, set `monitorPlan.controlEnabled` to `true`.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 167

def monitor_plan
  @monitor_plan
end

#name ⇒ String (readonly)

Returns This is the name of the assistant. This is required when you want to transfer between assistants in a call.

Returns:

  • (String)

    This is the name of the assistant. This is required when you want to transfer between assistants in a call.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 108

def name
  @name
end

#observability_plan ⇒ Vapi::LangfuseObservabilityPlan (readonly)

Returns This is the plan for observability of assistant’s calls. Currently, only Langfuse is supported.

Returns:



# File 'lib/vapi_server_sdk/types/assistant.rb', line 98

def observability_plan
  @observability_plan
end

#org_id ⇒ String (readonly)

Returns This is the unique identifier for the org that this assistant belongs to.

Returns:

  • (String)

    This is the unique identifier for the org that this assistant belongs to.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 184

def org_id
  @org_id
end

#server ⇒ Vapi::Server (readonly)

Returns This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema. The order of precedence is:

  1. assistant.server.url

  2. phoneNumber.serverUrl

  3. org.serverUrl.

Returns:

  • (Vapi::Server)

    This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema. The order of precedence is:

    1. assistant.server.url

    2. phoneNumber.serverUrl

    3. org.serverUrl



# File 'lib/vapi_server_sdk/types/assistant.rb', line 178

def server
  @server
end
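The webhook-URL precedence above reduces to a first-non-nil lookup. A sketch (the helper name and arguments are illustrative, not part of the SDK):

```ruby
# First configured URL wins: assistant.server.url, then phoneNumber.serverUrl,
# then org.serverUrl.
def resolve_webhook_url(assistant_server_url, phone_number_server_url, org_server_url)
  assistant_server_url || phone_number_server_url || org_server_url
end

resolve_webhook_url(nil, "https://phone.example.com/hook", "https://org.example.com/hook")
# => "https://phone.example.com/hook"
```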

#server_messages ⇒ Array<Vapi::AssistantServerMessagesItem> (readonly)

Returns These are the messages that will be sent to your Server URL. Default is h-update,status-update,tool-calls,transfer-destination-request,user-interrupted. You can check the shape of the messages in ServerMessage schema.

Returns:

  • (Array<Vapi::AssistantServerMessagesItem>)

    These are the messages that will be sent to your Server URL. Default is h-update,status-update,tool-calls,transfer-destination-request,user-interrupted. You can check the shape of the messages in ServerMessage schema.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 70

def server_messages
  @server_messages
end

#silence_timeout_seconds ⇒ Float (readonly)

Returns How many seconds of silence to wait before ending the call. Defaults to 30. @default 30.

Returns:

  • (Float)

    How many seconds of silence to wait before ending the call. Defaults to 30. @default 30



# File 'lib/vapi_server_sdk/types/assistant.rb', line 73

def silence_timeout_seconds
  @silence_timeout_seconds
end

#start_speaking_plan ⇒ Vapi::StartSpeakingPlan (readonly)

Returns This is the plan for when the assistant should start talking. You should configure this if you’re running into these issues:

  • The assistant is too slow to start talking after the customer is done speaking.

  • The assistant is too fast to start talking after the customer is done speaking.

  • The assistant is so fast that it’s actually interrupting the customer.

Returns:

  • (Vapi::StartSpeakingPlan)

    This is the plan for when the assistant should start talking. You should configure this if you’re running into these issues:

    • The assistant is too slow to start talking after the customer is done speaking.

    • The assistant is too fast to start talking after the customer is done speaking.

    • The assistant is so fast that it’s actually interrupting the customer.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 150

def start_speaking_plan
  @start_speaking_plan
end

#stop_speaking_plan ⇒ Vapi::StopSpeakingPlan (readonly)

Returns This is the plan for when the assistant should stop talking on customer interruption. You should configure this if you’re running into these issues:

  • The assistant is too slow to recognize the customer’s interruption.

  • The assistant is too fast to recognize the customer’s interruption.

  • The assistant is getting interrupted by phrases that are just acknowledgments.

  • The assistant is getting interrupted by background noises.

  • The assistant is not properly stopping – it starts talking right after getting interrupted.

Returns:

  • (Vapi::StopSpeakingPlan)

    This is the plan for when the assistant should stop talking on customer interruption. You should configure this if you’re running into these issues:

    • The assistant is too slow to recognize the customer’s interruption.

    • The assistant is too fast to recognize the customer’s interruption.

    • The assistant is getting interrupted by phrases that are just acknowledgments.

    • The assistant is getting interrupted by background noises.

    • The assistant is not properly stopping – it starts talking right after getting interrupted.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 160

def stop_speaking_plan
  @stop_speaking_plan
end

#transcriber ⇒ Vapi::AssistantTranscriber (readonly)

Returns These are the options for the assistant’s transcriber.

Returns:



# File 'lib/vapi_server_sdk/types/assistant.rb', line 32

def transcriber
  @transcriber
end

#transport_configurations ⇒ Array&lt;Vapi::TransportConfigurationTwilio&gt; (readonly)

Returns These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.

Returns:

  • (Array<Vapi::TransportConfigurationTwilio>)

    These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 95

def transport_configurations
  @transport_configurations
end

#updated_at ⇒ DateTime (readonly)

Returns This is the ISO 8601 date-time string of when the assistant was last updated.

Returns:

  • (DateTime)

    This is the ISO 8601 date-time string of when the assistant was last updated.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 188

def updated_at
  @updated_at
end

#voice ⇒ Vapi::AssistantVoice (readonly)

Returns These are the options for the assistant’s voice.

Returns:



# File 'lib/vapi_server_sdk/types/assistant.rb', line 36

def voice
  @voice
end

#voicemail_detection ⇒ Vapi::AssistantVoicemailDetection (readonly)

Returns These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using the model.tools=[VoicemailTool]. This uses Twilio’s built-in detection while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio built-in detection is enabled while VoicemailTool is not.

Returns:

  • (Vapi::AssistantVoicemailDetection)

    These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using the model.tools=[VoicemailTool]. This uses Twilio’s built-in detection while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio built-in detection is enabled while VoicemailTool is not.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 62

def voicemail_detection
  @voicemail_detection
end

#voicemail_message ⇒ String (readonly)

Returns This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.

Returns:

  • (String)

    This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.



# File 'lib/vapi_server_sdk/types/assistant.rb', line 112

def voicemail_message
  @voicemail_message
end

Class Method Details

.from_json(json_object:) ⇒ Vapi::Assistant

Deserialize a JSON object to an instance of Assistant

Parameters:

  • json_object (String)

Returns:



# File 'lib/vapi_server_sdk/types/assistant.rb', line 410

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  if parsed_json["transcriber"].nil?
    transcriber = nil
  else
    transcriber = parsed_json["transcriber"].to_json
    transcriber = Vapi::AssistantTranscriber.from_json(json_object: transcriber)
  end
  if parsed_json["model"].nil?
    model = nil
  else
    model = parsed_json["model"].to_json
    model = Vapi::AssistantModel.from_json(json_object: model)
  end
  if parsed_json["voice"].nil?
    voice = nil
  else
    voice = parsed_json["voice"].to_json
    voice = Vapi::AssistantVoice.from_json(json_object: voice)
  end
  first_message = parsed_json["firstMessage"]
  first_message_interruptions_enabled = parsed_json["firstMessageInterruptionsEnabled"]
  first_message_mode = parsed_json["firstMessageMode"]
  if parsed_json["voicemailDetection"].nil?
    voicemail_detection = nil
  else
    voicemail_detection = parsed_json["voicemailDetection"].to_json
    voicemail_detection = Vapi::AssistantVoicemailDetection.from_json(json_object: voicemail_detection)
  end
  client_messages = parsed_json["clientMessages"]
  server_messages = parsed_json["serverMessages"]
  silence_timeout_seconds = parsed_json["silenceTimeoutSeconds"]
  max_duration_seconds = parsed_json["maxDurationSeconds"]
  if parsed_json["backgroundSound"].nil?
    background_sound = nil
  else
    background_sound = parsed_json["backgroundSound"].to_json
    background_sound = Vapi::AssistantBackgroundSound.from_json(json_object: background_sound)
  end
  background_denoising_enabled = parsed_json["backgroundDenoisingEnabled"]
  model_output_in_messages_enabled = parsed_json["modelOutputInMessagesEnabled"]
  transport_configurations = parsed_json["transportConfigurations"]&.map do |item|
    item = item.to_json
    Vapi::TransportConfigurationTwilio.from_json(json_object: item)
  end
  if parsed_json["observabilityPlan"].nil?
    observability_plan = nil
  else
    observability_plan = parsed_json["observabilityPlan"].to_json
    observability_plan = Vapi::LangfuseObservabilityPlan.from_json(json_object: observability_plan)
  end
  credentials = parsed_json["credentials"]&.map do |item|
    item = item.to_json
    Vapi::AssistantCredentialsItem.from_json(json_object: item)
  end
  hooks = parsed_json["hooks"]&.map do |item|
    item = item.to_json
    Vapi::AssistantHooksItem.from_json(json_object: item)
  end
  name = parsed_json["name"]
  voicemail_message = parsed_json["voicemailMessage"]
  end_call_message = parsed_json["endCallMessage"]
  end_call_phrases = parsed_json["endCallPhrases"]
  if parsed_json["compliancePlan"].nil?
    compliance_plan = nil
  else
    compliance_plan = parsed_json["compliancePlan"].to_json
    compliance_plan = Vapi::CompliancePlan.from_json(json_object: compliance_plan)
  end
  metadata = parsed_json["metadata"]
  if parsed_json["backgroundSpeechDenoisingPlan"].nil?
    background_speech_denoising_plan = nil
  else
    background_speech_denoising_plan = parsed_json["backgroundSpeechDenoisingPlan"].to_json
    background_speech_denoising_plan = Vapi::BackgroundSpeechDenoisingPlan.from_json(json_object: background_speech_denoising_plan)
  end
  if parsed_json["analysisPlan"].nil?
    analysis_plan = nil
  else
    analysis_plan = parsed_json["analysisPlan"].to_json
    analysis_plan = Vapi::AnalysisPlan.from_json(json_object: analysis_plan)
  end
  if parsed_json["artifactPlan"].nil?
    artifact_plan = nil
  else
    artifact_plan = parsed_json["artifactPlan"].to_json
    artifact_plan = Vapi::ArtifactPlan.from_json(json_object: artifact_plan)
  end
  if parsed_json["messagePlan"].nil?
    message_plan = nil
  else
    message_plan = parsed_json["messagePlan"].to_json
    message_plan = Vapi::MessagePlan.from_json(json_object: message_plan)
  end
  if parsed_json["startSpeakingPlan"].nil?
    start_speaking_plan = nil
  else
    start_speaking_plan = parsed_json["startSpeakingPlan"].to_json
    start_speaking_plan = Vapi::StartSpeakingPlan.from_json(json_object: start_speaking_plan)
  end
  if parsed_json["stopSpeakingPlan"].nil?
    stop_speaking_plan = nil
  else
    stop_speaking_plan = parsed_json["stopSpeakingPlan"].to_json
    stop_speaking_plan = Vapi::StopSpeakingPlan.from_json(json_object: stop_speaking_plan)
  end
  if parsed_json["monitorPlan"].nil?
    monitor_plan = nil
  else
    monitor_plan = parsed_json["monitorPlan"].to_json
    monitor_plan = Vapi::MonitorPlan.from_json(json_object: monitor_plan)
  end
  credential_ids = parsed_json["credentialIds"]
  if parsed_json["server"].nil?
    server = nil
  else
    server = parsed_json["server"].to_json
    server = Vapi::Server.from_json(json_object: server)
  end
  if parsed_json["keypadInputPlan"].nil?
    keypad_input_plan = nil
  else
    keypad_input_plan = parsed_json["keypadInputPlan"].to_json
    keypad_input_plan = Vapi::KeypadInputPlan.from_json(json_object: keypad_input_plan)
  end
  id = parsed_json["id"]
  org_id = parsed_json["orgId"]
  created_at = (DateTime.parse(parsed_json["createdAt"]) unless parsed_json["createdAt"].nil?)
  updated_at = (DateTime.parse(parsed_json["updatedAt"]) unless parsed_json["updatedAt"].nil?)
  new(
    transcriber: transcriber,
    model: model,
    voice: voice,
    first_message: first_message,
    first_message_interruptions_enabled: first_message_interruptions_enabled,
    first_message_mode: first_message_mode,
    voicemail_detection: voicemail_detection,
    client_messages: client_messages,
    server_messages: server_messages,
    silence_timeout_seconds: silence_timeout_seconds,
    max_duration_seconds: max_duration_seconds,
    background_sound: background_sound,
    background_denoising_enabled: background_denoising_enabled,
    model_output_in_messages_enabled: model_output_in_messages_enabled,
    transport_configurations: transport_configurations,
    observability_plan: observability_plan,
    credentials: credentials,
    hooks: hooks,
    name: name,
    voicemail_message: voicemail_message,
    end_call_message: end_call_message,
    end_call_phrases: end_call_phrases,
    compliance_plan: compliance_plan,
    metadata: metadata,
    background_speech_denoising_plan: background_speech_denoising_plan,
    analysis_plan: analysis_plan,
    artifact_plan: artifact_plan,
    message_plan: message_plan,
    start_speaking_plan: start_speaking_plan,
    stop_speaking_plan: stop_speaking_plan,
    monitor_plan: monitor_plan,
    credential_ids: credential_ids,
    server: server,
    keypad_input_plan: keypad_input_plan,
    id: id,
    org_id: org_id,
    created_at: created_at,
    updated_at: updated_at,
    additional_properties: struct
  )
end
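Note the pattern `from_json` applies to every nested field: each nested hash is round-tripped through `to_json` so that the nested type's own `from_json` can parse it, and `nil` fields are passed through untouched. A simplified sketch of that pattern in plain Ruby (`MiniVoice` and `parse_assistant` are illustrative stand-ins, not the real SDK classes):

```ruby
require "json"
require "ostruct"

# Stand-in for a nested SDK type such as Vapi::AssistantVoice (illustrative only).
class MiniVoice
  attr_reader :provider

  def initialize(provider:)
    @provider = provider
  end

  def self.from_json(json_object:)
    parsed = JSON.parse(json_object)
    new(provider: parsed["provider"])
  end
end

# The generated pattern: parse once into an OpenStruct (kept around as
# additional_properties) and once into a plain hash, then re-serialize each
# nested hash so the nested type can deserialize it itself.
def parse_assistant(json_object)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  voice =
    if parsed_json["voice"].nil?
      nil
    else
      MiniVoice.from_json(json_object: parsed_json["voice"].to_json)
    end
  { id: parsed_json["id"], voice: voice, additional_properties: struct }
end

result = parse_assistant('{"id":"asst_123","voice":{"provider":"11labs"}}')
puts result[:voice].provider # => 11labs
```

Re-serializing each sub-hash looks wasteful, but it keeps every generated type's `from_json` interface uniform: each type only ever has to parse a JSON string.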

.validate_raw(obj:) ⇒ Void

Leveraged for union-type generation, validate_raw attempts to parse the given

hash and checks each field's type against the current object's property
definitions.

Parameters:

  • obj (Object)

Returns:

  • (Void)


# File 'lib/vapi_server_sdk/types/assistant.rb', line 596

def self.validate_raw(obj:)
  obj.transcriber.nil? || Vapi::AssistantTranscriber.validate_raw(obj: obj.transcriber)
  obj.model.nil? || Vapi::AssistantModel.validate_raw(obj: obj.model)
  obj.voice.nil? || Vapi::AssistantVoice.validate_raw(obj: obj.voice)
  obj.first_message&.is_a?(String) != false || raise("Passed value for field obj.first_message is not the expected type, validation failed.")
  obj.first_message_interruptions_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.first_message_interruptions_enabled is not the expected type, validation failed.")
  obj.first_message_mode&.is_a?(Vapi::AssistantFirstMessageMode) != false || raise("Passed value for field obj.first_message_mode is not the expected type, validation failed.")
  obj.voicemail_detection.nil? || Vapi::AssistantVoicemailDetection.validate_raw(obj: obj.voicemail_detection)
  obj.client_messages&.is_a?(Array) != false || raise("Passed value for field obj.client_messages is not the expected type, validation failed.")
  obj.server_messages&.is_a?(Array) != false || raise("Passed value for field obj.server_messages is not the expected type, validation failed.")
  obj.silence_timeout_seconds&.is_a?(Float) != false || raise("Passed value for field obj.silence_timeout_seconds is not the expected type, validation failed.")
  obj.max_duration_seconds&.is_a?(Float) != false || raise("Passed value for field obj.max_duration_seconds is not the expected type, validation failed.")
  obj.background_sound.nil? || Vapi::AssistantBackgroundSound.validate_raw(obj: obj.background_sound)
  obj.background_denoising_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.background_denoising_enabled is not the expected type, validation failed.")
  obj.model_output_in_messages_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.model_output_in_messages_enabled is not the expected type, validation failed.")
  obj.transport_configurations&.is_a?(Array) != false || raise("Passed value for field obj.transport_configurations is not the expected type, validation failed.")
  obj.observability_plan.nil? || Vapi::LangfuseObservabilityPlan.validate_raw(obj: obj.observability_plan)
  obj.credentials&.is_a?(Array) != false || raise("Passed value for field obj.credentials is not the expected type, validation failed.")
  obj.hooks&.is_a?(Array) != false || raise("Passed value for field obj.hooks is not the expected type, validation failed.")
  obj.name&.is_a?(String) != false || raise("Passed value for field obj.name is not the expected type, validation failed.")
  obj.voicemail_message&.is_a?(String) != false || raise("Passed value for field obj.voicemail_message is not the expected type, validation failed.")
  obj.end_call_message&.is_a?(String) != false || raise("Passed value for field obj.end_call_message is not the expected type, validation failed.")
  obj.end_call_phrases&.is_a?(Array) != false || raise("Passed value for field obj.end_call_phrases is not the expected type, validation failed.")
  obj.compliance_plan.nil? || Vapi::CompliancePlan.validate_raw(obj: obj.compliance_plan)
  obj.metadata&.is_a?(Hash) != false || raise("Passed value for field obj.metadata is not the expected type, validation failed.")
  obj.background_speech_denoising_plan.nil? || Vapi::BackgroundSpeechDenoisingPlan.validate_raw(obj: obj.background_speech_denoising_plan)
  obj.analysis_plan.nil? || Vapi::AnalysisPlan.validate_raw(obj: obj.analysis_plan)
  obj.artifact_plan.nil? || Vapi::ArtifactPlan.validate_raw(obj: obj.artifact_plan)
  obj.message_plan.nil? || Vapi::MessagePlan.validate_raw(obj: obj.message_plan)
  obj.start_speaking_plan.nil? || Vapi::StartSpeakingPlan.validate_raw(obj: obj.start_speaking_plan)
  obj.stop_speaking_plan.nil? || Vapi::StopSpeakingPlan.validate_raw(obj: obj.stop_speaking_plan)
  obj.monitor_plan.nil? || Vapi::MonitorPlan.validate_raw(obj: obj.monitor_plan)
  obj.credential_ids&.is_a?(Array) != false || raise("Passed value for field obj.credential_ids is not the expected type, validation failed.")
  obj.server.nil? || Vapi::Server.validate_raw(obj: obj.server)
  obj.keypad_input_plan.nil? || Vapi::KeypadInputPlan.validate_raw(obj: obj.keypad_input_plan)
  obj.id.is_a?(String) != false || raise("Passed value for field obj.id is not the expected type, validation failed.")
  obj.org_id.is_a?(String) != false || raise("Passed value for field obj.org_id is not the expected type, validation failed.")
  obj.created_at.is_a?(DateTime) != false || raise("Passed value for field obj.created_at is not the expected type, validation failed.")
  obj.updated_at.is_a?(DateTime) != false || raise("Passed value for field obj.updated_at is not the expected type, validation failed.")
end
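The optional-field checks above rely on a Ruby idiom: `value&.is_a?(Klass) != false || raise(...)`. When the field is omitted, `nil&.is_a?` safe-navigates to `nil`, and `nil != false` is true, so the check passes; only a present value of the wrong type yields `false != false` and triggers the raise. A minimal sketch of that idiom (the `check_optional` helper is hypothetical, not part of the SDK):

```ruby
# Demonstrates the nil-tolerant type check used by validate_raw:
# `value&.is_a?(klass) != false` is true when value is nil (field omitted)
# or of the expected type, and false only for a present, wrong-typed value.
def check_optional(value, klass, field)
  value&.is_a?(klass) != false ||
    raise("Passed value for field #{field} is not the expected type, validation failed.")
end

check_optional(nil, String, "obj.name")           # omitted field: no error
check_optional("Support bot", String, "obj.name") # correct type: no error

begin
  check_optional(42, String, "obj.name")          # wrong type: raises
rescue RuntimeError => e
  puts e.message
end
```

Required fields such as `id` drop the `&.` so that `nil` also fails the check instead of being silently accepted.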

Instance Method Details

#to_json(*_args) ⇒ String

Serialize an instance of Assistant to a JSON object

Returns:

  • (String)


# File 'lib/vapi_server_sdk/types/assistant.rb', line 586

def to_json(*_args)
  @_field_set&.to_json
end
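`to_json` delegates to `@_field_set`, the hash the constructor builds from every parameter that was not left at the `OMIT` sentinel, so omitted fields never appear as JSON `null`s. A minimal sketch of this pattern, assuming a constructor that filters on the sentinel (`MiniAssistant` is an illustrative stand-in, not the real class):

```ruby
require "json"

# Sketch of the serialization pattern used by generated SDK types: the
# constructor stores only non-omitted fields in @_field_set, and #to_json
# simply serializes that hash.
class MiniAssistant
  OMIT = Object.new

  def initialize(id:, name: OMIT)
    @id = id
    @name = name
    # Drop any field still set to the OMIT sentinel (identity comparison).
    @_field_set = { "id" => id, "name" => name }.reject { |_k, v| v == OMIT }
  end

  def to_json(*_args)
    @_field_set&.to_json
  end
end

puts MiniAssistant.new(id: "asst_123").to_json                # => {"id":"asst_123"}
puts MiniAssistant.new(id: "asst_123", name: "Sales").to_json # => {"id":"asst_123","name":"Sales"}
```

Using a fresh `Object.new` as the sentinel distinguishes "caller passed nil" from "caller passed nothing", which a plain `nil` default could not.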