Class: Vapi::CreateAssistantDto

Inherits: Object

Defined in: lib/vapi_server_sdk/types/create_assistant_dto.rb
Constant Summary

OMIT = Object.new
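OMIT is a sentinel object used to distinguish fields that were never provided from fields explicitly set to nil: anything still equal to OMIT is dropped from the serialized field set. A minimal sketch (the require path vapi_server_sdk is assumed from the gem layout; the field values are illustrative):

  require "vapi_server_sdk"

  # Fields left at OMIT never reach the serialized payload, so "not provided"
  # stays distinguishable from an explicit nil.
  Vapi::CreateAssistantDto.new(name: "support").to_json
  # => '{"name":"support"}'                         (omitted fields are dropped)
  Vapi::CreateAssistantDto.new(name: "support", end_call_message: nil).to_json
  # => '{"name":"support","endCallMessage":null}'   (an explicit nil is kept and serialized as null)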
Instance Attribute Summary

- #additional_properties ⇒ OpenStruct readonly: Additional properties unmapped to the current class definition.
- #analysis_plan ⇒ Vapi::AnalysisPlan readonly: This is the plan for analysis of assistant’s calls.
- #artifact_plan ⇒ Vapi::ArtifactPlan readonly: This is the plan for artifacts generated during assistant’s calls.
- #background_denoising_enabled ⇒ Boolean readonly: This enables filtering of noise and background speech while the user is talking.
- #background_sound ⇒ Vapi::CreateAssistantDtoBackgroundSound readonly: This is the background sound in the call.
- #background_speech_denoising_plan ⇒ Vapi::BackgroundSpeechDenoisingPlan readonly: This enables filtering of noise and background speech while the user is talking.
- #client_messages ⇒ Array<Vapi::CreateAssistantDtoClientMessagesItem> readonly: These are the messages that will be sent to your Client SDKs.
- #compliance_plan ⇒ Vapi::CompliancePlan readonly
- #credential_ids ⇒ Array<String> readonly: These are the credentials that will be used for the assistant calls.
- #credentials ⇒ Array<Vapi::CreateAssistantDtoCredentialsItem> readonly: These are dynamic credentials that will be used for the assistant calls.
- #end_call_message ⇒ String readonly: This is the message that the assistant will say if it ends the call.
- #end_call_phrases ⇒ Array<String> readonly: This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up.
- #first_message ⇒ String readonly: This is the first message that the assistant will say.
- #first_message_interruptions_enabled ⇒ Boolean readonly
- #first_message_mode ⇒ Vapi::CreateAssistantDtoFirstMessageMode readonly: This is the mode for the first message.
- #hooks ⇒ Array<Vapi::CreateAssistantDtoHooksItem> readonly: This is a set of actions that will be performed on certain events.
- #keypad_input_plan ⇒ Vapi::KeypadInputPlan readonly
- #max_duration_seconds ⇒ Float readonly: This is the maximum number of seconds that the call will last.
- #message_plan ⇒ Vapi::MessagePlan readonly: This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`.
- #metadata ⇒ Hash{String => Object} readonly: This is for metadata you want to store on the assistant.
- #model ⇒ Vapi::CreateAssistantDtoModel readonly: These are the options for the assistant’s LLM.
- #model_output_in_messages_enabled ⇒ Boolean readonly: This determines whether the model’s output is used in conversation history rather than the transcription of assistant’s speech.
- #monitor_plan ⇒ Vapi::MonitorPlan readonly: This is the plan for real-time monitoring of the assistant’s calls.
- #name ⇒ String readonly: This is the name of the assistant.
- #observability_plan ⇒ Vapi::LangfuseObservabilityPlan readonly: This is the plan for observability of assistant’s calls.
- #server ⇒ Vapi::Server readonly: This is where Vapi will send webhooks.
- #server_messages ⇒ Array<Vapi::CreateAssistantDtoServerMessagesItem> readonly: These are the messages that will be sent to your Server URL.
- #silence_timeout_seconds ⇒ Float readonly: How many seconds of silence to wait before ending the call.
- #start_speaking_plan ⇒ Vapi::StartSpeakingPlan readonly: This is the plan for when the assistant should start talking.
- #stop_speaking_plan ⇒ Vapi::StopSpeakingPlan readonly: This is the plan for when the assistant should stop talking on customer interruption.
- #transcriber ⇒ Vapi::CreateAssistantDtoTranscriber readonly: These are the options for the assistant’s transcriber.
- #transport_configurations ⇒ Array<Vapi::TransportConfigurationTwilio> readonly: These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio.
- #voice ⇒ Vapi::CreateAssistantDtoVoice readonly: These are the options for the assistant’s voice.
- #voicemail_detection ⇒ Vapi::CreateAssistantDtoVoicemailDetection readonly: These are the settings to configure or disable voicemail detection.
- #voicemail_message ⇒ String readonly: This is the message that the assistant will say if the call is forwarded to voicemail.
Class Method Summary

- .from_json(json_object:) ⇒ Vapi::CreateAssistantDto: Deserialize a JSON object to an instance of CreateAssistantDto.
- .validate_raw(obj:) ⇒ Void: Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field’s type against the current object’s property definitions.
Instance Method Summary
- #initialize(transcriber: OMIT, model: OMIT, voice: OMIT, first_message: OMIT, first_message_interruptions_enabled: OMIT, first_message_mode: OMIT, voicemail_detection: OMIT, client_messages: OMIT, server_messages: OMIT, silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT, background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, observability_plan: OMIT, credentials: OMIT, hooks: OMIT, name: OMIT, voicemail_message: OMIT, end_call_message: OMIT, end_call_phrases: OMIT, compliance_plan: OMIT, metadata: OMIT, background_speech_denoising_plan: OMIT, analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, server: OMIT, keypad_input_plan: OMIT, additional_properties: nil) ⇒ Vapi::CreateAssistantDto constructor
- #to_json(*_args) ⇒ String: Serialize an instance of CreateAssistantDto to a JSON object.
Constructor Details
#initialize(transcriber: OMIT, model: OMIT, voice: OMIT, first_message: OMIT, first_message_interruptions_enabled: OMIT, first_message_mode: OMIT, voicemail_detection: OMIT, client_messages: OMIT, server_messages: OMIT, silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT, background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, observability_plan: OMIT, credentials: OMIT, hooks: OMIT, name: OMIT, voicemail_message: OMIT, end_call_message: OMIT, end_call_phrases: OMIT, compliance_plan: OMIT, metadata: OMIT, background_speech_denoising_plan: OMIT, analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, server: OMIT, keypad_input_plan: OMIT, additional_properties: nil) ⇒ Vapi::CreateAssistantDto
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 306

def initialize(transcriber: OMIT, model: OMIT, voice: OMIT, first_message: OMIT,
               first_message_interruptions_enabled: OMIT, first_message_mode: OMIT,
               voicemail_detection: OMIT, client_messages: OMIT, server_messages: OMIT,
               silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT,
               background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT,
               transport_configurations: OMIT, observability_plan: OMIT, credentials: OMIT,
               hooks: OMIT, name: OMIT, voicemail_message: OMIT, end_call_message: OMIT,
               end_call_phrases: OMIT, compliance_plan: OMIT, metadata: OMIT,
               background_speech_denoising_plan: OMIT, analysis_plan: OMIT, artifact_plan: OMIT,
               message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT,
               monitor_plan: OMIT, credential_ids: OMIT, server: OMIT, keypad_input_plan: OMIT,
               additional_properties: nil)
  @transcriber = transcriber if transcriber != OMIT
  @model = model if model != OMIT
  @voice = voice if voice != OMIT
  @first_message = first_message if first_message != OMIT
  if first_message_interruptions_enabled != OMIT
    @first_message_interruptions_enabled = first_message_interruptions_enabled
  end
  @first_message_mode = first_message_mode if first_message_mode != OMIT
  @voicemail_detection = voicemail_detection if voicemail_detection != OMIT
  @client_messages = client_messages if client_messages != OMIT
  @server_messages = server_messages if server_messages != OMIT
  @silence_timeout_seconds = silence_timeout_seconds if silence_timeout_seconds != OMIT
  @max_duration_seconds = max_duration_seconds if max_duration_seconds != OMIT
  @background_sound = background_sound if background_sound != OMIT
  @background_denoising_enabled = background_denoising_enabled if background_denoising_enabled != OMIT
  @model_output_in_messages_enabled = model_output_in_messages_enabled if model_output_in_messages_enabled != OMIT
  @transport_configurations = transport_configurations if transport_configurations != OMIT
  @observability_plan = observability_plan if observability_plan != OMIT
  @credentials = credentials if credentials != OMIT
  @hooks = hooks if hooks != OMIT
  @name = name if name != OMIT
  @voicemail_message = voicemail_message if voicemail_message != OMIT
  @end_call_message = end_call_message if end_call_message != OMIT
  @end_call_phrases = end_call_phrases if end_call_phrases != OMIT
  @compliance_plan = compliance_plan if compliance_plan != OMIT
  @metadata = metadata if metadata != OMIT
  @background_speech_denoising_plan = background_speech_denoising_plan if background_speech_denoising_plan != OMIT
  @analysis_plan = analysis_plan if analysis_plan != OMIT
  @artifact_plan = artifact_plan if artifact_plan != OMIT
  @message_plan = message_plan if message_plan != OMIT
  @start_speaking_plan = start_speaking_plan if start_speaking_plan != OMIT
  @stop_speaking_plan = stop_speaking_plan if stop_speaking_plan != OMIT
  @monitor_plan = monitor_plan if monitor_plan != OMIT
  @credential_ids = credential_ids if credential_ids != OMIT
  @server = server if server != OMIT
  @keypad_input_plan = keypad_input_plan if keypad_input_plan != OMIT
  @additional_properties = additional_properties
  @_field_set = {
    "transcriber": transcriber, "model": model, "voice": voice, "firstMessage": first_message,
    "firstMessageInterruptionsEnabled": first_message_interruptions_enabled,
    "firstMessageMode": first_message_mode, "voicemailDetection": voicemail_detection,
    "clientMessages": client_messages, "serverMessages": server_messages,
    "silenceTimeoutSeconds": silence_timeout_seconds, "maxDurationSeconds": max_duration_seconds,
    "backgroundSound": background_sound, "backgroundDenoisingEnabled": background_denoising_enabled,
    "modelOutputInMessagesEnabled": model_output_in_messages_enabled,
    "transportConfigurations": transport_configurations, "observabilityPlan": observability_plan,
    "credentials": credentials, "hooks": hooks, "name": name,
    "voicemailMessage": voicemail_message, "endCallMessage": end_call_message,
    "endCallPhrases": end_call_phrases, "compliancePlan": compliance_plan, "metadata": metadata,
    "backgroundSpeechDenoisingPlan": background_speech_denoising_plan,
    "analysisPlan": analysis_plan, "artifactPlan": artifact_plan, "messagePlan": message_plan,
    "startSpeakingPlan": start_speaking_plan, "stopSpeakingPlan": stop_speaking_plan,
    "monitorPlan": monitor_plan, "credentialIds": credential_ids, "server": server,
    "keypadInputPlan": keypad_input_plan
  }.reject do |_k, v|
    v == OMIT
  end
end
Instance Attribute Details
#additional_properties ⇒ OpenStruct (readonly)
Returns Additional properties unmapped to the current class definition.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 181

def additional_properties
  @additional_properties
end
#analysis_plan ⇒ Vapi::AnalysisPlan (readonly)
Returns This is the plan for analysis of assistant’s calls. Stored in `call.analysis`.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 132

def analysis_plan
  @analysis_plan
end
#artifact_plan ⇒ Vapi::ArtifactPlan (readonly)
Returns This is the plan for artifacts generated during assistant’s calls. Stored in `call.artifact`.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 135

def artifact_plan
  @artifact_plan
end
#background_denoising_enabled ⇒ Boolean (readonly)
Returns This enables filtering of noise and background speech while the user is talking. Default `false` while in beta. @default false.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 84

def background_denoising_enabled
  @background_denoising_enabled
end
#background_sound ⇒ Vapi::CreateAssistantDtoBackgroundSound (readonly)
Returns This is the background sound in the call. Default for phone calls is ‘office’ and default for web calls is ‘off’. You can also provide a custom sound by providing a URL to an audio file.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 80

def background_sound
  @background_sound
end
#background_speech_denoising_plan ⇒ Vapi::BackgroundSpeechDenoisingPlan (readonly)
Returns This enables filtering of noise and background speech while the user is talking. Features:
- Smart denoising using Krisp
- Fourier denoising
Smart denoising can be combined with or used independently of Fourier denoising. Order of precedence:
- Smart denoising
- Fourier denoising

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 130

def background_speech_denoising_plan
  @background_speech_denoising_plan
end
#client_messages ⇒ Array<Vapi::CreateAssistantDtoClientMessagesItem> (readonly)
Returns These are the messages that will be sent to your Client SDKs. Default is update,transcript,tool-calls,user-interrupted,voice-input,workflow.node.started. You can check the shape of the messages in ClientMessage schema.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 65

def client_messages
  @client_messages
end
#compliance_plan ⇒ Vapi::CompliancePlan (readonly)
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 119

def compliance_plan
  @compliance_plan
end
#credential_ids ⇒ Array<String> (readonly)
Returns These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 170

def credential_ids
  @credential_ids
end
#credentials ⇒ Array<Vapi::CreateAssistantDtoCredentialsItem> (readonly)
Returns These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call, but you can supplement additional credentials using this. Dynamic credentials override existing credentials.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 102

def credentials
  @credentials
end
#end_call_message ⇒ String (readonly)
Returns This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 114

def end_call_message
  @end_call_message
end
#end_call_phrases ⇒ Array<String> (readonly)
Returns This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 117

def end_call_phrases
  @end_call_phrases
end
#first_message ⇒ String (readonly)
Returns This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, assistant will wait for user to speak and use the model to respond once they speak.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 40

def first_message
  @first_message
end
#first_message_interruptions_enabled ⇒ Boolean (readonly)
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 42

def first_message_interruptions_enabled
  @first_message_interruptions_enabled
end
#first_message_mode ⇒ Vapi::CreateAssistantDtoFirstMessageMode (readonly)
Returns This is the mode for the first message. Default is ‘assistant-speaks-first’. Use:
- ‘assistant-speaks-first’ to have the assistant speak first.
- ‘assistant-waits-for-user’ to have the assistant wait for the user to speak first.
- ‘assistant-speaks-first-with-model-generated-message’ to have the assistant speak first with a message generated by the model based on the conversation state (`assistant.model.messages` at call start, `call.messages` at squad transfer points).
@default ‘assistant-speaks-first’

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 53

def first_message_mode
  @first_message_mode
end
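Because .from_json reads firstMessageMode straight from the payload, the documented mode strings can be supplied as-is. A sketch (assumes the gem is required; the greeting text is illustrative):

  require "vapi_server_sdk"

  json = {
    "firstMessageMode" => "assistant-waits-for-user", # assistant stays quiet until the user speaks
    "firstMessage"     => "Hi there!"
  }.to_json
  dto = Vapi::CreateAssistantDto.from_json(json_object: json)
  dto.first_message_mode # => "assistant-waits-for-user"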
#hooks ⇒ Array<Vapi::CreateAssistantDtoHooksItem> (readonly)
Returns This is a set of actions that will be performed on certain events.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 104

def hooks
  @hooks
end
#keypad_input_plan ⇒ Vapi::KeypadInputPlan (readonly)
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 179

def keypad_input_plan
  @keypad_input_plan
end
#max_duration_seconds ⇒ Float (readonly)
Returns This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. @default 600 (10 minutes).
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 76

def max_duration_seconds
  @max_duration_seconds
end
#message_plan ⇒ Vapi::MessagePlan (readonly)
Returns This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`. Note: `firstMessage`, `voicemailMessage`, and `endCallMessage` are currently at the root level. They will be moved to `messagePlan` in the future, but will remain backwards compatible.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 141

def message_plan
  @message_plan
end
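A hedged sketch of supplying a message plan through .from_json. The idleMessages key is the one named in the description above, but its exact shape is an assumption here (treated as an array of strings); consult Vapi::MessagePlan for the full schema:

  require "vapi_server_sdk"

  json = {
    "messagePlan" => {
      "idleMessages" => ["Are you still there?"] # assumed shape: spoken when the caller goes quiet
    }
  }.to_json
  dto = Vapi::CreateAssistantDto.from_json(json_object: json)
  dto.message_plan # => a Vapi::MessagePlan built via Vapi::MessagePlan.from_json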
#metadata ⇒ Hash{String => Object} (readonly)
Returns This is for metadata you want to store on the assistant.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 121

def metadata
  @metadata
end
#model ⇒ Vapi::CreateAssistantDtoModel (readonly)
Returns These are the options for the assistant’s LLM.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 33

def model
  @model
end
#model_output_in_messages_enabled ⇒ Boolean (readonly)
Returns This determines whether the model’s output is used in conversation history rather than the transcription of assistant’s speech. Default `false` while in beta. @default false.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 89

def model_output_in_messages_enabled
  @model_output_in_messages_enabled
end
#monitor_plan ⇒ Vapi::MonitorPlan (readonly)
Returns This is the plan for real-time monitoring of the assistant’s calls. Usage:
- To enable live listening of the assistant’s calls, set `monitorPlan.listenEnabled` to `true`.
- To enable live control of the assistant’s calls, set `monitorPlan.controlEnabled` to `true`.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 166

def monitor_plan
  @monitor_plan
end
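A short sketch wiring up the two flags named in the description (listenEnabled and controlEnabled) through .from_json; see Vapi::MonitorPlan for the full schema:

  require "vapi_server_sdk"

  json = {
    "monitorPlan" => {
      "listenEnabled"  => true, # allow live listening on the assistant's calls
      "controlEnabled" => true  # allow live control of the assistant's calls
    }
  }.to_json
  dto = Vapi::CreateAssistantDto.from_json(json_object: json)
  dto.monitor_plan # => a Vapi::MonitorPlan parsed by Vapi::MonitorPlan.from_json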
#name ⇒ String (readonly)
Returns This is the name of the assistant. This is required when you want to transfer between assistants in a call.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 107

def name
  @name
end
#observability_plan ⇒ Vapi::LangfuseObservabilityPlan (readonly)
Returns This is the plan for observability of assistant’s calls. Currently, only Langfuse is supported.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 97

def observability_plan
  @observability_plan
end
#server ⇒ Vapi::Server (readonly)
Returns This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema. The order of precedence is:
- assistant.server.url
- phoneNumber.serverUrl
- org.serverUrl

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 177

def server
  @server
end
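A sketch of setting the webhook destination at the assistant level, which sits first in the precedence order above (the URL is a placeholder; see Vapi::Server for the full schema):

  require "vapi_server_sdk"

  json = { "server" => { "url" => "https://example.com/vapi/webhooks" } }.to_json
  dto = Vapi::CreateAssistantDto.from_json(json_object: json)
  dto.server # => a Vapi::Server; this URL takes precedence over phoneNumber.serverUrl and org.serverUrl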
#server_messages ⇒ Array<Vapi::CreateAssistantDtoServerMessagesItem> (readonly)
Returns These are the messages that will be sent to your Server URL. Default is h-update,status-update,tool-calls,transfer-destination-request,user-interrupted. You can check the shape of the messages in ServerMessage schema.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 69

def server_messages
  @server_messages
end
#silence_timeout_seconds ⇒ Float (readonly)
Returns How many seconds of silence to wait before ending the call. Defaults to 30. @default 30.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 72

def silence_timeout_seconds
  @silence_timeout_seconds
end
#start_speaking_plan ⇒ Vapi::StartSpeakingPlan (readonly)
Returns This is the plan for when the assistant should start talking. You should configure this if you’re running into these issues:
- The assistant is too slow to start talking after the customer is done speaking.
- The assistant is too fast to start talking after the customer is done speaking.
- The assistant is so fast that it’s actually interrupting the customer.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 149

def start_speaking_plan
  @start_speaking_plan
end
#stop_speaking_plan ⇒ Vapi::StopSpeakingPlan (readonly)
Returns This is the plan for when the assistant should stop talking on customer interruption. You should configure this if you’re running into these issues:
- The assistant is too slow to recognize the customer’s interruption.
- The assistant is too fast to recognize the customer’s interruption.
- The assistant is getting interrupted by phrases that are just acknowledgments.
- The assistant is getting interrupted by background noises.
- The assistant is not properly stopping – it starts talking right after getting interrupted.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 159

def stop_speaking_plan
  @stop_speaking_plan
end
#transcriber ⇒ Vapi::CreateAssistantDtoTranscriber (readonly)
Returns These are the options for the assistant’s transcriber.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 31

def transcriber
  @transcriber
end
#transport_configurations ⇒ Array<Vapi::TransportConfigurationTwilio> (readonly)
Returns These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 94

def transport_configurations
  @transport_configurations
end
#voice ⇒ Vapi::CreateAssistantDtoVoice (readonly)
Returns These are the options for the assistant’s voice.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 35

def voice
  @voice
end
#voicemail_detection ⇒ Vapi::CreateAssistantDtoVoicemailDetection (readonly)
Returns These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using the VoicemailTool in `model.tools`. This uses Twilio’s built-in detection while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio built-in detection is enabled while VoicemailTool is not.

# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 61

def voicemail_detection
  @voicemail_detection
end
#voicemail_message ⇒ String (readonly)
Returns This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 111

def voicemail_message
  @voicemail_message
end
Class Method Details
.from_json(json_object:) ⇒ Vapi::CreateAssistantDto
Deserialize a JSON object to an instance of CreateAssistantDto
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 389

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  if parsed_json["transcriber"].nil?
    transcriber = nil
  else
    transcriber = parsed_json["transcriber"].to_json
    transcriber = Vapi::CreateAssistantDtoTranscriber.from_json(json_object: transcriber)
  end
  if parsed_json["model"].nil?
    model = nil
  else
    model = parsed_json["model"].to_json
    model = Vapi::CreateAssistantDtoModel.from_json(json_object: model)
  end
  if parsed_json["voice"].nil?
    voice = nil
  else
    voice = parsed_json["voice"].to_json
    voice = Vapi::CreateAssistantDtoVoice.from_json(json_object: voice)
  end
  first_message = parsed_json["firstMessage"]
  first_message_interruptions_enabled = parsed_json["firstMessageInterruptionsEnabled"]
  first_message_mode = parsed_json["firstMessageMode"]
  if parsed_json["voicemailDetection"].nil?
    voicemail_detection = nil
  else
    voicemail_detection = parsed_json["voicemailDetection"].to_json
    voicemail_detection = Vapi::CreateAssistantDtoVoicemailDetection.from_json(json_object: voicemail_detection)
  end
  client_messages = parsed_json["clientMessages"]
  server_messages = parsed_json["serverMessages"]
  silence_timeout_seconds = parsed_json["silenceTimeoutSeconds"]
  max_duration_seconds = parsed_json["maxDurationSeconds"]
  if parsed_json["backgroundSound"].nil?
    background_sound = nil
  else
    background_sound = parsed_json["backgroundSound"].to_json
    background_sound = Vapi::CreateAssistantDtoBackgroundSound.from_json(json_object: background_sound)
  end
  background_denoising_enabled = parsed_json["backgroundDenoisingEnabled"]
  model_output_in_messages_enabled = parsed_json["modelOutputInMessagesEnabled"]
  transport_configurations = parsed_json["transportConfigurations"]&.map do |item|
    item = item.to_json
    Vapi::TransportConfigurationTwilio.from_json(json_object: item)
  end
  if parsed_json["observabilityPlan"].nil?
    observability_plan = nil
  else
    observability_plan = parsed_json["observabilityPlan"].to_json
    observability_plan = Vapi::LangfuseObservabilityPlan.from_json(json_object: observability_plan)
  end
  credentials = parsed_json["credentials"]&.map do |item|
    item = item.to_json
    Vapi::CreateAssistantDtoCredentialsItem.from_json(json_object: item)
  end
  hooks = parsed_json["hooks"]&.map do |item|
    item = item.to_json
    Vapi::CreateAssistantDtoHooksItem.from_json(json_object: item)
  end
  name = parsed_json["name"]
  voicemail_message = parsed_json["voicemailMessage"]
  end_call_message = parsed_json["endCallMessage"]
  end_call_phrases = parsed_json["endCallPhrases"]
  if parsed_json["compliancePlan"].nil?
    compliance_plan = nil
  else
    compliance_plan = parsed_json["compliancePlan"].to_json
    compliance_plan = Vapi::CompliancePlan.from_json(json_object: compliance_plan)
  end
  metadata = parsed_json["metadata"]
  if parsed_json["backgroundSpeechDenoisingPlan"].nil?
    background_speech_denoising_plan = nil
  else
    background_speech_denoising_plan = parsed_json["backgroundSpeechDenoisingPlan"].to_json
    background_speech_denoising_plan = Vapi::BackgroundSpeechDenoisingPlan.from_json(json_object: background_speech_denoising_plan)
  end
  if parsed_json["analysisPlan"].nil?
    analysis_plan = nil
  else
    analysis_plan = parsed_json["analysisPlan"].to_json
    analysis_plan = Vapi::AnalysisPlan.from_json(json_object: analysis_plan)
  end
  if parsed_json["artifactPlan"].nil?
    artifact_plan = nil
  else
    artifact_plan = parsed_json["artifactPlan"].to_json
    artifact_plan = Vapi::ArtifactPlan.from_json(json_object: artifact_plan)
  end
  if parsed_json["messagePlan"].nil?
    message_plan = nil
  else
    message_plan = parsed_json["messagePlan"].to_json
    message_plan = Vapi::MessagePlan.from_json(json_object: message_plan)
  end
  if parsed_json["startSpeakingPlan"].nil?
    start_speaking_plan = nil
  else
    start_speaking_plan = parsed_json["startSpeakingPlan"].to_json
    start_speaking_plan = Vapi::StartSpeakingPlan.from_json(json_object: start_speaking_plan)
  end
  if parsed_json["stopSpeakingPlan"].nil?
    stop_speaking_plan = nil
  else
    stop_speaking_plan = parsed_json["stopSpeakingPlan"].to_json
    stop_speaking_plan = Vapi::StopSpeakingPlan.from_json(json_object: stop_speaking_plan)
  end
  if parsed_json["monitorPlan"].nil?
    monitor_plan = nil
  else
    monitor_plan = parsed_json["monitorPlan"].to_json
    monitor_plan = Vapi::MonitorPlan.from_json(json_object: monitor_plan)
  end
  credential_ids = parsed_json["credentialIds"]
  if parsed_json["server"].nil?
    server = nil
  else
    server = parsed_json["server"].to_json
    server = Vapi::Server.from_json(json_object: server)
  end
  if parsed_json["keypadInputPlan"].nil?
    keypad_input_plan = nil
  else
    keypad_input_plan = parsed_json["keypadInputPlan"].to_json
    keypad_input_plan = Vapi::KeypadInputPlan.from_json(json_object: keypad_input_plan)
  end
  new(
    transcriber: transcriber, model: model, voice: voice, first_message: first_message,
    first_message_interruptions_enabled: first_message_interruptions_enabled,
    first_message_mode: first_message_mode, voicemail_detection: voicemail_detection,
    client_messages: client_messages, server_messages: server_messages,
    silence_timeout_seconds: silence_timeout_seconds, max_duration_seconds: max_duration_seconds,
    background_sound: background_sound, background_denoising_enabled: background_denoising_enabled,
    model_output_in_messages_enabled: model_output_in_messages_enabled,
    transport_configurations: transport_configurations, observability_plan: observability_plan,
    credentials: credentials, hooks: hooks, name: name, voicemail_message: voicemail_message,
    end_call_message: end_call_message, end_call_phrases: end_call_phrases,
    compliance_plan: compliance_plan, metadata: metadata,
    background_speech_denoising_plan: background_speech_denoising_plan,
    analysis_plan: analysis_plan, artifact_plan: artifact_plan, message_plan: message_plan,
    start_speaking_plan: start_speaking_plan, stop_speaking_plan: stop_speaking_plan,
    monitor_plan: monitor_plan, credential_ids: credential_ids, server: server,
    keypad_input_plan: keypad_input_plan, additional_properties: struct
  )
end
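A small deserialization sketch (payload values are illustrative). Scalar fields are read straight off the parsed JSON, nested objects are rebuilt through their own .from_json, and the raw OpenStruct parse of the payload is kept on #additional_properties:

  require "vapi_server_sdk"

  payload = '{"name":"receptionist","firstMessage":"Hello!","maxDurationSeconds":600}'
  dto = Vapi::CreateAssistantDto.from_json(json_object: payload)
  dto.name                  # => "receptionist"
  dto.first_message         # => "Hello!"
  dto.max_duration_seconds  # => 600
  dto.additional_properties # => OpenStruct of the full parsed payload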
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 567

def self.validate_raw(obj:)
  obj.transcriber.nil? || Vapi::CreateAssistantDtoTranscriber.validate_raw(obj: obj.transcriber)
  obj.model.nil? || Vapi::CreateAssistantDtoModel.validate_raw(obj: obj.model)
  obj.voice.nil? || Vapi::CreateAssistantDtoVoice.validate_raw(obj: obj.voice)
  obj.first_message&.is_a?(String) != false || raise("Passed value for field obj.first_message is not the expected type, validation failed.")
  obj.first_message_interruptions_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.first_message_interruptions_enabled is not the expected type, validation failed.")
  obj.first_message_mode&.is_a?(Vapi::CreateAssistantDtoFirstMessageMode) != false || raise("Passed value for field obj.first_message_mode is not the expected type, validation failed.")
  obj.voicemail_detection.nil? || Vapi::CreateAssistantDtoVoicemailDetection.validate_raw(obj: obj.voicemail_detection)
  obj.client_messages&.is_a?(Array) != false || raise("Passed value for field obj.client_messages is not the expected type, validation failed.")
  obj.server_messages&.is_a?(Array) != false || raise("Passed value for field obj.server_messages is not the expected type, validation failed.")
  obj.silence_timeout_seconds&.is_a?(Float) != false || raise("Passed value for field obj.silence_timeout_seconds is not the expected type, validation failed.")
  obj.max_duration_seconds&.is_a?(Float) != false || raise("Passed value for field obj.max_duration_seconds is not the expected type, validation failed.")
  obj.background_sound.nil? || Vapi::CreateAssistantDtoBackgroundSound.validate_raw(obj: obj.background_sound)
  obj.background_denoising_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.background_denoising_enabled is not the expected type, validation failed.")
  obj.model_output_in_messages_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.model_output_in_messages_enabled is not the expected type, validation failed.")
  obj.transport_configurations&.is_a?(Array) != false || raise("Passed value for field obj.transport_configurations is not the expected type, validation failed.")
  obj.observability_plan.nil? || Vapi::LangfuseObservabilityPlan.validate_raw(obj: obj.observability_plan)
  obj.credentials&.is_a?(Array) != false || raise("Passed value for field obj.credentials is not the expected type, validation failed.")
  obj.hooks&.is_a?(Array) != false || raise("Passed value for field obj.hooks is not the expected type, validation failed.")
  obj.name&.is_a?(String) != false || raise("Passed value for field obj.name is not the expected type, validation failed.")
  obj.voicemail_message&.is_a?(String) != false || raise("Passed value for field obj.voicemail_message is not the expected type, validation failed.")
  obj.end_call_message&.is_a?(String) != false || raise("Passed value for field obj.end_call_message is not the expected type, validation failed.")
  obj.end_call_phrases&.is_a?(Array) != false || raise("Passed value for field obj.end_call_phrases is not the expected type, validation failed.")
  obj.compliance_plan.nil? || Vapi::CompliancePlan.validate_raw(obj: obj.compliance_plan)
  obj.metadata&.is_a?(Hash) != false || raise("Passed value for field obj.metadata is not the expected type, validation failed.")
  obj.background_speech_denoising_plan.nil? || Vapi::BackgroundSpeechDenoisingPlan.validate_raw(obj: obj.background_speech_denoising_plan)
  obj.analysis_plan.nil? || Vapi::AnalysisPlan.validate_raw(obj: obj.analysis_plan)
  obj.artifact_plan.nil? || Vapi::ArtifactPlan.validate_raw(obj: obj.artifact_plan)
  obj.message_plan.nil? || Vapi::MessagePlan.validate_raw(obj: obj.message_plan)
  obj.start_speaking_plan.nil? || Vapi::StartSpeakingPlan.validate_raw(obj: obj.start_speaking_plan)
  obj.stop_speaking_plan.nil? || Vapi::StopSpeakingPlan.validate_raw(obj: obj.stop_speaking_plan)
  obj.monitor_plan.nil? || Vapi::MonitorPlan.validate_raw(obj: obj.monitor_plan)
  obj.credential_ids&.is_a?(Array) != false || raise("Passed value for field obj.credential_ids is not the expected type, validation failed.")
  obj.server.nil? || Vapi::Server.validate_raw(obj: obj.server)
  obj.keypad_input_plan.nil? || Vapi::KeypadInputPlan.validate_raw(obj: obj.keypad_input_plan)
end
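An illustrative sketch of the check (note the method reads snake_case accessors on the object it is given, per the body above; the OpenStructs here are hand-built for demonstration):

  require "vapi_server_sdk"
  require "ostruct"

  ok  = OpenStruct.new(name: "receptionist", first_message: "Hello!")
  bad = OpenStruct.new(silence_timeout_seconds: "thirty") # should be a Float

  Vapi::CreateAssistantDto.validate_raw(obj: ok)  # passes; nil fields are skipped
  Vapi::CreateAssistantDto.validate_raw(obj: bad) # raises: "Passed value for field obj.silence_timeout_seconds ..."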
Instance Method Details
#to_json(*_args) ⇒ String
Serialize an instance of CreateAssistantDto to a JSON object
# File 'lib/vapi_server_sdk/types/create_assistant_dto.rb', line 557

def to_json(*_args)
  @_field_set&.to_json
end
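The serialized form contains only the fields that were actually provided, keyed by their camelCase wire names, so it can be fed back through .from_json. A round-trip sketch (values illustrative):

  require "vapi_server_sdk"

  dto  = Vapi::CreateAssistantDto.new(name: "receptionist", voicemail_message: "Please call back later.")
  json = dto.to_json # => '{"name":"receptionist","voicemailMessage":"Please call back later."}'
  copy = Vapi::CreateAssistantDto.from_json(json_object: json)
  copy.voicemail_message # => "Please call back later."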