Class: Intelligence::ChatRequest
- Inherits: Object
- Defined in: lib/intelligence/chat_request.rb
Overview
The ChatRequest class encapsulates a request to an LLM. After creating a new ChatRequest instance you can make the actual request by calling the chat or stream methods. In order to construct a ChatRequest you must first construct and configure an adapter.
example
adapter = Intelligence::Adapter.build( :open_ai ) do
  key ENV[ 'OPENAI_API_KEY' ]
  chat_options do
    model 'gpt-4o'
    max_tokens 512
  end
end

request = Intelligence::ChatRequest.new( adapter: adapter )
response = request.chat( 'Hello!' )
if response.success?
puts response.result.text
else
puts response.result.error_description
end
Constant Summary
- DEFAULT_CONNECTION =
Faraday.new { | builder | builder.adapter Faraday.default_adapter }
Instance Method Summary
- #chat(conversation, *options) ⇒ Object
  The chat method leverages the adapter associated with this ChatRequest instance to construct and make an HTTP request - through Faraday - to an LLM service.
- #initialize(connection: nil, adapter:, **options) ⇒ ChatRequest constructor
  The initialize method initializes the ChatRequest instance.
- #stream(conversation, *options) ⇒ Object
Constructor Details
#initialize(connection: nil, adapter:, **options) ⇒ ChatRequest
The initialize method initializes the ChatRequest instance. You MUST pass a previously constructed and configured adapter and optionally a (Faraday) connection.
# File 'lib/intelligence/chat_request.rb', line 57

def initialize( connection: nil, adapter:, **options )
  @connection = connection || DEFAULT_CONNECTION
  @adapter = adapter
  @options = options || {}
  raise ArgumentError.new( 'An adapter must be configured before a request is constructed.' ) \
    if @adapter.nil?
end
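The constructor combines three common Ruby idioms: an optional keyword argument that falls back to a shared default, a required collaborator validated with a guard clause, and trailing options captured into a Hash. A minimal standalone sketch of the same pattern (the `RequestLike` class and `DEFAULT_CONNECTION` constant here are illustrative, not part of the gem):

```ruby
# Illustrative sketch of the constructor's pattern: a required
# `adapter:` keyword, an optional `connection:` with a shared default,
# and remaining keyword options captured into a Hash.
DEFAULT_CONNECTION = Object.new.freeze

class RequestLike
  attr_reader :connection, :adapter, :options

  def initialize( connection: nil, adapter:, **options )
    @connection = connection || DEFAULT_CONNECTION
    @adapter = adapter
    @options = options || {}
    raise ArgumentError, 'An adapter must be configured before a request is constructed.' \
      if @adapter.nil?
  end
end

request = RequestLike.new( adapter: :fake_adapter, chat_options: { max_tokens: 256 } )
```

Note that because `adapter:` is a required keyword, omitting it raises an `ArgumentError` from Ruby itself; the explicit guard additionally rejects an adapter that is passed but nil.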
Instance Method Details
#chat(conversation, *options) ⇒ Object
The chat method leverages the adapter associated with this ChatRequest instance to construct and make an HTTP request - through Faraday - to an LLM service. The chat method always returns a Faraday::Response which is augmented with a result method.
If the response is successful ( response.success? returns true ) the result method returns a ChatResult instance. If the response is not successful a ChatErrorResult instance is returned.
arguments
- conversation - an instance of Intelligence::Conversation or a String; this encapsulates the content to be sent to the LLM
- options - one or more Hashes of options; these options override any of the configuration used to configure the adapter; you can, for example, pass { chat_options: { max_tokens: 1024 } } to limit the response to 1024 tokens
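The precedence of these option Hashes can be demonstrated with plain Ruby: the per-call Hashes are folded left to right, then merged over the adapter-level configuration, so later values win. This is a standalone illustration of that behavior, not the gem's code:

```ruby
# Per-call option hashes are folded left to right, then layered over
# the adapter's configured options; later values take precedence.
adapter_options = { chat_options: { model: 'gpt-4o', max_tokens: 512 } }

call_options = [ { chat_options: { max_tokens: 1024 } }, nil ]
merged = call_options.compact.reduce( {} ) { | accumulator, o | accumulator.merge( o ) }
merged = adapter_options.merge( merged )
# => { chat_options: { max_tokens: 1024 } }
```

One consequence worth noting, assuming the single `merge` shown in the source is the whole story: `Hash#merge` is shallow, so a per-call `chat_options` Hash replaces the configured one wholesale rather than deep-merging into it (the configured `model` is dropped above).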
# File 'lib/intelligence/chat_request.rb', line 82

def chat( conversation, *options )
  conversation = build_quick_conversation( conversation ) if conversation.is_a?( String )
  options = options.compact.reduce( {} ) { | accumulator, o | accumulator.merge( o ) }
  options = @options.merge( options )

  # conversation and tools are presented as simple Hashes to the adapter
  conversation = conversation.to_h
  options[ :tools ] = options[ :tools ].to_a.map!( &:to_h ) if options[ :tools ]

  uri = @adapter.chat_request_uri( options )
  headers = @adapter.chat_request_headers( options )
  payload = @adapter.chat_request_body( conversation, options )

  result_callback = nil
  response = @connection.post( uri ) do | request |
    headers.each { | key, value | request.headers[ key ] = value }
    request.body = payload
    yield request.extend( ChatRequestMethods ) if block_given?
    result_callback = request.instance_variable_get( "@_intelligence_result_callback" )
  end

  result = nil
  if response.success?
    chat_result_attributes = @adapter.chat_result_attributes( response )
    result = ChatResult.new( chat_result_attributes )
  else
    error_result_attributes = @adapter.chat_result_error_attributes( response )
    result = ChatErrorResult.new( error_result_attributes )
  end

  response.instance_variable_set( "@_intelligence_result", result )
  response.extend( ChatResponseMethods )
end
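The return value of chat is the Faraday response itself, augmented by storing the result in an instance variable and extending the object with a module whose reader exposes it. That mixin pattern in isolation (the module and variable names here are illustrative stand-ins for ChatResponseMethods):

```ruby
# Illustrative version of the "augmented response" pattern: store a
# value in an instance variable, then extend the object with a module
# that exposes it through a reader method.
module ResultMethods
  def result
    instance_variable_get( :@_result )
  end
end

response = Object.new
response.instance_variable_set( :@_result, 'a chat result' )
response.extend( ResultMethods )
```

Because `extend` mixes the module into the singleton class of that one object, other Faraday responses are unaffected.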
#stream(conversation, *options) ⇒ Object
The stream method makes a streaming request to the LLM service: as chunks arrive, partial ChatResult instances are delivered through a result callback, and, as with chat, the Faraday response is returned augmented with a result method.
# File 'lib/intelligence/chat_request.rb', line 119

def stream( conversation, *options )
  conversation = build_quick_conversation( conversation ) if conversation.is_a?( String )
  options = options.compact.reduce( {} ) { | accumulator, o | accumulator.merge( o ) }
  options = @options.merge( options )

  # conversation and tools are presented as simple Hashes to the adapter
  conversation = conversation.to_h
  options[ :tools ] = options[ :tools ].to_a.map!( &:to_h ) if options[ :tools ]

  uri = @adapter.chat_request_uri( options )
  headers = @adapter.chat_request_headers( @options.merge( options ) )
  payload = @adapter.chat_request_body( conversation, options )

  context = nil
  response = @connection.post( uri ) do | request |
    headers.each { | key, value | request.headers[ key ] = value }
    request.body = payload
    yield request.extend( ChatRequestMethods )
    result_callback = request.instance_variable_get( "@_intelligence_result_callback" )
    request.options.on_data = Proc.new do | chunk, received_bytes |
      context, attributes = @adapter.stream_result_chunk_attributes( context, chunk )
      result_callback.call( ChatResult.new( attributes ) ) unless attributes.nil?
    end
  end

  result = nil
  if response.success?
    stream_result_attributes = @adapter.stream_result_attributes( context )
    result = ChatResult.new( stream_result_attributes )
  else
    error_result_attributes = @adapter.stream_result_error_attributes( response )
    result = ChatErrorResult.new( error_result_attributes )
  end

  response.instance_variable_set( "@_intelligence_result", result )
  response.extend( ChatResponseMethods )
end
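The streaming loop threads a context value through every chunk: the adapter receives the previous context plus the raw chunk and returns an updated context along with any attributes ready to surface. A self-contained sketch of that contract, using a toy "adapter" that buffers text until it sees a newline (everything here is illustrative, not the gem's adapter API):

```ruby
# Toy illustration of the chunk/context contract used by #stream:
# each call receives the prior context and a raw chunk, and returns
# [ new_context, attributes_or_nil ].
def toy_stream_chunk( context, chunk )
  buffer = ( context || '' ) + chunk
  if buffer.include?( "\n" )
    line, rest = buffer.split( "\n", 2 )
    [ rest, { text: line } ]
  else
    [ buffer, nil ]
  end
end

results = []
context = nil
# Simulate Faraday delivering the body in arbitrary pieces.
[ "Hel", "lo!\n", "wor", "ld\n" ].each do | chunk |
  context, attributes = toy_stream_chunk( context, chunk )
  results << attributes[ :text ] unless attributes.nil?
end
# results == [ "Hello!", "world" ]
```

This mirrors why `context` is declared outside the `post` block in the source: it must survive across `on_data` invocations and still be available when the final `stream_result_attributes` is computed.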