Class: IntersightClient::TelemetryDruidQueryContext
- Inherits: Object
- Defined in: lib/intersight_client/models/telemetry_druid_query_context.rb
Overview
The query context is used for various query configuration parameters. Can be used to modify query behavior, including grand totals and zero-filling.
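As a sketch of the shape this context takes on the wire, the following plain-Ruby hash uses the JSON key names this model serializes to (the values are illustrative examples, not defaults):

```ruby
require 'json'

# Illustrative Druid query context payload; keys match this model's
# JSON attribute names (see attribute_map), values are examples only.
context = {
  'grandTotal'       => true,    # append a grand-totals row to the result set
  'skipEmptyBuckets' => false,   # keep zero-filled interior time buckets
  'timeout'          => 30_000,  # milliseconds; 0 means no timeout
  'priority'         => 10,      # higher priority gets resources first
  'queryId'          => 'example-query-1'
}

json = JSON.generate(context)
puts json
```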
Instance Attribute Summary collapse
-
#by_segment ⇒ Object
Return "by segment" results.
-
#chunk_period ⇒ Object
At the Broker process level, long interval queries (of any type) may be broken into shorter interval queries to parallelize merging more than normal.
-
#enable_parallel_merge ⇒ Object
Enable parallel result merging on the Broker.
-
#finalize ⇒ Object
Flag indicating whether to "finalize" aggregation results.
-
#grand_total ⇒ Object
Druid can include an extra "grand totals" row as the last row of a timeseries result set.
-
#max_queued_bytes ⇒ Object
Maximum number of bytes queued per query before exerting backpressure on the channel to the data server.
-
#max_scatter_gather_bytes ⇒ Object
Maximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query.
-
#parallel_merge_initial_yield_rows ⇒ Object
Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences.
-
#parallel_merge_parallelism ⇒ Object
Maximum number of parallel threads to use for parallel result merging on the Broker.
-
#parallel_merge_small_batch_rows ⇒ Object
Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker.
-
#populate_cache ⇒ Object
Flag indicating whether to save the results of the query to the query cache.
-
#populate_result_level_cache ⇒ Object
Flag indicating whether to save the results of the query to the result level cache.
-
#priority ⇒ Object
Query Priority.
-
#query_id ⇒ Object
Unique identifier given to this query.
-
#serialize_date_time_as_long ⇒ Object
If true, DateTime is serialized as long in the result returned by Broker and the data transportation between Broker and compute process.
-
#serialize_date_time_as_long_inner ⇒ Object
If true, DateTime is serialized as long in the data transportation between Broker and compute process.
-
#skip_empty_buckets ⇒ Object
Timeseries queries normally fill empty interior time buckets with zeroes.
-
#timeout ⇒ Object
Query timeout in milliseconds, beyond which unfinished queries will be cancelled.
-
#use_cache ⇒ Object
Flag indicating whether to leverage the query cache for this query.
-
#use_result_level_cache ⇒ Object
Flag indicating whether to leverage the result level cache for this query.
Class Method Summary collapse
-
.acceptable_attribute_map ⇒ Object
Returns the key-value map of all the JSON attributes this model knows about.
-
.acceptable_attributes ⇒ Object
Returns all the JSON keys this model knows about.
-
.attribute_map ⇒ Object
Attribute mapping from ruby-style variable name to JSON key.
-
.build_from_hash(attributes) ⇒ Object
Builds the object from hash.
-
.openapi_nullable ⇒ Object
List of attributes with nullable: true.
-
.openapi_types ⇒ Object
Attribute type mapping.
Instance Method Summary collapse
-
#==(o) ⇒ Object
Checks equality by comparing each attribute.
-
#_deserialize(type, value) ⇒ Object
Deserializes the data based on type.
-
#_to_hash(value) ⇒ Hash
Outputs a non-array value in the form of a hash (for objects, use to_hash).
-
#build_from_hash(attributes) ⇒ Object
Builds the object from hash.
- #eql?(o) ⇒ Boolean
-
#hash ⇒ Integer
Calculates hash code according to all attributes.
-
#initialize(attributes = {}) ⇒ TelemetryDruidQueryContext
constructor
Initializes the object.
-
#list_invalid_properties ⇒ Object
Show invalid properties with the reasons.
-
#to_body ⇒ Hash
to_body is an alias to to_hash (backward compatibility).
-
#to_hash ⇒ Hash
Returns the object in the form of hash.
-
#to_s ⇒ String
Returns the string representation of the object.
-
#valid? ⇒ Boolean
Check to see if all the properties in the model are valid.
Constructor Details
#initialize(attributes = {}) ⇒ TelemetryDruidQueryContext
Initializes the object
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 149

def initialize(attributes = {})
  if (!attributes.is_a?(Hash))
    fail ArgumentError, "The input argument (attributes) must be a hash in `IntersightClient::TelemetryDruidQueryContext` initialize method"
  end

  # check to see if the attribute exists and convert string to symbol for hash key
  attributes = attributes.each_with_object({}) { |(k, v), h|
    if (!self.class.acceptable_attribute_map.key?(k.to_sym))
      fail ArgumentError, "`#{k}` is not a valid attribute in `#{self.class.name}`. Please check the name to make sure it's valid. List of attributes: " + self.class.acceptable_attribute_map.keys.inspect
    end
    h[k.to_sym] = v
  }

  if attributes.key?(:'grand_total')
    self.grand_total = attributes[:'grand_total']
  end

  if attributes.key?(:'skip_empty_buckets')
    self.skip_empty_buckets = attributes[:'skip_empty_buckets']
  end

  if attributes.key?(:'timeout')
    self.timeout = attributes[:'timeout']
  end

  if attributes.key?(:'priority')
    self.priority = attributes[:'priority']
  end

  if attributes.key?(:'query_id')
    self.query_id = attributes[:'query_id']
  end

  if attributes.key?(:'use_cache')
    self.use_cache = attributes[:'use_cache']
  end

  if attributes.key?(:'populate_cache')
    self.populate_cache = attributes[:'populate_cache']
  end

  if attributes.key?(:'use_result_level_cache')
    self.use_result_level_cache = attributes[:'use_result_level_cache']
  end

  if attributes.key?(:'populate_result_level_cache')
    self.populate_result_level_cache = attributes[:'populate_result_level_cache']
  end

  if attributes.key?(:'by_segment')
    self.by_segment = attributes[:'by_segment']
  end

  if attributes.key?(:'finalize')
    self.finalize = attributes[:'finalize']
  end

  if attributes.key?(:'chunk_period')
    self.chunk_period = attributes[:'chunk_period']
  end

  if attributes.key?(:'max_scatter_gather_bytes')
    self.max_scatter_gather_bytes = attributes[:'max_scatter_gather_bytes']
  end

  if attributes.key?(:'max_queued_bytes')
    self.max_queued_bytes = attributes[:'max_queued_bytes']
  end

  if attributes.key?(:'serialize_date_time_as_long')
    self.serialize_date_time_as_long = attributes[:'serialize_date_time_as_long']
  end

  if attributes.key?(:'serialize_date_time_as_long_inner')
    self.serialize_date_time_as_long_inner = attributes[:'serialize_date_time_as_long_inner']
  end

  if attributes.key?(:'enable_parallel_merge')
    self.enable_parallel_merge = attributes[:'enable_parallel_merge']
  end

  if attributes.key?(:'parallel_merge_parallelism')
    self.parallel_merge_parallelism = attributes[:'parallel_merge_parallelism']
  end

  if attributes.key?(:'parallel_merge_initial_yield_rows')
    self.parallel_merge_initial_yield_rows = attributes[:'parallel_merge_initial_yield_rows']
  end

  if attributes.key?(:'parallel_merge_small_batch_rows')
    self.parallel_merge_small_batch_rows = attributes[:'parallel_merge_small_batch_rows']
  end
end
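The validation at the top of the constructor can be illustrated standalone. This sketch (hypothetical, using only a subset of the acceptable attribute map) mirrors how string keys are symbolized and unknown attributes are rejected:

```ruby
# Subset of the acceptable attribute map, for illustration only.
ACCEPTABLE_KEYS = %i[grand_total skip_empty_buckets timeout query_id].freeze

# Mirrors the constructor's key handling: symbolize string keys and
# fail fast on attributes the model does not know about.
def normalize_attributes(attributes)
  fail ArgumentError, 'attributes must be a Hash' unless attributes.is_a?(Hash)
  attributes.each_with_object({}) do |(k, v), h|
    unless ACCEPTABLE_KEYS.include?(k.to_sym)
      fail ArgumentError, "`#{k}` is not a valid attribute"
    end
    h[k.to_sym] = v
  end
end

p normalize_attributes('timeout' => 5_000, :query_id => 'q-1')
```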
Instance Attribute Details
#by_segment ⇒ Object
Return "by segment" results. Primarily used for debugging; setting it to true returns results associated with the data segment they came from.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 47

def by_segment
  @by_segment
end
#chunk_period ⇒ Object
At the Broker process level, long interval queries (of any type) may be broken into shorter interval queries to parallelize merging more than normal. Broken-up queries will use a larger share of cluster resources, but, if you use groupBy "v1", they may be able to complete faster as a result. Use ISO 8601 periods. For example, if this property is set to P1M (one month), then a query covering a year would be broken into 12 smaller queries. The Broker uses its query processing executor service to initiate processing for query chunks, so make sure druid.processing.numThreads is configured appropriately on the Broker. groupBy queries do not support chunkPeriod by default, although they do if using the legacy "v1" engine. This context parameter is deprecated since it is only useful for groupBy "v1", and will be removed in a future release.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 53

def chunk_period
  @chunk_period
end
#enable_parallel_merge ⇒ Object
Enable parallel result merging on the Broker. Note that druid.processing.merge.useParallelMergePool must be enabled for this setting to be set to true.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 68

def enable_parallel_merge
  @enable_parallel_merge
end
#finalize ⇒ Object
Flag indicating whether to "finalize" aggregation results. Primarily used for debugging. For instance, the hyperUnique aggregator will return the full HyperLogLog sketch instead of the estimated cardinality when this flag is set to false.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 50

def finalize
  @finalize
end
#grand_total ⇒ Object
Druid can include an extra "grand totals" row as the last row of a timeseries result set. To enable this, set "grandTotal" to true. The grand totals row will appear as the last row in the result array, and will have no timestamp. It will be the last row even if the query is run in "descending" mode. Post-aggregations in the grand totals row will be computed based upon the grand total aggregations.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 20

def grand_total
  @grand_total
end
#max_queued_bytes ⇒ Object
Maximum number of bytes queued per query before exerting backpressure on the channel to the data server. Similar to maxScatterGatherBytes, except unlike that configuration, this one will trigger backpressure rather than query failure. Zero means disabled.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 59

def max_queued_bytes
  @max_queued_bytes
end
#max_scatter_gather_bytes ⇒ Object
Maximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query. This parameter can be used to further reduce the maxScatterGatherBytes limit at query time.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 56

def max_scatter_gather_bytes
  @max_scatter_gather_bytes
end
#parallel_merge_initial_yield_rows ⇒ Object
Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 74

def parallel_merge_initial_yield_rows
  @parallel_merge_initial_yield_rows
end
#parallel_merge_parallelism ⇒ Object
Maximum number of parallel threads to use for parallel result merging on the Broker.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 71

def parallel_merge_parallelism
  @parallel_merge_parallelism
end
#parallel_merge_small_batch_rows ⇒ Object
Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 77

def parallel_merge_small_batch_rows
  @parallel_merge_small_batch_rows
end
#populate_cache ⇒ Object
Flag indicating whether to save the results of the query to the query cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateCache or druid.historical.cache.populateCache to determine whether or not to save the results of this query to the query cache.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 38

def populate_cache
  @populate_cache
end
#populate_result_level_cache ⇒ Object
Flag indicating whether to save the results of the query to the result level cache. Primarily used for debugging. When set to false, it disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateResultLevelCache to determine whether or not to save the results of this query to the result-level query cache.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 44

def populate_result_level_cache
  @populate_result_level_cache
end
#priority ⇒ Object
Query Priority. Queries with higher priority get precedence for computational resources.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 29

def priority
  @priority
end
#query_id ⇒ Object
Unique identifier given to this query. If a query ID is set or known, this can be used to cancel the query.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 32

def query_id
  @query_id
end
#serialize_date_time_as_long ⇒ Object
If true, DateTime is serialized as long in the result returned by Broker and the data transportation between Broker and compute process.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 62

def serialize_date_time_as_long
  @serialize_date_time_as_long
end
#serialize_date_time_as_long_inner ⇒ Object
If true, DateTime is serialized as long in the data transportation between Broker and compute process.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 65

def serialize_date_time_as_long_inner
  @serialize_date_time_as_long_inner
end
#skip_empty_buckets ⇒ Object
Timeseries queries normally fill empty interior time buckets with zeroes. Time buckets that lie completely outside the data interval are not zero-filled. You can disable all zero-filling with this flag. In this mode, data points for empty buckets are omitted from the results.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 23

def skip_empty_buckets
  @skip_empty_buckets
end
#timeout ⇒ Object
Query timeout in milliseconds, beyond which unfinished queries will be cancelled. 0 timeout means no timeout.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 26

def timeout
  @timeout
end
#use_cache ⇒ Object
Flag indicating whether to leverage the query cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Apache Druid uses druid.broker.cache.useCache or druid.historical.cache.useCache to determine whether or not to read from the query cache.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 35

def use_cache
  @use_cache
end
#use_result_level_cache ⇒ Object
Flag indicating whether to leverage the result level cache for this query. When set to false, it disables reading from the query cache for this query. When set to true, Druid uses druid.broker.cache.useResultLevelCache to determine whether or not to read from the result-level query cache.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 41

def use_result_level_cache
  @use_result_level_cache
end
Class Method Details
.acceptable_attribute_map ⇒ Object
Returns the key-value map of all the JSON attributes this model knows about
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 111

def self.acceptable_attribute_map
  attribute_map
end
.acceptable_attributes ⇒ Object
Returns all the JSON keys this model knows about
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 106

def self.acceptable_attributes
  attribute_map.values
end
.attribute_map ⇒ Object
Attribute mapping from ruby-style variable name to JSON key.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 80

def self.attribute_map
  {
    :'grand_total' => :'grandTotal',
    :'skip_empty_buckets' => :'skipEmptyBuckets',
    :'timeout' => :'timeout',
    :'priority' => :'priority',
    :'query_id' => :'queryId',
    :'use_cache' => :'useCache',
    :'populate_cache' => :'populateCache',
    :'use_result_level_cache' => :'useResultLevelCache',
    :'populate_result_level_cache' => :'populateResultLevelCache',
    :'by_segment' => :'bySegment',
    :'finalize' => :'finalize',
    :'chunk_period' => :'chunkPeriod',
    :'max_scatter_gather_bytes' => :'maxScatterGatherBytes',
    :'max_queued_bytes' => :'maxQueuedBytes',
    :'serialize_date_time_as_long' => :'serializeDateTimeAsLong',
    :'serialize_date_time_as_long_inner' => :'serializeDateTimeAsLongInner',
    :'enable_parallel_merge' => :'enableParallelMerge',
    :'parallel_merge_parallelism' => :'parallelMergeParallelism',
    :'parallel_merge_initial_yield_rows' => :'parallelMergeInitialYieldRows',
    :'parallel_merge_small_batch_rows' => :'parallelMergeSmallBatchRows'
  }
end
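Because the map is a plain hash, converting ruby-style attribute names to their JSON keys is a one-line transform. A minimal sketch over a subset of the map:

```ruby
# Subset of attribute_map, for illustration.
ATTRIBUTE_MAP = {
  :grand_total        => :grandTotal,
  :skip_empty_buckets => :skipEmptyBuckets,
  :timeout            => :timeout
}.freeze

ruby_attrs = { grand_total: true, timeout: 60_000 }
json_attrs = ruby_attrs.transform_keys { |k| ATTRIBUTE_MAP.fetch(k) }
```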
.build_from_hash(attributes) ⇒ Object
Builds the object from hash
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 298

def self.build_from_hash(attributes)
  new.build_from_hash(attributes)
end
.openapi_nullable ⇒ Object
List of attributes with nullable: true
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 142

def self.openapi_nullable
  Set.new([
  ])
end
.openapi_types ⇒ Object
Attribute type mapping.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 116

def self.openapi_types
  {
    :'grand_total' => :'Boolean',
    :'skip_empty_buckets' => :'Boolean',
    :'timeout' => :'Integer',
    :'priority' => :'Integer',
    :'query_id' => :'String',
    :'use_cache' => :'Boolean',
    :'populate_cache' => :'Boolean',
    :'use_result_level_cache' => :'Boolean',
    :'populate_result_level_cache' => :'Boolean',
    :'by_segment' => :'Boolean',
    :'finalize' => :'Boolean',
    :'chunk_period' => :'String',
    :'max_scatter_gather_bytes' => :'Integer',
    :'max_queued_bytes' => :'Integer',
    :'serialize_date_time_as_long' => :'Boolean',
    :'serialize_date_time_as_long_inner' => :'Boolean',
    :'enable_parallel_merge' => :'Boolean',
    :'parallel_merge_parallelism' => :'Integer',
    :'parallel_merge_initial_yield_rows' => :'Integer',
    :'parallel_merge_small_batch_rows' => :'Integer'
  }
end
Instance Method Details
#==(o) ⇒ Object
Checks equality by comparing each attribute.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 258

def ==(o)
  return true if self.equal?(o)
  self.class == o.class &&
      grand_total == o.grand_total &&
      skip_empty_buckets == o.skip_empty_buckets &&
      timeout == o.timeout &&
      priority == o.priority &&
      query_id == o.query_id &&
      use_cache == o.use_cache &&
      populate_cache == o.populate_cache &&
      use_result_level_cache == o.use_result_level_cache &&
      populate_result_level_cache == o.populate_result_level_cache &&
      by_segment == o.by_segment &&
      finalize == o.finalize &&
      chunk_period == o.chunk_period &&
      max_scatter_gather_bytes == o.max_scatter_gather_bytes &&
      max_queued_bytes == o.max_queued_bytes &&
      serialize_date_time_as_long == o.serialize_date_time_as_long &&
      serialize_date_time_as_long_inner == o.serialize_date_time_as_long_inner &&
      enable_parallel_merge == o.enable_parallel_merge &&
      parallel_merge_parallelism == o.parallel_merge_parallelism &&
      parallel_merge_initial_yield_rows == o.parallel_merge_initial_yield_rows &&
      parallel_merge_small_batch_rows == o.parallel_merge_small_batch_rows
end
#_deserialize(type, value) ⇒ Object
Deserializes the data based on type
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 328

def _deserialize(type, value)
  case type.to_sym
  when :Time
    Time.parse(value)
  when :Date
    Date.parse(value)
  when :String
    value.to_s
  when :Integer
    value.to_i
  when :Float
    value.to_f
  when :Boolean
    if value.to_s =~ /\A(true|t|yes|y|1)\z/i
      true
    else
      false
    end
  when :Object
    # generic object (usually a Hash), return directly
    value
  when /\AArray<(?<inner_type>.+)>\z/
    inner_type = Regexp.last_match[:inner_type]
    value.map { |v| _deserialize(inner_type, v) }
  when /\AHash<(?<k_type>.+?), (?<v_type>.+)>\z/
    k_type = Regexp.last_match[:k_type]
    v_type = Regexp.last_match[:v_type]
    {}.tap do |hash|
      value.each do |k, v|
        hash[_deserialize(k_type, k)] = _deserialize(v_type, v)
      end
    end
  else # model
    # models (e.g. Pet) or oneOf
    klass = IntersightClient.const_get(type)
    klass.respond_to?(:openapi_one_of) ? klass.build(value) : klass.build_from_hash(value)
  end
end
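The Boolean branch above coerces a range of truthy strings with a single case-insensitive regex; everything else becomes false. The same rule, extracted standalone:

```ruby
# Case-insensitive pattern accepted as "true" by the deserializer.
TRUTHY = /\A(true|t|yes|y|1)\z/i

# Coerce any value to a Boolean the way _deserialize's :Boolean branch does.
def coerce_boolean(value)
  !!(value.to_s =~ TRUTHY)
end
```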
#_to_hash(value) ⇒ Hash
Outputs a non-array value in the form of a hash. For objects, use to_hash; otherwise, just return the value.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 399

def _to_hash(value)
  if value.is_a?(Array)
    value.compact.map { |v| _to_hash(v) }
  elsif value.is_a?(Hash)
    {}.tap do |hash|
      value.each { |k, v| hash[k] = _to_hash(v) }
    end
  elsif value.respond_to? :to_hash
    value.to_hash
  else
    value
  end
end
#build_from_hash(attributes) ⇒ Object
Builds the object from hash
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 305

def build_from_hash(attributes)
  return nil unless attributes.is_a?(Hash)
  TelemetryDruidQueryContext.openapi_types.each_pair do |key, type|
    if attributes[TelemetryDruidQueryContext.attribute_map[key]].nil? && TelemetryDruidQueryContext.openapi_nullable.include?(key)
      self.send("#{key}=", nil)
    elsif type =~ /\AArray<(.*)>/i
      # check to ensure the input is an array given that the attribute
      # is documented as an array but the input is not
      if attributes[TelemetryDruidQueryContext.attribute_map[key]].is_a?(Array)
        self.send("#{key}=", attributes[TelemetryDruidQueryContext.attribute_map[key]].map { |v| _deserialize($1, v) })
      end
    elsif !attributes[TelemetryDruidQueryContext.attribute_map[key]].nil?
      self.send("#{key}=", _deserialize(type, attributes[TelemetryDruidQueryContext.attribute_map[key]]))
    end
  end
  self
end
#eql?(o) ⇒ Boolean
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 285

def eql?(o)
  self == o
end
#hash ⇒ Integer
Calculates hash code according to all attributes.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 291

def hash
  [grand_total, skip_empty_buckets, timeout, priority, query_id, use_cache, populate_cache, use_result_level_cache, populate_result_level_cache, by_segment, finalize, chunk_period, max_scatter_gather_bytes, max_queued_bytes, serialize_date_time_as_long, serialize_date_time_as_long_inner, enable_parallel_merge, parallel_merge_parallelism, parallel_merge_initial_yield_rows, parallel_merge_small_batch_rows].hash
end
#list_invalid_properties ⇒ Object
Show invalid properties with the reasons. Usually used together with valid?
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 245

def list_invalid_properties
  invalid_properties = Array.new
  invalid_properties
end
#to_body ⇒ Hash
to_body is an alias to to_hash (backward compatibility)
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 375

def to_body
  to_hash
end
#to_hash ⇒ Hash
Returns the object in the form of hash
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 381

def to_hash
  hash = {}
  TelemetryDruidQueryContext.attribute_map.each_pair do |attr, param|
    value = self.send(attr)
    if value.nil?
      is_nullable = TelemetryDruidQueryContext.openapi_nullable.include?(attr)
      next if !is_nullable || (is_nullable && !instance_variable_defined?(:"@#{attr}"))
    end
    hash[param] = _to_hash(value)
  end
  hash
end
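The nil handling here is worth noting: a nil attribute is only emitted when it is declared nullable (and was explicitly set); otherwise it is skipped rather than serialized as JSON null. A self-contained sketch of that rule (the helper name is hypothetical):

```ruby
require 'set'

# Mirrors to_hash's nil rule: drop nil values unless the attribute is
# declared nullable (this model declares none as nullable).
def compact_hash(attrs, nullable = Set.new)
  attrs.each_with_object({}) do |(key, value), h|
    next if value.nil? && !nullable.include?(key)
    h[key] = value
  end
end
```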
#to_s ⇒ String
Returns the string representation of the object
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 369

def to_s
  to_hash.to_s
end
#valid? ⇒ Boolean
Check to see if all the properties in the model are valid.
# File 'lib/intersight_client/models/telemetry_druid_query_context.rb', line 252

def valid?
  true
end