Class: EcoSystem
- Inherits: EcoFaculty
  - Object
  - EcoFaculty
  - EcoSystem
- Defined in: lib/opensecret/commons/eco.system.rb
Overview
Build the [services eco-system]. The app sits at the centre of the services eco-system. Everything that is done is done for (because of, to, in spite of) the [application].

The [eco service folder] contains the templates, scripts and configuration. By convention the folder name (off the project root) matches the name of the class.

Example => ProvisionMongoDb assets are in provision.mongo.db

By Convention

  Ruby Class        => EcoAppServer
  is Found in File  => eco.system.plugins/eco.app.server.rb
  Has Assets in     => provision.app.server/
  and Inherits from => ProvisionEcoService
  Found in File     => provision.services/provision.eco.service.rb
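As a hedged illustration of the convention above, a minimal plugin class might be laid out as follows. The class name, file paths and the extra provisioning step are hypothetical examples, not part of this library.

  # Hypothetical file => eco.system.plugins/eco.app.server.rb
  # Its assets would live (by convention) in provision.app.server/
  class EcoAppServer < ProvisionEcoService

    # Extend the inherited provisioning pipeline by calling super
    # and then adding plugin specific work (illustrative only).
    def provision
      super
      configure_app_server   # hypothetical plugin specific step
    end

  end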
Instance Attribute Summary
Attributes inherited from EcoFaculty
Class Method Summary collapse
-
.reusable_buckets ⇒ Object
Get eco-system reusable directory filepaths within an array.
Instance Method Summary collapse
-
#copy_b4_sync_worthwhile?(sync_attr) ⇒ Boolean
[COPY] from another s3 bucket [B4 SYNC] if [WORTHWHILE]. Once a month (or week) performance may be gained by copying from the previous s3 bucket before sync-ing the local folder.
-
#execute_scripts ⇒ Object
– – Use the remote host instantiated for the eco plugin.
-
#inject_reusables ⇒ Object
Gather the reusable [file] resources from the directory bucket arrays that are declared to hold these assets.
-
#overwrite_lines ⇒ Object
[FIND] lines that include a set of configured strings and [REPLACE] them with the configured alternative.
-
#post_provisioning ⇒ Object
– – Implements service discovery for the provisioned eco-system services.
-
#pre_provisioning ⇒ Object
Provision the services eco-system (universe) with the app as the focal point.
-
#provision ⇒ Object
eco-system [provisioning] begins in earnest here.
-
#s3_synchronize ⇒ Object
Sync folder with s3 bucket under certain conditions.
-
#s3_upload ⇒ Object
Any file in the eco folder whose name starts with [:s3] gets uploaded to the S3 provisioning folder (in the monthly bucket).
-
#sync_2s3_bucket ⇒ Object
[SYNC] a local folder with a given S3 bucket at a particular folder offset location, with a specific set of sync options.
Methods inherited from EcoFaculty
#configure_aws_credentials, #db_fact_exists?, #e_fact, #eco_fact_exists?, #get_eco_fact, #instantiate_runtime, #plugin_fact, #plugin_fact_exists?, #plugin_src_dir, #read_block_facts, #read_properties, #replace_placeholders, #string_fact_exists?, #write_properties
Class Method Details
.reusable_buckets ⇒ Object
Get eco-system reusable directory filepaths within an array.

The two known directories are

  [1] - reusable.scripts
  [2] - reusable.templates
# File 'lib/opensecret/commons/eco.system.rb', line 80

def self.reusable_buckets

  project_basedir = File.dirname( File.dirname( __FILE__ ) )
  reusable_buckets = Array.new
  reusable_buckets.push( File.join(project_basedir, "reusable.scripts") )
  reusable_buckets.push( File.join(project_basedir, "reusable.templates") )
  return reusable_buckets

end
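A minimal usage sketch of the class method above; the output format is illustrative.

  # List the reusable asset directories known to the eco-system.
  EcoSystem.reusable_buckets.each do | bucket_dir |
    puts "reusable bucket directory => #{bucket_dir}"
  end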
Instance Method Details
#copy_b4_sync_worthwhile?(sync_attr) ⇒ Boolean
[COPY] from another s3 bucket [B4 SYNC] if [WORTHWHILE].

Once a month (or week) performance may be gained by copying from the previous s3 bucket before sync-ing the local folder. The first [backup] of the new month/week/day is a full backup of a local folder to up-sync. This can take a lot of time for, say, a [7Gig] folder holding many little files.

S3 to S3 Mirror
If we copy (mirror) the previous S3 bucket folder before the sync we gain much in performance because S3 to S3 copying is super fast - then just the delta is sync'd up.

Pre-Conditions - Copy B4 Sync
The copy/mirror before sync will occur when the

  1 - [sync_options.copy_b4_sync_if] flag is [true]
  2 - to-sync S3 folder (not bucket) does NOT exist
  3 - previous period's (month/week..) folder exists

Assumptions
Currently assumes the period is ALWAYS [monthly]. Change this to cater for [ hourly, daily, weekly, monthly, quarterly, yearly ].
# File 'lib/opensecret/commons/eco.system.rb', line 321

def copy_b4_sync_worthwhile? sync_attr

  return false if sync_attr.bucket_b4_name.nil?

  sync_folder_exists = AwsS3.instance.bucket_folder_contains_something?(
    sync_attr.s3_bucket_name,
    sync_attr.offset_path
  )

  return false if sync_folder_exists

  b4_folder_exists = AwsS3.instance.bucket_folder_contains_something?(
    sync_attr.bucket_b4_name,
    sync_attr.offset_path
  )

  return b4_folder_exists

end
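A sketch of how this predicate is typically consumed (mirroring the call made in #sync_2s3_bucket below), assuming a sync directive object that exposes bucket_b4_name, s3_bucket_name and offset_path.

  # Mirror the previous period's bucket folder first when it pays off,
  # then the subsequent local-to-s3 sync only has to push the delta.
  if copy_b4_sync_worthwhile?( sync_directive )
    AwsS3.instance.copy_folder_between_buckets(
      sync_directive.bucket_b4_name,    # previous period's bucket
      sync_directive.s3_bucket_name,    # current period's bucket
      sync_directive.offset_path        # folder offset within both buckets
    )
  end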
#execute_scripts ⇒ Object
Use the remote host instantiated for the eco plugin. Upload the plugin folder and run the reusables.
# File 'lib/opensecret/commons/eco.system.rb', line 137

def execute_scripts

  return unless eco_fact_exists? :runnables

  log.info(ere) { "[collate] ---------------------------------------- --- #" }
  log.info(ere) { "[collate] collate will upload and execute scripts. --- #" }
  log.info(ere) { "[collate] ---------------------------------------- --- #" }
  log.info(ere) { "#{pp e_fact(:runnables).values}" }
  log.info(ere) { "[collate] ---------------------------------------- --- #" }

  install_dos2unix = "sudo apt-get install -y dos2unix"

  plugin_host = @c[:machine][:host_class]
  plugin_host.runtime_dir = @c[:runtime][:dir]
  plugin_host.execute_cmd install_dos2unix
  plugin_host.upload_folder @c[:runtime][:dstname], @c[:runtime][:dir]

  e_fact(:runnables).each_value do | script_name |

    script_path = @c[:runtime][:dstname] + "/" + @c[:runtime][:dirname] + "/" + script_name
    cmd1 = "chmod u+x " + script_path
    cmd2 = "dos2unix " + script_path
    cmd3 = script_path

    #### plugin_host.execute_ansible_cmd @c[:runtime][:dir]
    #### exit

    plugin_host.execute_cmd cmd1
    plugin_host.execute_cmd cmd2
    plugin_host.execute_cmd cmd3

  end

  plugin_host.log_remote_host

end
#inject_reusables ⇒ Object
Gather the reusable [file] resources from the directory bucket arrays that are declared to hold these assets.

The reusables are gathered only if the plugin declares a fact called [:reusables] that is an array of simple filenames.

This method does a recursive search to find and then copy over these reusable files into the runtime directory.

Constraint - Duplicate Names
Duplicate asset filenames introduce ambiguity as far as reusable assets are concerned. Therefore an error will be raised if this situation arises.
# File 'lib/opensecret/commons/eco.system.rb', line 111

def inject_reusables

  return unless eco_fact_exists?(:inventory) || eco_fact_exists?(:runnables)

  files_map = Files.in_folders EcoSystem.reusable_buckets
  reusables = e_fact(:inventory).merge( e_fact(:runnables) )

  reusables.each do | source_name, target_name |

    error_1 = "Cannot find reusable [#{source_name}].\n\n#{files_map.inspect}"
    raise ArgumentError.new error_1 unless files_map.has_key? source_name

    log.info(ere) { "Copying reusable #{source_name} => to => #{target_name}" }

    source_file = File.join files_map[source_name], source_name
    target_file = File.join @c[:runtime][:dir], target_name

    log.info(ere) { "Source DevOps Asset => #{nickname source_file}" }
    log.info(ere) { "Target DevOps Asset => #{nickname target_file}" }

    FileUtils.cp source_file, target_file

  end

end
#overwrite_lines ⇒ Object
[FIND] lines that include a set of configured strings and [REPLACE] them with the configured alternative.

This behaviour is driven by a (plugin.id).line.replace.json configuration file that states

  [1] - the target file to change
  [2] - the array of words to match each line against
  [3] - the new line replacing the old if all the words match

[Pre-Conditions] => Only act when

  1. the plugin dir has a json [targeting] configuration file

[Dependencies and Assumptions]

  1. the json file is formatted with the below keys (and value types)

     - replace_file_path   : value type => String
     - line_search_strings : value type => Array of Strings
     - replace_with_string : value type => String

  2. every file specified exists and is readable + writeable
# File 'lib/opensecret/commons/eco.system.rb', line 415

def overwrite_lines

  return unless File.exists? @c[:overwrite][:spec_filepath]

  pointers = JSON.parse(
    File.read(@c[:overwrite][:spec_filepath]),
    object_class: OpenStruct
  )

  pointers.each do | pinpoint |

    Files.find_replace_lines(
      pinpoint.replace_file_path,
      pinpoint.line_search_strings,
      pinpoint.replace_with_string
    )

  end

end
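A hypothetical (plugin.id).line.replace.json directive, written here from Ruby so the expected keys and value types are visible. The target file path and strings are illustrative only.

  require "json"

  # One directive per target file; the spec file holds an array of them.
  line_replace_spec = [
    {
      "replace_file_path"   => "/etc/nginx/nginx.conf",   # target file to change
      "line_search_strings" => [ "worker_processes" ],    # every string must match the line
      "replace_with_string" => "worker_processes 4;"      # the replacement line
    }
  ]

  File.write( "eco.app.server.line.replace.json", JSON.pretty_generate( line_replace_spec ) )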
#post_provisioning ⇒ Object
Implements service discovery for the provisioned eco-system services.
# File 'lib/opensecret/commons/eco.system.rb', line 59

def post_provisioning

  execute_scripts
  s3_upload
  s3_synchronize
  write_properties
  sync_2s3_bucket

end
#pre_provisioning ⇒ Object
Provision the services eco-system (universe) with the app as the focal point.
# File 'lib/opensecret/commons/eco.system.rb', line 48

def pre_provisioning

  read_properties
  inject_reusables

end
#provision ⇒ Object
eco-system [provisioning] begins in earnest here. By making a [super] call (at the beginning, middle or end) eco-systems can extend the functionality provided here.

To prevent this code running, child classes must provide their own provision with an (optional) alternative implementation.
# File 'lib/opensecret/commons/eco.system.rb', line 30

def provision

  super

  pre_provisioning      # --> Do work to gather key provisioning facts
  replace_placeholders  # --> Replace key facts in files within the eco folder
  core_provisioning     # --> Do the heavy lifting 4 provisioning the eco service
  overwrite_lines       # --> Replace pinpointed lines that include a string set.
  replace_placeholders  # --> Replace xtra key facts to prep 4 post provisioning.
  post_provisioning     # --> Notifying service dependents is usually done here.

end
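A minimal sketch of a child plugin extending provision by calling super, as the note above describes. The class name and the extra step are hypothetical.

  class ProvisionMongoDb < EcoSystem

    def provision
      super                    # run the standard eco-system provisioning pipeline first
      announce_replica_set     # hypothetical plugin specific step added afterwards
    end

  end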
#s3_synchronize ⇒ Object
Sync folder with s3 bucket under certain conditions.

Sync Conditions

  [1] - running in a unix environment
  [2] - key [s3sync.bucket.name] exists
  [3] - key [s3sync.path.offset] exists
  [4] - s3 bucket exists and is writeable
  [5] - local dir exists and is readable

Dependencies and Assumptions

  - the aws iam environment variables are set
  - the s3 bucket specified exists and is writable
  - the s3 bucket contents are deletable
  - the local path offset off the [plugin folder] exists
  - the [awscli] apt-get package is installed
# File 'lib/opensecret/commons/eco.system.rb', line 198

def s3_synchronize

  return if Gem.win_platform?
  return unless eco_fact_exists? :s3sync_bucket
  return unless eco_fact_exists? :s3sync_folder

  log.info(ere) { "[s3 sync] -------------------------------------------- --- #" }
  log.info(ere) { "[s3 sync] eco plugin running on a non-windows platform --- #" }
  log.info(ere) { "[s3 sync] with s3 sync parameters available.           --- #" }
  log.info(ere) { "[s3 sync] -------------------------------------------- --- #" }

  AwsS3.instance.log_bucket_summary
  AwsS3.instance.sync_with_s3 e_fact(:s3sync_bucket), e_fact(:s3sync_folder)
  AwsS3.instance.log_bucket_summary

end
#s3_upload ⇒ Object
Any file in the eco folder whose name starts with [:s3] gets uploaded to the S3 provisioning folder (in the monthly bucket). Then the url is written into the app properties database with a key that is the remaining filename after the preceding s3 prefix is removed, subsequently appended with the string ".url".
# File 'lib/opensecret/commons/eco.system.rb', line 351

def s3_upload

  log.info(ere) { "[s3 upload] examing files in #{@c[:runtime][:dir]}" }

  # -- ------------------------------------------------------------------ -- #
  # -- Scan folder for files whose names begin with the s3 upload prefix. -- #
  # -- ------------------------------------------------------------------ -- #
  Dir.foreach( @c[:runtime][:dir] ) do | file_name |

    file_path = File.join @c[:runtime][:dir], file_name
    next if File.directory? file_path
    next unless file_name.start_with? @c[:s3][:upload_prefix]

    read_block_facts __FILE__, __method__, :upload, :src_file_name, file_name

    Dir.mkdir @c[:s3][:uploads_dir] unless File.exists? @c[:s3][:uploads_dir]
    next if File.exists? @c[:upload][:dst_file_path]

    FileUtils.cp @c[:upload][:src_file_path], @c[:upload][:dst_file_path]

    AwsS3.instance.log_bucket_summary

    log.warn(ere) { "Warning - Not uploading to S3 bucket = File ==| #{@c[:upload][:dst_file_path]}" }
    log.warn(ere) { "Warning - Not adding S3 resource URL fact to app_properties fact group." }

    ##### === =============================================================================================
    ##### === Commenting this prevents uploading any file with the s3put tag.
    ##### === =============================================================================================
    ##### === s3_url = AwsS3.instance.upload_to_s3 @c[:s3][:bucket_name], @c[:upload][:dst_file_path]
    ##### === @c.add_fact :app_properties, @c[:upload][:app_props_key], s3_url
    ##### === =============================================================================================

  end

end
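A sketch of the file-name to property-key convention described above, assuming the upload prefix held in @c[:s3][:upload_prefix] is "s3." (the prefix value and file name are illustrative).

  upload_prefix = "s3."                     # assumed value of @c[:s3][:upload_prefix]
  file_name     = "s3.certificate.pem"      # illustrative file in the eco folder

  # Strip the leading prefix and append ".url" to get the app properties key.
  props_key = file_name.sub( upload_prefix, "" ) + ".url"
  puts props_key                            # => "certificate.pem.url"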
#sync_2s3_bucket ⇒ Object
[SYNC] a local folder with a given S3 bucket at a particular folder offset location, with a specific set of sync options.

This behaviour is driven by a (plugin.id).s3.sync.spec.json specification file that states

  [1] - the source folder whose contents will be sync'd up
  [2] - the S3 bucket name into which to sync the contents
  [3] - the S3 folder path offset (within the S3 bucket)
  [4] - sync options like delete, size-only, acl and more
# File 'lib/opensecret/commons/eco.system.rb', line 228

def sync_2s3_bucket

  return unless @c.has_key?(:s3_sync) && File.exists?(@c[:s3_sync][:spec_filepath])

  AwsS3.instance.log_bucket_summary

  sync_directives = JSON.parse(
    File.read(@c[:s3_sync][:spec_filepath]),
    object_class: OpenStruct
  )

  sync_directives.each do | sync_directive |

    log.info(ere) { "[sync] ############################################################### ### #" }
    log.info(ere) { "[sync] --------------------------------------------------------------- --- #" }
    log.info(ere) { "[sync] sync-ing local folder to s3 bucket [#{sync_directive.s3_bucket_name}]" }
    log.info(ere) { "[sync] --------------------------------------------------------------- --- #" }
    log.info(ere) { "[sync] sync source folder => #{sync_directive.local_folder}" }
    log.info(ere) { "[sync] source bucket name => #{sync_directive.s3_bucket_name}" }
    log.info(ere) { "[sync] mirror bucket name => #{sync_directive.bucket_b4_name}" }
    log.info(ere) { "[sync] bucket offset path => #{sync_directive.offset_path}" }
    log.info(ere) { "[sync] sync options array => #{sync_directive.sync_options}" }
    log.info(ere) { "[sync] --------------------------------------------------------------- --- #" }

    # --
    # -- Is it worthwhile to copy between S3 buckets first
    # -- before sync-ing up the local folder?
    # --
    # -- We deem it yes if (and only if)
    # --
    # --   a) the to-sync folder is over [10MB]
    # --   b) a bucket_b4_name has been specified
    # --   c) the folder to sync does [NOT] exist.
    # --   d) the b4 folder [DOES] exist.
    # --
    # -- If so an S3 [bucket] to [bucket] mirror/copy may
    # -- dramatically reduce sync time.
    # --
    AwsS3.instance.copy_folder_between_buckets(
      sync_directive.bucket_b4_name,
      sync_directive.s3_bucket_name,
      sync_directive.offset_path
    ) if copy_b4_sync_worthwhile?( sync_directive )

    AwsS3.instance.sync_local_to_s3(
      sync_directive.local_folder,
      sync_directive.s3_bucket_name,
      sync_directive.offset_path,
      sync_directive.sync_options
    )

  end

  AwsS3.instance.log_bucket_summary

end
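A hypothetical (plugin.id).s3.sync.spec.json directive, written from Ruby so the expected keys are visible. Bucket names, paths and options are illustrative, and the "sync_options" key name follows the sync_options attribute assumed in the listing above.

  require "json"

  # The spec file holds an array of sync directives.
  s3_sync_spec = [
    {
      "local_folder"   => "/var/opt/backups",       # folder whose contents are sync'd up
      "s3_bucket_name" => "eco-backups-2018-08",    # destination bucket
      "bucket_b4_name" => "eco-backups-2018-07",    # previous period's bucket (for the pre-sync mirror)
      "offset_path"    => "app-server/database",    # folder offset within the buckets
      "sync_options"   => [ "--delete", "--size-only" ]
    }
  ]

  File.write( "eco.app.server.s3.sync.spec.json", JSON.pretty_generate( s3_sync_spec ) )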