Lifeboat is a drop-in module for any Ruby object that responds to three callbacks. It monitors your models for create, update, and delete events, and sends a JSON-serialized copy of the model's attributes to Amazon SQS when those events occur.
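To illustrate, a queue message body is simply the model's attribute hash rendered as JSON. The attribute values below are made up for the example, and the exact payload shape may differ:

```ruby
require "json"

# Roughly what ends up on the queue: the record's attributes
# serialized as a JSON object (illustrative, not Lifeboat's exact output).
attributes = { "id" => 1, "name" => "Widget", "price" => 9.99 }
message_body = JSON.generate(attributes)
# e.g. {"id":1,"name":"Widget","price":9.99}
```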
If you're operating a web service and you're smart, then you make frequent backups. But if you're operating a popular web service then no backup will ever be fresh enough when you have a failure. A lot of people learned that lesson the hard way during the Great Amazon EBS Meltdown of 2011. How do you recover data that was created after your most-recent backup, but before your database crashed? One way to do that is with database replication. But not everybody has that option. Another way to do it is to send your data to the lifeboats as soon as it's created, so that you can recover it after a disaster.
Lifeboat requires:

- Amazon SQS
- the right_aws gem
Include the gem in your Gemfile:
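Assuming the gem is published under the name `lifeboat` (check the gemspec if in doubt):

```ruby
# Gemfile — gem name assumed to be "lifeboat"
gem "lifeboat"
```

Then run `bundle install`.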
Include Lifeboat in your model class:
```ruby
class AnyObject < ActiveRecord::Base
  include LifeBoat
end
```
This automatically serializes your ActiveRecord object to JSON and sends it to the queue. XML serialization is also supported.
Add your AWS credentials for each environment:

```yaml
test:
  access_key_id: YOURSECRETACCESSID
  secret_access_key: YOUR-secRE/TACCe\ssKEy

development:
  access_key_id: YOURSECRETACCESSID
  secret_access_key: YOUR-secRE/TACCe\ssKEy

production:
  access_key_id: YOURSECRETACCESSID
  secret_access_key: YOUR-secRE/TACCe\ssKEy
```
Lifeboat will then automatically create queue messages each time any instance of the model class is created, updated, or deleted.
Naming conventions for Lifeboat message queues
Lifeboat will generate three different message queues for each model that you configure it for: create_MODEL, update_MODEL, and delete_MODEL. For example, if your model is named Sale, then Lifeboat will generate message queues named create_sale, update_sale, and delete_sale.
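The naming convention above can be sketched as follows. The helper below is hypothetical, for illustration only, and not part of Lifeboat's API:

```ruby
# Derive the queue names Lifeboat would use for a model,
# following the convention described above: EVENT_modelname.
def lifeboat_queue_names(model_name)
  %w[create update delete].map { |event| "#{event}_#{model_name.downcase}" }
end

lifeboat_queue_names("Sale")
# => ["create_sale", "update_sale", "delete_sale"]
```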
The real purpose of data replication is to maintain hot-swappable backup servers: when your production server goes down, you can fail over to one of your backups. If you're worried about your entire cloud-based data center going down for days at a time (it happens!), you can run a hot-swap slave in a different data center and periodically process the message queues generated by Lifeboat in your production environment, using Lifeboat's update Rake task.
Lifeboat is configured exactly the same for the master and for the slaves, so you can use the exact same code base in both locations. Lifeboat's update Rake task is idempotent, so you can run it as frequently as you like. And Lifeboat slaves do not destroy records from the message queues, so you can run as many different slave nodes as you like.
If you'd like to contribute a feature or bugfix: Thanks! To make sure your fix/feature has a high chance of being included, please read the following guidelines:
- Check out the latest master to make sure the feature hasn't been implemented or the bug hasn't been fixed yet
- Check out the issue tracker to make sure someone hasn't already requested and/or contributed it
- Fork the project
- Start a feature/bugfix branch
- Commit and push until you are happy with your contribution
- Make sure to add tests for it. This is important so I don't break it in a future version unintentionally.
- Please try not to mess with the Rakefile, version, or history. If you want your own version, or it's otherwise necessary, that's fine, but please isolate the change to its own commit so I can cherry-pick around it.
Built by Ivan.
Designed by Ivan & Ryan.
Inspired by Amazon. (And not in a good way.)