[nycbug-talk] Off topic: S3 backups
Matt Juszczak
matt at atopia.net
Thu Jul 29 11:49:14 EDT 2010
Hi Chris,
The #1 goal is to prevent someone from accidentally deleting a bucket and losing all of its contents.
I agree that a backup to Rackspace Cloud would be more redundant, but that means writing a script that integrates with two APIs.
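The two-API script doesn't have to be provider-specific, though. A minimal sketch of the core mirror loop, written against generic dict-like stores (in practice these would be thin wrappers around the S3 and Cloud Files client libraries; the wrapper interface here is a hypothetical assumption, not either vendor's API):

```python
def mirror(source, dest):
    """Copy every key that is missing from dest, or whose contents differ.

    source and dest are dict-like: .items(), .get(key), and item assignment.
    Returns the list of keys that were actually copied.
    """
    copied = []
    for key, data in source.items():
        if dest.get(key) != data:
            dest[key] = data
            copied.append(key)
    return copied
```

Run from cron, this only transfers objects that changed since the last pass, which keeps the transfer bill down; the provider-specific code is confined to the two wrapper objects.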
Matt
------Original Message------
From: Chris Snyder
To: Matt Juszczak
Cc: talk at lists.nycbug.org
Subject: Re: [nycbug-talk] Off topic: S3 backups
Sent: Jul 29, 2010 11:46
On Thu, Jul 29, 2010 at 10:34 AM, Matt Juszczak <matt at atopia.net> wrote:
> Hi folks,
>
> I know many of us work with cloud providers on a day-to-day basis. I have an urgent need to implement bucket redundancy, by either copying one bucket to another at regular intervals, or copying an S3 bucket to something like Rackspace Cloud Files at regular intervals.
>
> Has anyone ever had to do this before? Most of the scripts I have found require the files to be downloaded first, then re-uploaded.
>
> Matt
>
So you plan to transfer the bucket data from one region to another?
Seems extreme, given the built-in redundancy in S3. Or is this a way
to get the reduced redundancy pricing tier into play? Seems like the
transfer costs would eat the savings, but maybe not.
From a data integrity angle, backing up buckets to Rackspace or some
other provider makes more sense to me, since Amazon itself is a single
point of failure. You could have a process running in the Rackspace
cloud that backs up your S3 buckets to local storage.
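For the bucket-to-bucket case within S3 itself, the download/re-upload round trip that Matt's scripts do is avoidable: S3 supports a server-side COPY (PUT with x-amz-copy-source), so the object bytes never transit the machine running the script. A rough sketch using the modern boto3 client (the library and bucket names are illustrative assumptions, not part of the original discussion):

```python
def copy_bucket(s3, src_bucket, dst_bucket):
    """Mirror src_bucket into dst_bucket with server-side copies.

    s3 is a boto3-style S3 client; only list_objects_v2 pagination and
    copy_object are used, so a stub client works for testing.
    """
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            # Server-side copy: S3 moves the bytes internally.
            s3.copy_object(
                Bucket=dst_bucket,
                Key=obj["Key"],
                CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
            )

if __name__ == "__main__":
    import boto3  # assumed installed; bucket names are hypothetical
    copy_bucket(boto3.client("s3"), "live-bucket", "live-bucket-mirror")
```

Note this only addresses same-provider redundancy; it does nothing for the Amazon-as-single-point-of-failure concern above.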