[nycbug-talk] postgres incremental backups?

Jeff Quast af.dingo at gmail.com
Fri Aug 25 17:38:45 EDT 2006


On 8/24/06, Peter Wright <pete at nomadlogic.org> wrote:
> Hi All,
> I have a postgres 7.4.x database which we will be using for a data
> warehousing project.  We project the initial database clusters will be in
> the ~200-300gig range (served from a NetApp volume).  As I mentioned we
> are using release 7.4, as we have experienced stability problems with 8.x
> in some specific circumstances.  Is there a good way to do incremental
> backups of pgsql db's that does not require taking the database down?  I
> guess I am looking for something similar to PITR (point in time recovery)
> for 7.4.x.  What are other folks out there using?
>
> thanks!
> -pete
>

I don't know much about PostgreSQL, but I use CVS to do incremental
backups of a (much smaller) MySQL database:

#!/bin/sh
export CVSROOT=/var/cvs
dbserver=127.0.0.1
username=xxx
password=xxx
database=xxx_db
tables="table1 table2"
dump=`mktemp /tmp/dbXXXXX`
dest=$HOME/cvs/xxx/xxx
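# note: assumes $dest has already been checked out and "cvs add"ed in this working copy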

/usr/local/bin/mysqldump \
        -u ${username} -p${password} \
        ${database} ${tables} > $dump
if [ $? -ne 0 ]; then
        # don't leave a stale temp file behind on a failed dump
        echo "Failure in mysqldump" >&2
        rm -f ${dump}
else
        # insert newlines and indentation so each row lands on its own line,
        # for readability and cleaner diffing
        sed s/'),/),\
        '/g < ${dump} > ${dump}.swp
        sed s/'VALUES (/VALUES\
        ('/g < ${dump}.swp > ${dest}
        rm -f ${dump} ${dump}.swp
        cvs ci -m "dump of database '${database}', tables '${tables}'" $dest
fi

This gives surprisingly readable split diffs in cvsweb. Admittedly the
database dump is only ~200MB, not GB :D - I run this from a weekly
cron job.
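For your postgres case the same trick might work with pg_dump instead
of mysqldump - pg_dump runs against a live cluster, so no downtime,
and its plain-text output already puts one row per COPY line, so you
might not even need the sed step. Completely untested sketch
(credentials and paths are just placeholders, I haven't tried this on
7.4):

#!/bin/sh
# untested sketch: same dump-into-CVS idea as above, but with pg_dump
export CVSROOT=/var/cvs
export PGPASSWORD=xxx           # or keep it in ~/.pgpass instead
dbserver=127.0.0.1
username=xxx
database=xxx_db
dump=`mktemp /tmp/dbXXXXX`
dest=$HOME/cvs/xxx/xxx

/usr/local/bin/pg_dump \
        -h ${dbserver} -U ${username} \
        ${database} > $dump
if [ $? -ne 0 ]; then
        echo "Failure in pg_dump" >&2
        rm -f ${dump}
else
        mv ${dump} ${dest}
        cvs ci -m "dump of database '${database}'" $dest
fi

Whether CVS diffs stay usable at 200-300 gigs is another story,
though - at that size you'd probably want to loop over tables with
pg_dump -t (7.4 only takes one table per invocation) and commit each
one separately.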
