Post by Austin Gonyou
Anyone have an order of operations for something like this?
I was thinking that while Oracle is in hot-backup mode, I would just be
able to do xfsdump /dev/sdb /dev/sdaa or some such thing, until all
volumes are done.
If I don't use xfs_freeze, though, is that a good copy?
I'm a bit confused at this point, and wanted to ask the community. TIA
[grrr... stupid broken "reply" on email client. ALSO sending reply to
list for anyone else who may like to know]
I think as long as your Oracle is in the proper state, the filesystem is
mostly irrelevant. xfsdump, I assume, sees what the OS sees as the
current state of each file, so you only have to worry about whatever
race conditions the application itself may create on the files you're
backing up, not about whether the file on the filesystem is "consistent"
-- from the perspective of the VFS it is. Oracle's hot-backup mode
specifically handles those race conditions during backup, so there's no
trouble there either.
You will need to have your Oracle in ARCHIVELOG mode so full redo logs
get archived rather than recycled. (Oracle won't let you put a
tablespace into hot-backup mode without being in ARCHIVELOG mode
anyway.) ALTER each TABLESPACE you wish to back up with BEGIN BACKUP.
This freezes the checkpoint SCN in the datafile headers and makes Oracle
log whole block images to the redo stream, so that any block your backup
copies mid-write (a "fractured" block) can be repaired from redo during
recovery. (Note that writes to the datafiles themselves do continue
during hot backup; it's the extra redo, not a write freeze, that keeps
the backup recoverable.) Back up your Oracle datafiles. ALTER each
TABLESPACE END BACKUP to take it out of backup mode and resume normal
checkpointing. Then force a log switch (ALTER SYSTEM ARCHIVE LOG
CURRENT) and back up the archived redo logs.
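The ALTER sequence above can be sketched as a small shell helper. This is
a hedged sketch only: the tablespace name, the copy step, and the idea of
piping the emitted SQL to sqlplus are my placeholders, not from the
original post.

```shell
#!/bin/sh
# Sketch of the hot-backup sequence described above. The tablespace name
# and the copy step are placeholders; in real use the emitted SQL would be
# run via sqlplus as a DBA, with the datafile copy happening between the
# two ALTER statements.
hot_backup_sql() {
    ts="$1"
    cat <<EOF
ALTER TABLESPACE $ts BEGIN BACKUP;
-- copy the datafiles belonging to $ts here (xfsdump, cp, etc.)
ALTER TABLESPACE $ts END BACKUP;
EOF
}

hot_backup_sql USERS
```

Piping that output to sqlplus (and doing the actual copy in between) is
one way to drive it, but verify against your own environment first.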
Hopefully your redo logs, archived redo logs and datafiles are on
different partitions so xfsdump can archive them separately. (Generally
they're at least on different spindles for performance reasons.)
The only place you may run into problems is with the online redo logs,
since there is no point at which Oracle is guaranteed not to be writing
to them, or not to have outstanding writes that haven't hit disk yet.
Once a redo log has filled up, though, it is archived and recycled, and
you can safely back up the archived logfiles without fear of writes to
the file while it is being backed up.
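Assuming the datafiles and archived redo logs live on separate XFS
filesystems, the per-filesystem dumps might look like the sketch below.
The mount points (/u01, /u03) and dump destinations are my invention,
not from the post; DRY_RUN=1 just echoes the commands instead of
running them.

```shell
#!/bin/sh
# Hedged sketch: one level-0 xfsdump per filesystem. /u01 (datafiles)
# would be dumped while the tablespaces are in BEGIN BACKUP; /u03
# (archived redo logs) afterwards, once the current log is archived.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

run xfsdump -l 0 -f /backup/datafiles.xfsdump /u01
run xfsdump -l 0 -f /backup/archlogs.xfsdump /u03
```

xfsdump takes the dump destination via -f and the mounted filesystem as
its final argument; -l 0 requests a full (level-0) dump.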
Does that help at all?
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
#!/usr/bin/perl -w
$_='while(read+STDIN,$_,2048){$a=29;$b=73;$c=142;$t=255;@t=map
{$_%16or$t^=$c^=($m=(11,10,116,100,11,122,20,100)[$_/16%8])&110;
$t^=(72,@z=(64,72,$a^=12*($_%16-2?0:$m&17)),$b^=$_%64?12:0,@z)
[$_%8]}(16..271);if((@a=unx"C*",$_)[20]&48){$h=5;$_=unxb24,join
"",@b=map{xB8,unxb8,chr($_^$a[--$h+84])}@ARGV;s/...$/1$&/;$d=
unxV,xb25,$_;$e=256|(ord$b[4])<<9|ord$b[3];$d=$d>>8^($f=$t&($d
12^$d>>4^$d^$d/8))<<17,$e=$e>>8^($t&($g=($q=$e>>14&7^$e)^$q*
8^$q<<6))<<9,$_=$t[$_]^(($h>>=8)+=$f+(~$g&$t))for@a[128..$#a]}
print+x"C*",@a}';s/x/pack+/g;eval
usage: qrpff 153 2 8 105 225 < /mnt/dvd/VOB_FILENAME \
| extract_mpeg2 | mpeg2dec -
http://www.cs.cmu.edu/~dst/DeCSS/Gallery/
http://www.eff.org/ http://www.anti-dmca.org/