Backup Oracle to S3 – Part 1
Oracle can back up directly to Amazon S3 buckets.
This is an attractive option for several reasons:
- offsite backup
- inexpensive storage
- backups can be migrated automatically to archive storage (Amazon Glacier), as sketched below
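As an illustration of that last point (this is standard S3 functionality, separate from the Oracle setup that follows): a lifecycle rule on the bucket transitions objects to Glacier automatically. The bucket name matches the one created later in this post; the 30-day threshold is an arbitrary example.

$ cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-oracle-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF
$ aws s3api put-bucket-lifecycle-configuration \
    --bucket pythian-oracle-backups \
    --lifecycle-configuration file://lifecycle.json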
The ‘Cloud Backup’ capability of Oracle comes with the ‘Oracle Secure Backup’ package, which is an extra-cost option.
Circumstances may not allow you to purchase this software; perhaps you are running a departmental server with no budget for it. There could be many reasons.
That does not mean that your Oracle backups cannot be made directly to S3.
If your databases are running on a reasonably recent version of Linux, you may be able to use s3fs, which relies on FUSE (Filesystem in Userspace).
Requirements:
- Linux kernel 2.6.18-164 or later
- the kernel must support FUSE (a quick check is shown below)
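A quick way to verify both requirements; the kernel version shown is just illustrative output:

$ uname -r
3.10.0-1062.el7.x86_64
$ grep -w fuse /proc/filesystems
nodev	fuse

If the grep returns nothing, loading the module as root with 'modprobe fuse' should take care of it.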
The system used for the test:
- Oracle Linux 7.7
- Oracle Database 19.3
Due to the nature of how S3 works (object storage, with no random access), this method is appropriate only for static backup files.
S3 would not be a good choice for updatable image copies, for instance.
Another limitation is file size – 5 TB is the maximum object size allowed. More on that later.
One further limitation was discovered, along with a workaround; both are discussed below.
Let’s get started.
Install Amazon Utilities
This is not a tutorial on installing Amazon command line utilities, so only the necessary commands are shown for those components.
I first installed awscli and s3cmd:
# yum install -y awscli s3cmd
AWS credentials were then set up for both the oracle and root users.
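Roughly, that setup looked like the following for each account; the keys are placeholders, not real credentials. Note that s3fs looks for credentials in ~/.passwd-s3fs (or /etc/passwd-s3fs) rather than in the awscli configuration, though recent versions may also read ~/.aws/credentials:

$ aws configure                  # prompts for access key, secret key, region
$ s3cmd --configure              # s3cmd keeps its own config in ~/.s3cfg
$ echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs       # s3fs refuses a credentials file with looser permissions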
Now create an s3 bucket:
# s3cmd mb s3://pythian-oracle-backups
Now install s3fs-fuse from the yum repo:
# yum install -y s3fs-fuse
Create a mount point, and give it to oracle:
# mkdir /mnt/s3-oracle-backup
# chown oracle:oinstall /mnt/s3-oracle-backup
Allow oracle to run fusermount:
# chmod +x /usr/bin/fusermount
As oracle, I then tried to mount an s3 ‘filesystem’:
$ s3fs pythian-oracle-backups /mnt/s3-oracle-backup
Checking for a mount at /mnt/s3-oracle-backup showed there was not one:
# ls /mnt | grep s3
#
Next I tried the debug options:
$ s3fs pythian-oracle-backups /mnt/s3-oracle-backup -o dbglevel=info -f -o curldbg
A couple of things stood out here:
- libcurl and libcurl-devel were out of date – newer versions were recommended for performance reasons
- access was denied due to invalid credentials
[WAN] curl.cpp:ResetHandle(1855): The CURLOPT_SSL_ENABLE_ALPN option could not be unset. S3 server does not support ALPN, then this option should be disabled to maximize performance. you need to use libcurl 7.36.0 or later.
[WAN] curl.cpp:ResetHandle(1858): The S3FS_CURLOPT_KEEP_SENDING_ON_ERROR option could not be set. For maximize performance you need to enable this option and you should use libcurl 7.51.0 or later.
...
[CRT] s3fs.cpp:s3fs_check_service(3864): invalid credentials(host=https://s3.amazonaws.com) - result of checking service.
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3440): Exiting FUSE event loop due to errors
The credentials were correct, as s3cmd was working properly:
$ s3cmd ls
2015-12-28 21:29 s3://pythian-apps
2016-01-09 21:33 s3://pythian-aws-billing
2010-08-08 00:07 s3://pythian-data
2017-06-12 23:55 s3://pythian-dropbox-backup
2010-08-08 00:08 s3://pythian-fcedit
2010-08-07 19:32 s3://pythian-keepass
2011-09-19 17:28 s3://pythian-keepass2
2011-09-19 18:04 s3://pythian-keepass2-bkup
2010-08-08 00:08 s3://pythian-manuals
2010-11-06 23:25 s3://pythian-maps
2014-08-28 21:14 s3://pythian-music
2012-01-07 18:56 s3://pythian-oracle
2020-03-04 16:18 s3://pythian-oracle-backups
2018-01-15 00:46 s3://pythian-oracle-sw
2010-08-07 18:21 s3://pythian-photos
2016-03-05 16:42 s3://pythian-rx7
2018-12-14 15:49 s3://pythian-stc
2015-03-02 23:22 s3://linux-backups
So I removed s3fs-fuse:
# yum erase s3fs-fuse
Build s3fs-fuse from Source
Building software from source is something I have found necessary a number of times on RedHat Linux and its variants.
While the software I need may be available in the repos, it is sometimes too old to be of use.
Such is the case with s3fs-fuse.
Before continuing, I obtained more recent versions of libcurl and libcurl-devel.
Get Recent libcurl – instructions
# rpm -Uvh http://www.city-fan.org/ftp/contrib/yum-repo/rhel7/x86_64/city-fan.org-release-2-1.rhel7.noarch.rpm
# yum --enablerepo=city-fan.org install libcurl libcurl-devel
Installation instructions are all in the s3fs-fuse github repo, and can be easily referenced in a browser:
Platform Specific notes – install s3fs-fuse
Install other required libs:
# yum install gcc libstdc++-devel gcc-c++ fuse fuse-devel libxml2-devel mailcap git automake make
Now get the source:
# git clone https://github.com/s3fs-fuse/s3fs-fuse.git
At this point the software can be compiled:
# cd s3fs-fuse/
# ./autogen.sh
# ./configure --prefix=/usr --with-openssl
# make install
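A quick sanity check that the newly built binary is the one on the PATH; the version reported will depend on the commit you cloned:

# which s3fs
/usr/bin/s3fs
# s3fs --version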
Again, make fusermount executable for others:
# chmod +x /usr/bin/fusermount
As root, modify the file /etc/fuse.conf by uncommenting the user_allow_other line:
# mount_max = 1000
user_allow_other
As oracle, mount the filesystem and test it:
$ s3fs pythian-oracle-backups /mnt/s3-oracle-backup -o default_permissions -o allow_other -o 'umask=0007'
$ touch /mnt/s3-oracle-backup/test-file-oracle
$ ls -l /mnt/s3-oracle-backup/
total 1
-rw-r--r-- 1 oracle oinstall 0 Mar  4 11:18 test-file-oracle
$ s3cmd ls s3://pythian-oracle-backups
2020-03-04 19:18         0   s3://pythian-oracle-backups/test-file-oracle
The extra options allow the grid user to also write to the mounted filesystem. In my case that was necessary, as the database under test uses ASM, running as ‘grid’.
See FUSE Manpage for more on FUSE options.
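Not used here, but worth noting: the s3fs README also documents an /etc/fstab entry for mounting at boot. A sketch matching the options above – for a root-initiated mount, the credentials would need to be in /etc/passwd-s3fs:

pythian-oracle-backups /mnt/s3-oracle-backup fuse.s3fs _netdev,allow_other,default_permissions,umask=0007 0 0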
Now for some backups.
RMAN Backups
At first, I attempted to use RMAN to create backups.
This failed with the following errors:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch1 channel at 03/04/2020 16:49:58
ORA-19506: failed to create sequential file, name="CDB_T20200304_db_s15_p1_t1034182197"
ORA-27028: skgfqcre: sbtbackup returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text
After running strace on the job, I could see that errors were occurring on read() and lseek() calls.
It appeared that RMAN was trying to do block-level I/O on the files, which is not going to work on S3 (object) storage.
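For reference, this is the sort of invocation used – attaching to the RMAN channel's server process, whose PID is left as a placeholder here:

$ strace -f -tt -e trace=read,write,lseek,open -p <channel-pid> -o /tmp/rman-strace.out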
So, it was time to go old school and use user-managed backups.
That was also not without some issues, but they can be handled.
User-Managed Backups
Before there was RMAN, there were user-managed backups.
While not commonly used anymore, this method of Oracle backup still works.
The documentation just referenced discusses all the considerations in creating these backups.
Essentially, these are the steps (a minimal SQL sketch follows the list):
- put the database in backup mode
- backup data files
- take the database out of backup mode
- switch logfile
- backup archive logs
- backup controlfiles (both binary and to trace)
- backup spfile to pfile
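A minimal SQL*Plus sketch of those steps – the paths are placeholders, and the script at the end of this post automates all of it, including the ASM copies:

SQL> alter database begin backup;
-- copy the datafiles (asmcmd cp, or OS cp for filesystem datafiles) while in backup mode
SQL> alter database end backup;
SQL> alter system switch logfile;
-- copy the archived logs, then:
SQL> alter database backup controlfile to '/mnt/s3-oracle-backup/controlfile.bkup' reuse;
SQL> alter database backup controlfile to trace as '/mnt/s3-oracle-backup/controlfile.txt' reuse;
SQL> create pfile='/mnt/s3-oracle-backup/pfile.txt' from spfile;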
Errors in User-Managed Backup
The first attempts to back up data files and archive logs failed, but in a different manner than the RMAN backups.
ASMCMD-8016: copy source '+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undotbs1.276.1020807859' and target '/mnt/s3-oracle-backup/pdb1/2020-03-05_12-44-54/datafiles/undotbs1.276.1020807859' failed
ORA-19505: failed to identify file "/mnt/s3-oracle-backup/pdb1/2020-03-05_12-44-54/datafiles/undotbs1.276.1020807859"
ORA-27044: unable to write the header block of file
Linux-x86_64 Error: 5: Input/output error
Additional information: 4
ORA-15120: ASM file name '/mnt/s3-oracle-backup/pdb1/2020-03-05_12-44-54/datafiles/undotbs1.276.1020807859' does not begin with the ASM prefix character
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 617
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)
My hypothesis at this point was that the asmcmd cp command had made the call to create the file, and when it tried to write to the file immediately afterward, the file was not yet available. This could be due to the nature of s3 – the filesystem has not finished creating the file by the time ASM tries to write to it.
If that was the case, I speculated that pre-creating the files with touch would solve the problem; it did.
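The pattern, distilled from the s3Touch function in the script at the end of this post; the one-second sleep is simply a safety margin that worked in testing:

# precreate the target object so it exists by the time asmcmd cp writes to it
touch "${targetDir}/${shortName}"
sleep 1
asmcmd cp "$fullName" "${targetDir}"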
Following is the output from a partial backup – the datafiles and archivelogs were skipped, and only controlfile and pfile backups created.
[grid@ora192rac01 RMAN]$ ./backup-db-s3.sh
s3fs on /mnt/s3-oracle-backup type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=54321,group_id=54321,default_permissions,allow_other)
SID: cdb1
ORACLE_SID = [+ASM1] ? The Oracle base has been changed from /u01/app/19.0.0/grid_base to /u01/app/oracle
undotbs1.276.1020807859:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undotbs1.276.1020807859
sysaux.275.1020807859:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/sysaux.275.1020807859
undo_2.279.1020807925:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undo_2.279.1020807925
system.277.1020807859:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/system.277.1020807859
users.280.1020807941:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/users.280.1020807941
ORACLE_SID = [cdb1] ? The Oracle base has been changed from /u01/app/oracle to /u01/app/19.0.0/grid_base
Datafile: undotbs1.276.1020807859:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undotbs1.276.1020807859
Skipping Datafile: +DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undotbs1.276.1020807859
Touch: touch /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/datafiles/undotbs1.276.1020807859
Datafile: sysaux.275.1020807859:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/sysaux.275.1020807859
Skipping Datafile: +DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/sysaux.275.1020807859
Touch: touch /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/datafiles/sysaux.275.1020807859
Datafile: undo_2.279.1020807925:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undo_2.279.1020807925
Skipping Datafile: +DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/undo_2.279.1020807925
Touch: touch /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/datafiles/undo_2.279.1020807925
Datafile: system.277.1020807859:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/system.277.1020807859
Skipping Datafile: +DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/system.277.1020807859
Touch: touch /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/datafiles/system.277.1020807859
Datafile: users.280.1020807941:+DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/users.280.1020807941
Skipping Datafile: +DATA/CDB/9423786B7CAA4898E053DD01A8C04EF1/DATAFILE/users.280.1020807941
Touch: touch /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/datafiles/users.280.1020807941
ORACLE_SID = [+ASM1] ? The Oracle base has been changed from /u01/app/19.0.0/grid_base to /u01/app/oracle
System altered.
min scn: 62876556
max scn: 62877828
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
thread_1_seq_436.284.1034250403:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_436.284.1034250403
thread_1_seq_437.283.1034251313:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_437.283.1034251313
thread_2_seq_345.281.1034251317:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_2_seq_345.281.1034251317
thread_1_seq_438.280.1034251757:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_438.280.1034251757
thread_1_seq_436.284.1034250403:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_436.284.1034250403
thread_1_seq_437.283.1034251313:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_437.283.1034251313
thread_2_seq_345.281.1034251317:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_2_seq_345.281.1034251317
thread_1_seq_438.280.1034251757:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_438.280.1034251757
ORACLE_SID = [cdb1] ?
The Oracle base has been changed from /u01/app/oracle to /u01/app/19.0.0/grid_base
archive log: thread_1_seq_436.284.1034250403:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_436.284.1034250403
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_436.284.1034250403
archive log: thread_1_seq_437.283.1034251313:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_437.283.1034251313
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_437.283.1034251313
archive log: thread_2_seq_345.281.1034251317:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_2_seq_345.281.1034251317
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_2_seq_345.281.1034251317
archive log: thread_1_seq_438.280.1034251757:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_438.280.1034251757
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_438.280.1034251757
archive log: thread_1_seq_436.284.1034250403:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_436.284.1034250403
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_436.284.1034250403
archive log: thread_1_seq_437.283.1034251313:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_437.283.1034251313
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_437.283.1034251313
archive log: thread_2_seq_345.281.1034251317:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_2_seq_345.281.1034251317
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_2_seq_345.281.1034251317
archive log: thread_1_seq_438.280.1034251757:+FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_438.280.1034251757
Skipping Archivelog: +FRA/CDB/ARCHIVELOG/2020_03_05/thread_1_seq_438.280.1034251757
And here are the files created:
[grid@ora192rac01 RMAN]$ find /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/ -type f | xargs ls -ldtar
-rwxrwx--- 1 grid   oinstall      483 Mar  5 12:09 /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/datafiles/datafiles.txt
-rwxrwx--- 1 grid   oinstall      761 Mar  5 12:09 /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/archivelogs/archlogs.txt
-rwxrwx--- 1 grid   oinstall 19841024 Mar  5 12:09 /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/controlfile/controlfile.bkup
-rwxrwx--- 1 grid   oinstall    12646 Mar  5 12:09 /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/controlfile/controlfile.txt
-rwxrwx--- 1 oracle asmadmin     1950 Mar  5 12:09 /mnt/s3-oracle-backup/pdb1/2020-03-05_12-09-08/pfile/pfile.txt
As you may guess, even a small database takes some time to back up to a remote destination when the uplink (10 Mbit in my case) is rather limited.
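For a rough sense of scale: 10 Mbit/s is about 1.25 MB/s, so even a modest 20 GB of datafiles (a hypothetical size) needs roughly 20,480 MB ÷ 1.25 MB/s ≈ 16,400 seconds – around four and a half hours, ignoring protocol overhead.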
Archive logs have been successfully backed up. A full backup is currently running and will take some time to complete.
The backup-db-s3.sh script may be found at the end of this blog.
Un-mounting the s3 Filesystem
There is a bug in s3fs with regard to unmounting a filesystem:
$ s3fs unmount /mnt/s3-oracle-backup/ -o nonempty
fusermount: failed to access mountpoint /mnt/s3-oracle-backup: Permission denied
Running this fusermount command was successful, however:
$ fusermount -u -o rw,nosuid,nodev,nonempty,subtype=s3fs /mnt/s3-oracle-backup
Wrap Up
There are other S3 limitations that may come into play.
In particular, remember that s3fs is simulating a filesystem on top of S3.
Large numbers of files in an S3 bucket can make a directory listing quite slow.
Using sub-directories helps alleviate that issue, as shown in the snippet below.
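The script at the end of this post does exactly that, giving each run its own timestamped directory tree under the database name:

declare timestamp=$(date '+%Y-%m-%d_%H-%M-%S')
declare backupHome=${s3Mount}/${dbname}/${timestamp}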
And finally, the big question: Can this backup be restored?
Stay tuned, that will be addressed in a future installment.
backup-db-s3.sh
#!/usr/bin/env bash

: <<COMMENT

backup-db-s3.sh
Oracle User Managed Backup
https://docs.oracle.com/database/121/BRADV/osbackup.htm

Jared Still - Pythian 2020
still@pythian.com
jkstill@gmail.com

This script is S3 specific
It was written as part of a blog on backing up directly to Amazon AWS S3

COMMENT

: <<NOTES

This script makes the following assumptions:

- the database being backed up is a PDB (pluggable database) in a container database (CDB)
- ASM is used
- backup is made by the grid user, as oracle does not have ASM privileges

The /mnt/s3-oracle-backup directory was initially created by the oracle user.
The dba group was added to the grid user so that grid could also write to the directory.

As both oracle and grid must write here, it is necessary for the mount point
to have a common group for the oracle and grid users.

Note: spfile and controlfile backups come from the oracle user, even if run as grid,
as it is the oracle db instance creating the files.

There is no warranty expressed or implied by the use of this script - use it at your own risk.

As per usual, be sure to test any backups made to verify they are complete and recoverable.

NOTES

declare -A sid
sid[ora]='cdb1'
sid[asm]='+ASM1'

declare dbname=pdb1

declare -A db
db[db]=dbserver/pdb1
db[cdb]=dbserver/cdb

declare -A username
username[db]=sys
username[cdb]=sys

declare -A password
password[db]=PASSWORD
password[cdb]=PASSWORD

# any value other than 0 will activate these flags
# skip datafile backup - for debugging
declare skipDatafileBackup=0
# skip archivelog backup - for debugging
declare skipArchlogBackup=0

#########################################
## user configurations above this line ##
#########################################

declare rc
declare touchSleep=1
declare timestamp=$(date '+%Y-%m-%d_%H-%M-%S')
declare s3Mount=/mnt/s3-oracle-backup
declare backupHome=${s3Mount}/${dbname}/${timestamp}

declare -A targetDirs
targetDirs[datafiles]=${backupHome}/datafiles
targetDirs[archivelogs]=${backupHome}/archivelogs
targetDirs[controlfile]=${backupHome}/controlfile
targetDirs[pfile]=${backupHome}/pfile

declare -A inventory
inventory[datafiles]=${targetDirs[datafiles]}/datafiles.txt
inventory[archivelogs]=${targetDirs[archivelogs]}/archlogs.txt

declare -A sql
declare -A scn

declare -A filenames
filenames[controlfile_backup]=${targetDirs[controlfile]}/controlfile.bkup
filenames[controlfile_backup_tmp]=/tmp/controlfile.tmp
filenames[controlfile_trace]=${targetDirs[controlfile]}/controlfile.txt
filenames[pfile_backup]=${targetDirs[pfile]}/pfile.txt

sql[minscn]='select min(checkpoint_change#) retval from gv\$datafile'
sql[maxscn]='select max(current_scn) retval from gv\$database'
sql[backup_controlfile]="alter database backup controlfile to '${filenames[controlfile_backup]}' reuse"
sql[backup_controlfile_tmp]="alter database backup controlfile to '${filenames[controlfile_backup_tmp]}' reuse"
sql[backup_controlfile_trace]="alter database backup controlfile to trace as '${filenames[controlfile_trace]}' reuse"
sql[backup_spfile]="create pfile='${filenames[pfile_backup]}' from spfile"
sql[begin_backup]="alter database begin backup"
sql[end_backup]="alter database end backup"

newdir () {
  local dirName="$1"

  [[ -z $dirName ]] && {
    echo
    echo "newdir(): dirName cannot be empty"
    echo
    exit 3
  }

  mkdir -p $dirName
  [[ $? -ne 0 ]] && {
    echo
    echo could not create $dirName
    echo
    exit 1
  }

  # original had '[[ -d $newDir ]]' - an undefined variable, so the check never fired
  [[ ! -d $dirName ]] && {
    echo
    echo $dirName not available
    echo
    exit 2
  }

  # ownership is grid:oinstall
  # oracle must also be able to write here:
  chmod g+rwx $dirName
}

# pass in the key to username[], etc., not the actual db name
# pass in a SQL statement that returns a single value
# always runs as sysdba
getSqlVal () {
  local sqldb="$1"; shift
  local sql="$@"

  eval "sqlplus -L -s ${username[$sqldb]}/${password[$sqldb]}@${db[$sqldb]} as sysdba <<-EOF
set pause off
set echo off
set timing off
set trimspool on
set verify off
set head off
clear col
clear break
clear computes
btitle ''
ttitle ''
btitle off
ttitle off
set newpage 1
set pages 0 lines 200 term on feed off
$sql;
exit
EOF
"
}

# execute some sql - no return value expected
# currently the same as getSqlVal, but subject to change
# always runs as sysdba
exeSql () {
  local sqldb="$1"; shift
  local sql="$@"

  eval "sqlplus -L -s ${username[$sqldb]}/${password[$sqldb]}@${db[$sqldb]} as sysdba <<-EOF
set pause off
set echo off
set timing off
set trimspool on
set verify off
set head off
clear col
clear break
clear computes
btitle ''
ttitle ''
btitle off
ttitle off
set newpage 1
set pages 0 lines 200 term on feed off
$sql;
exit
EOF
"
}

cleanup () {
  echo
  echo "!!! $0 Interrupted - cleaning up !!"
  echo
  exeSql db ${sql[end_backup]}
  exit 4
}

exitIfError () {
  # original read '$?' here; use the status passed by the caller instead
  local errCode=$1
  if [[ $errCode -ne 0 ]]; then
    echo
    echo Error Encountered - exiting
    echo
    cleanup
  fi
}

# precreate file names on s3
# then sleep a bit to ensure the file is available
s3Touch () {
  local file2touch="$@"
  touch $file2touch
  declare rc=$?
  [[ $rc -ne 0 ]] && {
    echo
    echo "touch failed with $rc (s3Touch)"
    echo
  }
  sleep $touchSleep
}

trap "cleanup" INT
trap "cleanup" TERM
trap "cleanup" ABRT
trap "cleanup" QUIT

# ensure that s3fs is mounted
/bin/mount | grep $s3Mount
[[ $? -ne 0 ]] && {
  echo
  echo "! $s3Mount is not Mounted !"
  echo
  exit 5
}

for dir2create in ${targetDirs[@]}
do
  newdir $dir2create
done

unset ORAENV_ASK
echo SID: ${sid[ora]}
. oraenv <<< ${sid[ora]}

scn[min]=$(getSqlVal db ${sql[minscn]})
exitIfError $?

# begin backup mode - get a list of datafiles
exeSql db ${sql[begin_backup]}
exitIfError $?

sqlplus -L -s ${username[db]}/${password[db]}@${db[db]} as sysdba <<-EOF
set echo off term on pause off verify off
set feed off head off
set linesize 1000 trimspool on
col name_data format a1000
spool ${inventory[datafiles]}
select substr(file_name,instr(file_name,'/',-1)+1) ||':'|| file_name name_data from dba_data_files;
spool off
exit;
EOF

. oraenv <<< ${sid[asm]}

# backup the datafiles
for df in $(grep -v '^$' ${inventory[datafiles]})
do
  echo Datafile: $df
  declare shortName=$(echo $df | cut -f1 -d:)
  declare fullName=$(echo $df | cut -f2 -d:)

  if [[ $skipDatafileBackup -ne 0 ]]; then
    echo "Skipping Datafile: $fullName"
  else
    # asmcmd cp expects the file to appear immediately after creating it,
    # and that is not occurring with s3
    # use 'touch' to precreate the file
    s3Touch "${targetDirs[datafiles]}/${shortName}"
    asmcmd cp "$fullName" ${targetDirs[datafiles]}
  fi
done

. oraenv <<< ${sid[ora]}

# end backup mode
exeSql db ${sql[end_backup]}
exitIfError $?

# switch logfile
exeSql cdb 'alter system switch logfile'

scn[max]=$(getSqlVal db ${sql[maxscn]})
exitIfError $?

echo min scn: ${scn[min]}
echo max scn: ${scn[max]}

. oraenv <<< ${sid[ora]}

# get list of archivelogs
sqlplus -L -s ${username[db]}/${password[db]}@${db[db]} as sysdba <<-EOF
set echo off term on pause off verify off
set feed off head off
col name format a200
set linesize 1000 trimspool on
col name_data format a1000
spool ${inventory[archivelogs]}
select distinct substr(name,instr(name,'/',-1)+1) ||':'|| name name_data
from gv\$archived_log
where first_change# >= (
  select min(first_change#) min_change
  from gv\$archived_log
  where ${scn[min]} between first_change# and next_change#
);
spool off
exit;
EOF

echo
echo "finished archivelog inventory"
echo

. oraenv <<< ${sid[asm]}

# backup the archivelogs
for archlog in $(grep -v '^$' ${inventory[archivelogs]})
do
  echo archive log: $archlog
  declare shortName=$(echo $archlog | cut -f1 -d:)
  declare fullName=$(echo $archlog | cut -f2 -d:)

  if [[ $skipArchlogBackup -ne 0 ]]; then
    echo "Skipping Archivelog: $fullName"
  else
    s3Touch "${targetDirs[archivelogs]}/${shortName}"
    asmcmd cp "$fullName" ${targetDirs[archivelogs]}
  fi
done

# backup the controlfile
touch ${filenames[controlfile_backup]}
touch ${filenames[controlfile_trace]}

exeSql cdb "${sql[backup_controlfile_tmp]}"
rc=$?
#exitIfError $rc

: <<'ORA-00245'

This error may occur on RAC when making a controlfile backup:

  ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

As we are not making the backup to shared storage, there is little to be done about it.
A trace controlfile backup is also made.

ORA-00245

if [[ $rc -eq 0 ]]; then
  cp ${filenames[controlfile_backup_tmp]} ${filenames[controlfile_backup]}
else
  echo
  echo Failed to create temp backup controlfile
  echo This is probably due to ORA-00245 on RAC
  echo
  echo This can happen if the backup is on non-shared storage
  echo and the controlfile is currently being written by another instance
  echo
  echo A backup controlfile to trace will still be created
  echo
fi

# backing up the controlfile directly to S3 may work, but often fails, much as
# datafiles do if not first 'touched' - it also fails due to 'write past end of file'
# so do not back up directly to s3; just copy the tmp backup controlfile to s3 as seen above
#exeSql cdb "${sql[backup_controlfile]}"
#exitIfError $?

exeSql cdb "${sql[backup_controlfile_trace]}"
exitIfError $?

# it is assumed that an spfile is in use
# modify accordingly if that is not the case
#touch ${filenames[pfile_backup]}
exeSql cdb "${sql[backup_spfile]}"
exitIfError $?