Sunday, August 24, 2014

AWR Warehouse

Last week I noticed that there is a new patch for Enterprise Manager which enables the AWR Warehouse feature. MOS note ID 1901202.1 describes the bundle patch for OEM 12c Release 4.

Today I had a chance to install it in my lab, so now I can start testing the new OEM feature.
There is some documentation here and on Kellyn's blog.


It is not configured out of the box, so the first task is to configure the AWR Warehouse repository. In my case I will use the same database which is used for the OEM repository.



The retention period and the staging area for snapshot files have to be configured as well.


After these two steps the AWR Warehouse configuration job is started, and when it finishes the AWR Warehouse will be ready to use.
 

When the repository is ready, we can start adding databases which will be the source of AWR data.
 

To add a new database to the warehouse, it has to be already configured in OEM and have default credentials set.
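The default (preferred) credentials can be set from the command line as well; a hedged sketch, with hypothetical target and credential names:

emcli set_preferred_credential -set_name="DBCredsNormal" -target_name="PROD1" -target_type="oracle_database" -credential_name="PROD1_CREDS" -credential_owner="SYSMAN"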


If all conditions are met, the database is added successfully.



Now it's time to play with this new feature and see what we can achieve using it.

regards,
Marcin




Crossplatform transportable tablespaces - part 2

It took some time since I wrote the first post about TTS migration, but I finished that project literally hours before my summer break. Now, after a couple of days enjoying the thermal waters and good wine of Hungary, it's time to write the next post.

As I described in my previous post, I had to migrate a database from HP-UX to Linux and also upgrade it from 10g to 12c. This time it was only a PoC, but my goal was to minimize the downtime of the production database.

Source database datasheet:
- version 10.2.0.4
- OS - HP-UX
- existing backup using data file copies
- one backup set per data file
- daily incremental backups are recovered into the data file copies and kept in the FRA

On the target server a new version of Oracle, 12.1.0.1, has been installed and configured with ASM. A new database with the same character set as the source database has been created as well.


Target database datasheet:
- version 12.1.0.1
- OS -Linux 64 bit
- storage - ASM
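HP-UX IA (64-bit) is a big-endian platform while Linux x86-64 is little-endian, which is why the data files have to be converted during the migration. The endian formats can be checked in V$TRANSPORTABLE_PLATFORM (the exact platform name strings below are an assumption - verify them in your own database):

select platform_name, endian_format
from v$transportable_platform
where platform_name in ('HP-UX IA (64-bit)', 'Linux x86 64-bit');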

Transportable tablespaces (TTS) allow us to migrate data between databases, but it is the DBA's responsibility to migrate the rest of the objects, like views and PL/SQL code, using for example Data Pump. Before I started work on TTS I did the following preparation steps:
  1. On the source database, identify the list of tablespaces and their data files to move to the new server
  2. On the source database, identify the owners of objects included in the TTS set
    select distinct owner from dba_tables where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');
    
  3. On the source database, verify that the tablespaces are self-contained
    begin
       SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'LIST,OF,TABLESPACES,TO,MIGRATE', full_check => TRUE);
    end;
    /
    
    select * from SYS.TRANSPORT_SET_VIOLATIONS;
    
  4. On the target database, create owners for all objects included in the TTS set (a hypothetical example follows this list)
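For point 4, a minimal sketch - the user name and password are placeholders, and in practice the account needs whatever grants the application schema requires:

    create user app_owner identified by "ChangeMe_1" default tablespace users;
    grant create session, resource to app_owner;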

This is the list of steps I performed to achieve my goal.
  1. Copy the existing data file copies to the new server - if another location is used on the new server, change the script in point 2
  2. Create a script to convert each data file copy into a data file in the new location
    select 'convert datafile ''' || b.name || ''' format ''+DATA/POCDB/TTS/' ||
           REGEXP_REPLACE(f.name,'(/disk\d/oradata/XXX/)','') ||
           ''' from platform ''HP-UX IA (64-bit)'';'
    from V$BACKUP_COPY_DETAILS b, v$datafile f
    where f.file# = b.file#;
    
  3. Convert the files using the script from point 2. Example output:
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_9x3xjcon_.dbf' format '+DATA/POCDB/TTS/reports01.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_aas24412_.dbf' format '+DATA/POCDB/TTS/reports02.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_22ee1445_.dbf' format '+DATA/POCDB/TTS/reports03.dbf' from platform 'HP-UX IA (64-bit)';
    convert datafile '/oracle/fra/o1_mf_pocdb_rep_34ddr545_.dbf' format '+DATA/POCDB/TTS/reports04.dbf' from platform 'HP-UX IA (64-bit)';
    
  4. Copy the daily incremental backup sets to the new server - if another location is used on the new server, change the script in point 5
  5. Create a script to apply the incremental backup sets to the new files
    set linesize 600 pagesize 999 feedback off head off trimspool on
    select 'recover from platform ''HP-UX IA (64-bit)'' foreign datafilecopy ''' || name || ''' from backupset ''' || handle || ''';'
    from V$BACKUP_DATAFILE bd, v$datafile d, V$BACKUP_PIECE bp where bd.file# = d.file#
    and bp.set_count = bd.set_count and handle is not null
    and bp.COMPLETION_TIME > sysdate -1
    order by bp.set_count;
    
  6. Recover the data file copies
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports01.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkp6w_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports02.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkxg5_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports03.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppk4w9_.bkp';
    recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy '+DATA/POCDB/TTS/reports04.dbf' from backupset '/oracle/fra/POCDB/backupset/2014_07_20/o1_mf_nnnd1_TAG20140720T065649_9wppkbws_.bkp';
    
  7. Repeat steps 4 to 6 until the cut-over date
  8. Run an incremental backup on the source
  9. Switch all required tablespaces into read-only mode (see the sketch after this list)
  10. Export the transportable tablespaces using Data Pump with a parameter file like this
    directory=XXX_DPDUMP
    dumpfile=tts_aws1.dmp
    logfile=tts.log
    TRANSPORT_TABLESPACES=TABLESPACES,TO,EXPORT
    TRANSPORT_FULL_CHECK=y
    
    EXPDP command
    expdp parfile=tts.par
    
  11. Run another incremental backup on the source
  12. Copy the backup sets from points 8 and 11 to the new server
  13. Create a script to apply the incremental backup sets to the new files (like in point 5) and run it
  14. Import the transportable tablespaces using the dump file from point 10 and all the converted files. In my case the first attempt took very long, as I hadn't excluded statistics and Oracle was gathering stats during the import. This operation can be postponed to the next phase using the EXCLUDE option. Example IMPDP parameter file
    directory=AWS
    dumpfile=tts_aws1.dmp
    logfile=aws_tts_import1.log
    exclude=TABLE_STATISTICS,INDEX_STATISTICS
    TRANSPORT_DATAFILES=+DATA/POCDB/TTS/reports01.dbf,
    +DATA/POCDB/TTS/reports02.dbf,
    +DATA/POCDB/TTS/reports03.dbf,
    +DATA/POCDB/TTS/reports04.dbf
    
    Run IMPDP command
    impdp parfile=imp.par
    
  15. Export the source database code and users
    expdp directory=DPDUMP dumpfile=code.dmp exclude=TABLE_DATA full=y
    
  16. Import the PL/SQL code - a quick and dirty approach, but it was enough for that case
    impdp directory=AWS TABLE_EXISTS_ACTION=SKIP dumpfile=code.dmp logfile=code_import.log full=y
    
  17. Perform a backup of the new database and gather new statistics
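For step 9, the same script-generation pattern as in points 2 and 5 can be used - a minimal sketch, assuming the same placeholder tablespace list as before:

    select 'alter tablespace ' || tablespace_name || ' read only;'
    from dba_tablespaces
    where tablespace_name in ('LIST','OF','TABLESPACES','TO','MIGRATE');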

 
Performing all the steps above allowed me to migrate a 1 TB database from HP-UX to Linux with 30 minutes of downtime on the source database. As it was a PoC, I left the source database working as the main production database. For a real migration it's necessary to add the time needed to recover the last incremental backup and import the TTS on the new platform, and also to resolve the issue with the time necessary to gather statistics on the new platform. Probably copying the existing stats using PL/SQL will be the solution there, but it has to be checked in the next phase.
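A hedged sketch of that statistics-copy idea, assuming a hypothetical schema APP_OWNER and staging table STATS_STAGE:

-- on the source: stage the schema statistics in a regular table
exec dbms_stats.create_stat_table(ownname => 'APP_OWNER', stattab => 'STATS_STAGE');
exec dbms_stats.export_schema_stats(ownname => 'APP_OWNER', stattab => 'STATS_STAGE');
-- transport the STATS_STAGE table (e.g. with Data Pump), then on the target:
exec dbms_stats.import_schema_stats(ownname => 'APP_OWNER', stattab => 'STATS_STAGE');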

This post is long enough, so I'll leave the lessons learned for the next one.

regards,
Marcin



Saturday, July 26, 2014

Beauty of command line - OEM 12c

Why should all software have a command line and an automation plugin? The answer is simple: if you have to repeat a number of operations for different targets, scripts can help you save your precious time.

I really enjoy the fact that Oracle added a command line interface to Oracle Enterprise Manager, and now you can script a lot of boring tasks, like adding a new administrator to the list of users who can access Named Credentials.

To add a new admin (przepiorom) it's enough to run the following script:
 
add_privs.sh przepiorom

This is a first draft of the script (no error handling, but it does its job):
#!/bin/bash

NEW_ADMIN=$1

# temporary file keyed by this shell's PID
TMPFILE=/tmp/priv_$$

# list all named credential names (skip the header line)
emcli list_named_credentials | awk '{ print $1; }' | grep -v Credential > $TMPFILE

# grant full access on every named credential to the new administrator
while read LINE ; do
        echo $LINE
        emcli grant_privs -name="${NEW_ADMIN}" -privilege="FULL_CREDENTIAL;CRED_NAME=${LINE}:CRED_OWNER=sysman"
done < $TMPFILE

rm $TMPFILE


The next example is another script, which refreshes WebLogic domain components.
When a new version of an application is deployed, the previous one is still registered as a target and you will see it as down in your OEM.



There is a domain refresh command in the OEM menu, but if you have many systems, going through all of them is not what you want. Using the command line and a configuration file you can be done with one line:
emcli login -username=sysman -password=xxxxxxx -force
emcli refresh_wls -input_file=domain_refresh_file:/home/oracle/bin/domain_refresh_file.csv -debug

The content of domain_refresh_file.csv looks like this:

/xxx_soa_mot_domain_soa/soa,R
/xxx_soa_mot_domain_soa/soa,E

There is one line per target, split into two parts.

The first part of the line is the target name and domain name, e.g. /xxx_soa_mot_domain_soa/soa. The second part is the operation:
R - remove targets which don't exist in the domain anymore
E - enable refresh of the domain (i.e. add monitoring targets)
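If you have many domains, the input file itself can be scripted too. A hedged sketch - it assumes the domains are registered with target type weblogic_domain, that your emcli version supports the -script and -noheader options of get_targets, and that the target name is the fourth tab-separated column (verify the layout on your system):

#!/bin/bash
# build an R and an E line for every WebLogic domain target
emcli get_targets -targets="weblogic_domain" -noheader -script | \
        awk -F'\t' '{ printf "%s,R\n%s,E\n", $4, $4 }' > /home/oracle/bin/domain_refresh_file.csv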


regards,
Marcin

Wednesday, July 16, 2014

Crossplatform transportable tablespaces - part 1

There are a couple of ways to do a heterogeneous migration of Oracle databases, but starting with 12c there is a whole set of new RMAN commands to transport data across different platforms.
I was looking for the best method to move tablespaces from HP-UX to Linux, and after some research I found a presentation by Martin Bach from Enkitec (you can watch it online here). Martin describes an Oracle Perl script (MOS ID 1389592.1) which allows you to convert tablespaces on an 11g database, including Exadata. At first sight it looked like a solution for me, but to use that script I would need to create a new backup of all the tablespaces I wanted to move. That could be an option, but I already had a daily updated copy of all data files in the FRA. So in the next step I started to investigate how the script works and how to convert a backup set from HP-UX to Linux and apply an incremental backup to files not registered in the database. The solution was easy to predict - use the PL/SQL RMAN interface, the DBMS_BACKUP_RESTORE package.

It was not the first time I had looked into it, and it reminded me of an Oracle 8i database with a corrupted control file and no RMAN catalog which had to be recovered. For those who forgot, RMAN in Oracle 8i had no control file autobackup functionality, so you had to treat your control files and RMAN catalog with extra care. But using knowledge about file and backup names, PL/SQL and DBMS_BACKUP_RESTORE, it was possible to restore everything manually.

But let's come back to the current problem. I was keen to use PL/SQL, but before that I decided to check what Oracle had introduced in 12c, and there was a nice surprise - all the operations performed by the Oracle Perl script are now possible from the RMAN interface. So in the next step I decided to do a little test with existing copies of data files from the smallest tablespace, called 'USERS'.
This small test was successful, and now I need to document and describe all the steps - that is material for the next post. It works a little bit better than the Oracle script, as there is no need to convert the backup set - recover can apply it and do the conversion on the fly.

New syntax to learn and investigate more:
RMAN> recover from platform 'HP-UX IA (64-bit)' foreign datafilecopy 'C:\TEMP\USERS01.DBF' from backupset 'c:\temp\inc15_2.bkp';

Starting restore at 15-JUL-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file C:\TEMP\USERS01.DBF
channel ORA_DISK_1: reading from backup piece c:\temp\inc15_2.bkp
channel ORA_DISK_1: foreign piece handle=C:\TEMP\INC15_2.BKP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 15-JUL-14


regards
Marcin

Tuesday, July 1, 2014

Don't delete your flashback logs manually

What happens when someone deletes Oracle flashback logs? You probably won't notice it until you try to flash back the database or bounce the instance.
There is no hope for flashback database without the flashback files, but there is still a way to start your database again without recovery or data loss.

Here is a scenario:
[oracle@dev-6 alert]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Jul 1 09:34:18 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option

SQL> select status from v$instance;

STATUS
------------------------------------------------
MOUNTED

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database

OK, opening the database doesn't work. So what happens when I disable flashback logging?
SQL> alter database flashback off;

Database altered.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database
Well, it still doesn't work - but what is the flashback state?
SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------------------------------------------------------------
RESTORE POINT ONLY

SQL> select * from v$restore_point;
select * from v$restore_point
              *
ERROR at line 1:
ORA-38701: Flashback database log 33 seq 476 thread 1:
"/u01/app/oracle/fast_recovery_area/DEV/flashback/o1_mf_9nq1wbon_.flb"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
OK, at least there is some information about the root cause - it is looking for the missing flashback files. Information about flashback database is kept inside the control file, so let's try to recreate the control file using a trace file:
SQL> alter database backup controlfile to trace as '/tmp/control.ctl';

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 1336176640 bytes
Fixed Size                  2253024 bytes
Variable Size             822087456 bytes
Database Buffers          503316480 bytes
Redo Buffers                8519680 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option
A backup of the control file has been created in a trace file and edited as follows:
[oracle@dev-6 tmp]$ vi control.ctl
CREATE CONTROLFILE REUSE DATABASE "DEV" NORESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/app/oracle/oradata/dev/redo01.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/u01/app/oracle/oradata/dev/redo02.log'  SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/u01/app/oracle/oradata/dev/redo03.log'  SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u01/app/oracle/oradata/dev/system01.dbf',
  '/u01/app/oracle/oradata/dev/sysaux01.dbf',
  '/u01/app/oracle/oradata/dev/undotbs01.dbf',
  '/u01/app/oracle/oradata/dev/users01.dbf',
  '/u01/app/oracle/oradata/dev/USER.dbf',
  '/u01/app/oracle/oradata/dev/DATA.dbf',
  '/u01/app/oracle/oradata/dev/DATA_INDEX.dbf',
  '/u01/app/oracle/oradata/dev/REFERENCE.dbf',
  '/u01/app/oracle/oradata/dev/REFERENCE_INDEX.dbf',
  '/u01/app/oracle/oradata/dev/ARCHIVE.dbf',
  '/u01/app/oracle/oradata/dev/apex01.dbf'
CHARACTER SET AL32UTF8
;

VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','DISK TO ''/nim_backup/backup/oracle/db/dev/%F''');
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 7 DAYS');

RECOVER DATABASE;

-- Block change tracking was enabled, so re-enable it now.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/u01/app/oracle/oradata/dev/bct_01.log' REUSE;

-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;

-- Database can now be opened normally.
ALTER DATABASE OPEN;

ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/app/oracle/oradata/dev/temp01.dbf' REUSE;
Let's try to create the new control files and open the database:
[oracle@dev-6 tmp]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Jul 1 09:43:45 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option

SQL> @control.ctl

Control file created.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Media recovery complete.

Database altered.


System altered.


Database altered.


Tablespace altered.


SQL> alter database flashback on;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------------------------------------------------------------
YES

SQL>
The database has been started and opened - all done.

I blogged about this because I have had to solve this problem several times, when due to space restrictions flashback logs were deleted manually by another DBA instead of disabling and re-enabling flashback on the database. Just keep in mind: if you need to release space in the FRA, don't delete flashback logs manually but turn off flashback on the database.
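The safe sequence is a simple toggle (in recent releases both statements can be run while the database is open; older ones require the MOUNT state to turn flashback back on):

SQL> alter database flashback off;   -- removes the flashback logs and releases FRA space
SQL> alter database flashback on;    -- starts collecting flashback logs again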

regards, 
Marcin

Thursday, June 26, 2014

Top Linux processes and Oracle

Today, due to an application bug, I had to check the mapping between Linux PIDs and Oracle sessions several times. With a small BASH script this is quite easy now:
#!/bin/bash

# ps doesn't show real-time CPU usage but an average - skewed for long-running processes
#TOP5=`ps xo pid --sort pcpu | tail -5 | xargs echo | sed -e 's/\s/,/g'`
# grab the five busiest PIDs from a single batch-mode top sample, comma-separated
TOP5=`top -b -d 1 -n 1 | head -12 | tail -5 | awk '{print $1;}' | xargs echo | sed -e 's/\s/,/g'`

sqlplus / as sysdba <<-EOF
        set linesize 200 pagesize 999
        select spid, s.username, s.program, sql_id, event from v\$session s, v\$process p where s.paddr = p.addr and spid in ($TOP5);
        exit
EOF
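The reverse lookup, from an Oracle session to its OS process, works the same way; for a known SID:

select p.spid
from v$session s, v$process p
where s.paddr = p.addr
and s.sid = 123;  -- replace 123 with the SID in question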


regards,
Marcin 

Tuesday, June 24, 2014

Rolling upgrades using a logical standby database

A couple of weeks ago there was a Twitter discussion, started by Martin Bach (@MartinDBA), about cases for logical standby implementation. A rolling upgrade was mentioned by Tim Gorman (@timothyjgormanas) as one of the potential recommendations for using this rarely used product. I was involved in such a project in the past: I prepared instructions and performed quite a large number of rolling upgrades from 11.1 to 11.2.

Here are a couple of my "gotchas":

  • Support for data types - make sure that all data types in your application are supported by logical standby
  • Support for Oracle features like reference partitioning or compression
  • Logging of all apply-related errors during the logical standby phase
  • Keep DML operations running on big data sets to a minimum - keep in mind that update tab1 set col1=2 will be translated into a separate update for every row in the table, and you really want to avoid that
  • Compatible parameter - if you are using flashback to roll back changes, you can only raise the COMPATIBLE parameter after the restore points have been removed (a short sketch follows this list)
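For the last point, a guaranteed restore point created before the upgrade is the usual safety net, and COMPATIBLE has to stay at the old value until it is dropped - a minimal sketch:

create restore point before_upgrade guarantee flashback database;
-- after the upgrade has been verified: drop the restore point, then raise COMPATIBLE
drop restore point before_upgrade;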
If you have checked that all your types and features are supported, this is the list of advantages you can get from a rolling upgrade:
  • Keep your application downtime low - in reality we had an average downtime of around 3 minutes (including an additional restart of the instance to remove the restore points and change the compatible parameter)
  • If you have problems with the upgrade, you can roll it back quite easily and revert the logical standby into a physical one
  • Your upgrade script can run longer, as your primary database is still running
  • After the upgrade you can have read-only access to your production database for tests if needed

There are two good Oracle white papers about rolling upgrades:

The first one is longer and requires more work, but it also gives you more control over the process. The second one is more automated and easier, but you have less control over the switchover time.

This is, I hope, the first post in a rolling upgrade series - in the next one I will post more details about the manual process.

Regards,
Marcin