Monday 22 April 2013

EXPDP with Oracle 10g & Oracle 11g with Advanced Compression

Data Pump in 11g has a useful feature that reduces the size of exports and the resources used on machines and tapes by compressing the dump files as the export runs.
In 10g, Data Pump could compress only metadata (COMPRESSION=METADATA_ONLY, the default); there was no option to compress the table data itself.
Example in 10g -
Exporting a test schema (ACE) using Data Pump -

nohup expdp \'/ as sysdba\' schemas=ACE directory=CRAMERDATA_EXP DUMPFILE=ACESCHEMABK_22-apr13.dmp LOGFILE=ACESCHEMABK_22-apr13.log &

Export took 50 minutes.
Export dump size was 20 GB.
The same data was then exported in 11g.
The only difference was adding compression=all to the expdp command (COMPRESSION=ALL requires the Advanced Compression option; the other valid values in 11g are DATA_ONLY, METADATA_ONLY and NONE).

nohup expdp \'/ as sysdba\' schemas=ACE directory=CRAMERDATA_EXP DUMPFILE=ACESCHEMABK_22-apr13.dmp LOGFILE=ACESCHEMABK_22-apr13.log compression=all &

Export took 15 minutes.
Export dump size was 6 GB.


In 11g the export took less than a third of the time and the dump used about 70% less space, which will tremendously reduce the amount of data moved off to tape, especially for databases where exports are taken daily.
Also, there is no need to use Unix pipes or to decompress the dumps before commencing the import, as Oracle takes care of that.
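
For example, the compressed dump from above can be imported back in directly (a minimal sketch; the log file name is made up, everything else is taken from the export example):

nohup impdp \'/ as sysdba\' schemas=ACE directory=CRAMERDATA_EXP DUMPFILE=ACESCHEMABK_22-apr13.dmp LOGFILE=ACESCHEMABK_22-apr13_imp.log &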

Related to this, for table compression itself the compression clause can be specified at the tablespace, table or partition level with the following options (illustrated in the sketch after this list):
NOCOMPRESS - The table or partition is not compressed. This is the default action when no compression clause is specified.
COMPRESS - This option is considered suitable for data warehouse systems. Compression is enabled on the table or partition during direct-path inserts only.
COMPRESS FOR DIRECT_LOAD OPERATIONS - This option has the same effect as the simple COMPRESS keyword.
COMPRESS FOR ALL OPERATIONS - This option is considered suitable for OLTP systems. As the name implies, this option enables compression for all operations, including regular DML statements. This option requires the COMPATIBLE initialization parameter to be set to 11.1.0 or higher. In 11gR2 this option has been renamed to COMPRESS FOR OLTP and the original name has been deprecated.
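
To illustrate the syntax (a sketch only; the table and tablespace names are made up):

-- Compression during direct-path inserts only (data warehouse style)
CREATE TABLE sales_dw (id NUMBER, amount NUMBER)
COMPRESS FOR DIRECT_LOAD OPERATIONS;

-- Compression for all operations, including regular DML (OLTP style);
-- in 11gR2 the same thing is written as COMPRESS FOR OLTP
CREATE TABLE orders_oltp (id NUMBER, amount NUMBER)
COMPRESS FOR ALL OPERATIONS;

-- Tablespace-level default, inherited by new tables created in it
CREATE TABLESPACE comp_data DATAFILE 'comp_data01.dbf' SIZE 100M
DEFAULT COMPRESS FOR ALL OPERATIONS;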

 

Friday 19 April 2013

ORA-16401: archivelog rejected by RFS

How do I fix the error "ORA-16401: archivelog rejected by RFS"?


Errors in file /oracle/app/oracle/admin/nms/udump/drnms1_rfs_17756.trc:
ORA-16401: archivelog rejected by RFS


Answer: The "ORA-16401: archivelog rejected by RFS" error is a noise error. Per the docs, no action is necessary; it is an informational statement recorded for diagnostic purposes. The full documentation entry reads:
ORA-16401: archivelog rejected by RFS

Cause: An attempt was made to re-archive an existing archivelog. This usually happens when more than one database (a primary, one or more standby databases, or both) is trying to archive to this standby database.

Action: See alert log and trace file for more details. No action is necessary; this is an informational statement provided to record the event for diagnostic purposes.
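
To see where the redo is coming from, it can help to check the archive destinations configured on each database that ships to this standby (a sketch; destination numbers and names will vary per environment):

SELECT dest_id, destination, status
FROM   v$archive_dest
WHERE  destination IS NOT NULL;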

 

Thursday 18 April 2013

Slave Behind Master in MySQL


To verify that MySQL is fully processing the SQL from the relay logs, try the following:

Check the network status with your Network Ops group and, if possible, allocate dedicated bandwidth to the MySQL replication process.

OR

STOP SLAVE IO_THREAD;
This will stop replication from downloading new entries from the master into its relay logs.
The other thread, known as the SQL thread, will continue processing the SQL statements it downloaded from the master.

When you run SHOW SLAVE STATUS\G, keep an eye on Exec_Master_Log_Pos, then run SHOW SLAVE STATUS\G again. If Exec_Master_Log_Pos has not moved after a minute, the SQL thread has caught up with everything in the relay logs and you can go ahead and run START SLAVE IO_THREAD;. This may reduce Seconds_Behind_Master.
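
Putting those steps together, run from the mysql client on the slave:

-- Stop only the I/O thread; the SQL thread keeps applying the relay logs
STOP SLAVE IO_THREAD;

-- Note the value of Exec_Master_Log_Pos
SHOW SLAVE STATUS\G

-- Wait a minute, then check again; if Exec_Master_Log_Pos is unchanged,
-- the SQL thread has caught up with the relay logs
SHOW SLAVE STATUS\G

-- Resume downloading new entries from the master
START SLAVE IO_THREAD;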
Other than that, there is really nothing you can do except to:

Trust replication
Monitor Seconds_Behind_Master (a quick way to watch it is sketched below)
Monitor Exec_Master_Log_Pos
Run SHOW PROCESSLIST; and take note of the SQL thread to see if it is processing long-running queries
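
A minimal way to watch both values from the shell (assuming a local mysql client with login credentials already configured, e.g. in ~/.my.cnf):

watch -n 5 'mysql -e "SHOW SLAVE STATUS\G" | grep -E "Seconds_Behind_Master|Exec_Master_Log_Pos"'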