Since 12.2, the Managed Recovery Process (MRP) can be started on multiple instances, making standby recovery much faster and making it easier to keep the standby in sync with your primary database (found thanks to this old post: http://jeyaseelan-m.blogspot.com/2018/05/mira-multi-instance-redo-apply-12cr2.html).

This new feature is called MIRA (more info in the following PDF: https://www.oracle.com/technetwork/database/availability/redo-apply-2745943.pdf).

The process to enable it is very simple and can be done both from the Data Guard broker and from SQL*Plus.

Using SQL*Plus:

-- Start MRP on all available instances:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES ALL;

-- Start MRP on 2 instances:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION INSTANCES 2;
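
-- If redo apply is already running, it has to be cancelled before restarting it with a different INSTANCES clause; and restarting without the clause should fall back to the usual single-instance apply (a quick sketch of both):

-- Stop redo apply completely:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Restart with plain single-instance apply (no INSTANCES clause):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;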

Using DG broker:

-- Start MRP on all available instances:
edit database st122  set property ApplyInstances=all;

-- Start MRP on 2 instances:
edit database st122  set property ApplyInstances=2;
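
-- To confirm the current setting, you can query the property back from DGMGRL (in my tests the property name can be passed directly to SHOW DATABASE; the exact syntax can vary slightly between releases):
show database st122 ApplyInstances;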

To see what happens, we can check the alert log of both instances when we enable MRP on all instances.

Enable MRP on all instances of our two-instance 12.2 standby called st122:

[oracle@rac2-node1 ~]$ dgmgrl /
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Sun Jul 5 13:17:20 2020

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "st122"
Connected as SYSDG.
DGMGRL> show configuration

Configuration - dg_broker_config

  Protection Mode: MaxPerformance
  Members:
  db122 - Primary database
    st122 - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 36 seconds ago)

DGMGRL> edit database st122  set property ApplyInstances=all;
Property "applyinstances" updated

From the alert log, we can see how MRP is started and apply slaves are spawned on both instances:

== Instance1 Alertlog ==

Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT INSTANCES ALL NODELAY  <<<<<<<

2020-07-05T13:21:31.377133+01:00
Attempt to start background Managed Standby Recovery process (st1221)
Starting background process MRP0
2020-07-05T13:21:31.394269+01:00
MRP0 started with pid=114, OS id=17530 
2020-07-05T13:21:31.395218+01:00
MRP0: Background Managed Standby Recovery process started (st1221)
2020-07-05T13:21:36.714880+01:00
Started logmerger process on instance id 1
Started logmerger process on instance id 2
Starting Multi Instance Redo Apply (MIRA) on 2 instances 
WARNING! File header update interval to record checkpointsreset to default 5 minutes

2020-07-05T13:21:36.812621+01:00
Starting Multi Instance Redo Apply (MIRA) <<<<<<<

2020-07-05T13:21:36.968025+01:00
Managed Standby Recovery starting Real Time Apply
2020-07-05T13:21:37.438603+01:00
Reconfiguration started (old inc 12, new inc 14)
List of instances (total 2) :
 1 2
My inst 1   
 Global Resource Directory frozen

[....]

2020-07-05T13:21:39.051495+01:00
Started 8 apply slaves on instance id 1
2020-07-05T13:21:39.858557+01:00
Started 8 apply slaves on instance id 2
2020-07-05T13:21:42.381887+01:00

[....]

2020-07-05T13:21:42.928972+01:00
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT INSTANCES ALL NODELAY
2020-07-05T13:21:43.393395+01:00
RFS[3]: Selected log 111 for T-1.S-215 dbid 955512738 branch 1022154277
2020-07-05T13:21:43.423744+01:00
Archived Log entry 395 added for T-1.S-214 ID 0x38f3d69b LAD:1
2020-07-05T13:21:43.711488+01:00
Recovery of Online Redo Log: Thread 1 Group 111 Seq 215 Reading mem 0
  Mem# 0: +RECO/ST122/ONLINELOG/group_111.399.1043160849
2020-07-05T13:21:45.527259+01:00
RFS[5]: Selected log 122 for T-2.S-173 dbid 955512738 branch 1022154277
2020-07-05T13:21:45.613438+01:00
Archived Log entry 396 added for T-2.S-172 ID 0x38f3d69b LAD:1

== Instance2 Alertlog ==

2020-07-05T13:21:40.028955+01:00
Starting Multi Instance Redo Apply (MIRA) 
2020-07-05T13:21:42.376537+01:00
Media Recovery of Online Log [Thread=2, Seq=172]
2020-07-05T13:21:42.382467+01:00
Recovery of Online Redo Log: Thread 2 Group 121 Seq 172 Reading mem 0
  Mem# 0: +RECO/ST122/ONLINELOG/group_121.409.1043160909
2020-07-05T13:21:45.788680+01:00
Media Recovery of Online Log [Thread=2, Seq=173]
2020-07-05T13:21:45.794279+01:00
Recovery of Online Redo Log: Thread 2 Group 122 Seq 173 Reading mem 0
  Mem# 0: +RECO/ST122/ONLINELOG/group_122.408.1043160935

We can also see the slave processes:

[oracle@rac2-node1 ~]$ ps -ef |grep ora_pr |grep -v grep
oracle   17539     1  1 13:21 ?        00:00:22 ora_pr00_st1221
oracle   17552     1  1 13:21 ?        00:00:12 ora_pr02_st1221
oracle   17554     1  0 13:21 ?        00:00:10 ora_pr03_st1221
oracle   17556     1  0 13:21 ?        00:00:10 ora_pr04_st1221
oracle   17559     1  0 13:21 ?        00:00:10 ora_pr05_st1221
oracle   17561     1  0 13:21 ?        00:00:11 ora_pr06_st1221
oracle   17564     1  0 13:21 ?        00:00:10 ora_pr07_st1221
oracle   17566     1  0 13:21 ?        00:00:10 ora_pr08_st1221
oracle   17568     1  0 13:21 ?        00:00:10 ora_pr09_st1221
oracle   17571     1  0 13:21 ?        00:00:01 ora_pr0i_st1221
oracle   17573     1  0 13:21 ?        00:00:01 ora_pr0j_st1221
oracle   17575     1  0 13:21 ?        00:00:06 ora_pr0k_st1221
oracle   17577     1  0 13:21 ?        00:00:06 ora_pr0m_st1221
oracle   17581     1  1 13:21 ?        00:00:12 ora_pr0p_st1221
oracle   17584     1  1 13:21 ?        00:00:12 ora_pr0r_st1221
[oracle@rac2-node1 ~]$ 

[oracle@rac2-node2 ~]$ ps -ef |grep ora_pr |grep -v grep
oracle   21885     1  9 13:21 ?        00:01:57 ora_pr00_st1222
oracle   21899     1  0 13:21 ?        00:00:11 ora_pr01_st1222
oracle   21901     1  0 13:21 ?        00:00:10 ora_pr02_st1222
oracle   21903     1  0 13:21 ?        00:00:10 ora_pr03_st1222
oracle   21905     1  0 13:21 ?        00:00:10 ora_pr04_st1222
oracle   21907     1  0 13:21 ?        00:00:10 ora_pr05_st1222
oracle   21909     1  0 13:21 ?        00:00:10 ora_pr06_st1222
oracle   21911     1  0 13:21 ?        00:00:10 ora_pr07_st1222
oracle   21913     1  0 13:21 ?        00:00:10 ora_pr08_st1222
oracle   21919     1  1 13:21 ?        00:00:12 ora_pr09_st1222
oracle   21921     1  1 13:21 ?        00:00:12 ora_pr0a_st1222
oracle   21924     1  0 13:21 ?        00:00:06 ora_pr0b_st1222
oracle   21927     1  0 13:21 ?        00:00:06 ora_pr0c_st1222
oracle   21929     1  0 13:21 ?        00:00:01 ora_pr0d_st1222
oracle   21931     1  0 13:21 ?        00:00:01 ora_pr0e_st1222
[oracle@rac2-node2 ~]$ 
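
The same count of apply slaves can also be pulled from inside the database instead of grepping at the OS level; a sketch using gv$process (PNAME is the background process name, e.g. PR00):

-- Count the parallel recovery (PRnn) slaves per instance:
select inst_id, count(*) as pr_slaves
from gv$process
where pname like 'PR%'
group by inst_id
order by inst_id;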

One thing you can still notice is that the coordinator process (MRP0) runs on only one of the instances, and the DG broker still shows instance 1 as the apply instance:

DGMGRL> show database st122

Database - st122

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    st1221 (apply instance)
    st1222

Database Status:
SUCCESS

DGMGRL> 

[oracle@rac2-node1 ~]$ ps -ef |grep mrp |grep -v grep
oracle   17530     1  1 13:21 ?        00:00:22 ora_mrp0_st1221

[oracle@rac2-node2 ~]$ ps -ef |grep mrp |grep -v grep
[oracle@rac2-node2 ~]$ 

== Primary ==

13:56:07 db1221 > ALTER SYSTEM ARCHIVE LOG CURRENT;

System altered.

13:56:12 db1221 >

== Standby ==

13:57:26 st1221 > select inst_id, process, status MRP_stat, thread#, sequence#, block#, BLOCKS "Total Blocks"
  2  from gv$managed_standby 
  3  where (process like 'MRP%' or process like 'RFS%')
  4  and status != 'IDLE'
  5  order by inst_id,process,thread# ; 

   INST_ID    PROCESS        MRP_STAT    THREAD#    SEQUENCE#    BLOCK#    Total Blocks 
__________ __________ _______________ __________ ____________ _________ _______________ 
         1 MRP0       APPLYING_LOG             1          219         8          512000 

13:57:28 st1221 > 
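
Apply progress and throughput can also be checked with v$recovery_progress on the standby. A sketch (item names such as 'Active Apply Rate' are what I have seen in 12.2, but they can differ between versions):

select item, units, sofar
from v$recovery_progress
where item like '%Apply Rate%';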

Just be aware that MIRA needs some extra memory, so it is possible you will hit out-of-memory errors if your current SGA is not large enough.
To be fair, I faced these errors because my SGA was only 700 MB, so I was actually surprised the system was even running 🙂

2020-06-15T13:19:55.085048+01:00
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 770M
2020-06-15T13:19:55.085195+01:00
 Available system pagesizes:
  4K, 2048K 
2020-06-15T13:19:55.085352+01:00
 Supported system pagesize(s):
2020-06-15T13:19:55.085429+01:00

[....]

2020-06-15T13:38:18.188856+01:00
Starting Multi Instance Redo Apply (MIRA) 
2020-06-15T13:38:19.872934+01:00
Errors in file /u01/app/oracle/diag/rdbms/st122/st1221/trace/st1221_pr0e_7956.trc  (incident=41993):
ORA-04031: unable to allocate 1048616 bytes of shared memory ("shared pool","unknown object","krpm sender he","krpm rcvbuf buffer")
Incident details in: /u01/app/oracle/diag/rdbms/st122/st1221/incident/incdir_41993/st1221_pr0e_7956_i41993.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-06-15T13:38:21.399461+01:00
Multi Instance Redo Apply terminated with error 448
2020-06-15T13:38:21.423660+01:00
Errors in file /u01/app/oracle/diag/rdbms/st122/st1221/trace/st1221_pr00_7756.trc:
ORA-00448: normal completion of background process
2020-06-15T13:38:21.424496+01:00
Managed Standby Recovery not using Real Time Apply
2020-06-15T13:38:21.791193+01:00
Slave exiting with ORA-4031 exception
2020-06-15T13:38:21.791526+01:00
Errors in file /u01/app/oracle/diag/rdbms/st122/st1221/trace/st1221_pr0e_7956.trc:

[....]

 Fix write in gcs resources
2020-06-15T13:43:08.347797+01:00
Reconfiguration complete (total time 0.2 secs) 
2020-06-15T13:43:09.040568+01:00
Managed Standby Recovery starting Real Time Apply
2020-06-15T13:43:09.739229+01:00

* instance 2 validates domain 0 
2020-06-15T13:43:10.009826+01:00
Reconfiguration started (old inc 26, new inc 28)
List of instances (total 2) :
 1 2
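
The obvious fix for the ORA-04031 above is to give the standby instances more memory before re-enabling MIRA. A sketch (the 2G value is just an example sized for my small lab; size it for your own workload, and note the change needs an instance restart):

-- Increase the SGA on all standby instances via the spfile:
ALTER SYSTEM SET sga_max_size=2G SCOPE=SPFILE SID='*';
ALTER SYSTEM SET sga_target=2G SCOPE=SPFILE SID='*';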
Last modified: 8 August 2020
