Monday, May 4, 2026

ORA-16766 Error in Oracle Data Guard: Causes and Fix (19c Guide)


Redo Apply Service Not Running: How to Diagnose and Restart MRP in Under 5 Minutes
📅 4 May 2026
👤 Chetan Yadav, Senior Oracle & Cloud DBA
⏱️ 10-12 min read
Tags: ORA-16766, MRP Not Running, DGMGRL Diagnosis, MRP Restart, Standby Apply Fix, Oracle 19c
Full Error: ORA-16766: Redo Apply is stopped

This error appears in DGMGRL SHOW CONFIGURATION output against the standby database. It means the Managed Recovery Process (MRP) on the standby has stopped and redo is no longer being applied. The standby is diverging from the primary with every passing second.

It was 3:22 AM. The monitoring alert fired: "Data Guard configuration warning — ORA-16766 on standby." Apply lag had jumped from zero to 47 minutes in under an hour. The standby database was alive, connected, receiving redo, but not applying any of it.

ORA-16766 is one of the most common Oracle Data Guard errors in production Oracle 19c environments. It always means the same thing: the MRP process on the standby has stopped. But the reasons it stops vary widely, and each reason demands a different fix.

This guide covers every root cause of ORA-16766 in Oracle 19c, the exact DGMGRL and SQL commands to diagnose it, and the step-by-step fix commands for each scenario. Most ORA-16766 errors are resolved in under 5 minutes once you know which cause you are dealing with.

Figure 1: ORA-16766 Complete Reference, Root Causes, Diagnosis Queries, Fix Commands, Quick Decision Flow and Scenario Reference Table for Oracle 19c Data Guard

1. What ORA-16766 Means and Why It Is Critical

ORA-16766: Redo Apply is stopped means the Managed Recovery Process (MRP0) on the standby database has terminated. No redo is being applied. The standby is frozen at the point MRP stopped.

While ORA-16766 is active, every transaction committed on the primary is at risk if the primary fails. The apply lag is your exposure window. A 2-hour apply lag when the primary fails means up to 2 hours of data loss.

State                | What Is Happening                        | Data Loss Risk
MRP running normally | Standby applies redo, lag near zero      | Near zero (depends on protection mode)
ORA-16766 active     | MRP stopped, standby frozen, lag growing | Equal to the current apply lag
ORA-16766 + no SRLs  | MRP cannot start even if commanded       | Growing every minute until fixed
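To make that exposure window concrete, a monitoring script can convert the apply lag string that v$dataguard_stats reports (a "+days hours:minutes:seconds" value such as '+00 02:00:00') into seconds. A minimal Python sketch, assuming that format; the function name is illustrative, not part of any Oracle tooling:

```python
import re

def lag_to_seconds(lag: str) -> int:
    """Convert a v$dataguard_stats lag string such as '+00 02:00:00'
    (+days hours:minutes:seconds) into total seconds of exposure."""
    m = re.match(r'^\+?(\d+)\s+(\d+):(\d+):(\d+)$', lag.strip())
    if not m:
        raise ValueError(f"unrecognised lag format: {lag!r}")
    days, hours, minutes, seconds = (int(g) for g in m.groups())
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

print(lag_to_seconds('+00 00:47:00'))  # 2820, the 47-minute lag from the incident above
print(lag_to_seconds('+00 02:00:00'))  # 7200, a 2-hour exposure window
```

Feeding this value into an alerting threshold (for example, anything over 30 seconds) is far more actionable than eyeballing the raw interval string.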
Priority Level: Fix Immediately

ORA-16766 is not a warning to schedule for the next maintenance window. Every minute it remains unfixed is another minute of redo that the standby cannot apply. If the primary fails while ORA-16766 is active, you activate a standby that is not fully synchronised. Treat it as a P1 incident.

2. Step 1: Immediate Diagnosis in DGMGRL and SQL

Before applying any fix, run these diagnostic queries to identify the exact cause. Each cause has a different fix. Running the wrong command wastes time and can make things worse.

DGMGRL, Step 1 Diagnosis, Identify ORA-16766 Root Cause
-- DGMGRL Diagnosis (run from primary or standby server)
-- dgmgrl /

-- Step 1a: Full configuration status
SHOW CONFIGURATION;
-- Look for: ORA-16766 next to standby database name

-- Step 1b: Detailed standby database status
SHOW DATABASE VERBOSE standby_db;
-- Look for: ApplyState, LogXptStatus, StatusReport

-- Step 1c: Validate the standby
VALIDATE DATABASE standby_db;

-- SQL Diagnosis (run on STANDBY)

-- Step 2a: Check if MRP process is running
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%' OR process LIKE 'PR%'
ORDER  BY process;
-- If no rows = MRP not running = ORA-16766 confirmed

-- Step 2b: Check apply and transport lag
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag', 'apply finish time')
ORDER  BY name;

-- Step 2c: Check Standby Redo Logs (missing = root cause)
SELECT group#, thread#, bytes/1024/1024 AS size_mb, archived, status
FROM   v$standby_log
ORDER  BY thread#, group#;
-- No rows = SRLs missing, MRP cannot do real-time apply

-- Step 2d: Check protection mode vs effective level (PRIMARY)
SELECT protection_mode, protection_level
FROM   v$database;
-- If they differ, the primary fell back silently
Interpret Your Results:

No MRP0 in v$managed_standby = Root Cause 1 (MRP not started)
No rows in v$standby_log = Root Cause 2 (Missing SRLs)
DGMGRL ApplyState = OFF = Root Cause 3 (Broker disabled apply)
protection_level differs from protection_mode = Root Cause 4
Alert log shows MRP0 terminated after crash = Root Cause 5
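The interpretation list above is really a small decision tree. A hypothetical Python sketch of that triage order (the parameter names are illustrative, not Oracle APIs; a real script would populate them from the Step 1 queries):

```python
def diagnose_ora16766(mrp_running: bool,
                      srl_count: int,
                      broker_apply_state: str,
                      protection_mode: str,
                      protection_level: str,
                      crashed_recently: bool) -> str:
    """Map the Step 1 diagnostic results onto the root-cause sections below."""
    if broker_apply_state == 'OFF':
        return 'Root Cause 3: broker disabled apply, fix via DGMGRL'
    if srl_count == 0:
        return 'Root Cause 2: missing standby redo logs, create SRLs first'
    if protection_mode != protection_level:
        return 'Root Cause 4: protection mode conflict'
    if crashed_recently and not mrp_running:
        return 'Root Cause 5: MRP stopped after instance crash'
    if not mrp_running:
        return 'Root Cause 1: MRP stopped or never started, restart it'
    return 'MRP is running: re-check SHOW CONFIGURATION'
```

The sketch deliberately checks structural causes (broker state, missing SRLs, protection conflict) before the generic restart case, because restarting MRP without fixing those just reproduces the error minutes later.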

3. Root Cause 1: MRP Was Stopped or Never Started

Most Common: The standby is open, but MRP was manually cancelled during maintenance, stopped by a network timeout, or was never started when the standby was first opened.

SQL, Fix Root Cause 1, Start or Restart MRP
-- Run on STANDBY database

-- Option A: Real-time apply (RECOMMENDED when SRLs exist)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Option B: Without real-time apply (if SRLs missing)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  DISCONNECT FROM SESSION;

-- Option C: With parallel apply for faster lag catch-up
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  PARALLEL 8 USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Verify MRP started
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
-- Expected: MRP0 status = APPLYING or WAIT_FOR_LOG

In a 4.8 TB core banking RAC primary, ORA-16766 appeared after a maintenance window where MRP was cancelled to apply an OS patch and not restarted. Apply lag had grown to 2.4 hours by the time monitoring fired. Restarting with parallel apply degree 8 cleared the lag to zero in 18 minutes.


4. Root Cause 2: Missing Standby Redo Logs Prevent MRP from Starting

Silent Killer: If v$standby_log returns no rows, SRLs were never created. Without SRLs, USING CURRENT LOGFILE cannot function and MRP cannot start real-time apply, triggering immediate ORA-16766.

SQL, Fix Root Cause 2, Create Missing Standby Redo Logs
-- Step 1: Check online redo log size on PRIMARY
SELECT group#, bytes/1024/1024 AS size_mb, status
FROM   v$log
ORDER  BY group#;

-- Step 2: Create SRLs on STANDBY (same size, groups + 1 per thread)
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 ('+DATA') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 ('+DATA') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 ('+DATA') SIZE 500M;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 14 ('+DATA') SIZE 500M;

-- Step 3: Verify SRLs created
SELECT group#, thread#, bytes/1024/1024 AS size_mb, status
FROM   v$standby_log
ORDER  BY thread#, group#;

-- Step 4: Restart MRP with real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Step 5: Confirm real-time mode
SELECT process, status
FROM   v$managed_standby
WHERE  process = 'MRP0';
-- Expected: WAIT_FOR_LOG (real-time), not WAIT_FOR_GAP

5. Root Cause 3: Data Guard Broker Set Apply State to OFF

Broker Managed: If you manage Data Guard via DGMGRL, the broker controls the apply state. Direct ALTER DATABASE RECOVER SQL will work temporarily, but the broker will override it. The fix must go through DGMGRL.

DGMGRL, Fix Root Cause 3, Re-enable Apply via Broker
-- dgmgrl /

-- Check current apply state
SHOW DATABASE VERBOSE standby_db;
-- Look for: ApplyState = OFF

-- Enable the database if disabled
ENABLE DATABASE standby_db;

-- Set apply state to ON
EDIT DATABASE standby_db SET STATE='APPLY-ON';

-- Verify
SHOW CONFIGURATION;
-- Expected: SUCCESS, no ORA-16766

-- Cross-check on standby
SELECT process, status
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
Broker Overrides Direct SQL

When dg_broker_start = TRUE, the broker manages the apply state. If the broker has apply set to OFF, it will cancel MRP again shortly after a direct SQL restart. Always fix ORA-16766 through DGMGRL when the broker is enabled.

6. Root Cause 4: Protection Mode Conflict Causing MRP Abort

Silent Risk: Maximum Protection mode with an unstable standby connection can cause Oracle to abort MRP to prevent divergence, triggering ORA-16766. Check whether protection_level differs from protection_mode on the primary.

SQL + DGMGRL, Fix Root Cause 4, Resolve Protection Mode Conflict
-- Check on PRIMARY
SELECT protection_mode, protection_level
FROM   v$database;
-- If they differ, the primary fell back silently

-- Check archive destination error
SELECT dest_id, status, error, gap_status
FROM   v$archive_dest_status
WHERE  target = 'STANDBY'
AND    status != 'INACTIVE';

-- Lower protection mode temporarily if needed (dgmgrl /)
EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

-- Restart MRP on STANDBY
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Restore original mode once standby is stable
EDIT CONFIGURATION SET PROTECTION MODE AS MAXPROTECTION;

7. Root Cause 5: MRP Stopped After Standby Instance Crash

Post-Crash: After a standby OS reboot or crash, MRP does not restart automatically unless the broker or a startup trigger is configured. The standby opens, and ORA-16766 appears immediately.

SQL, Fix Root Cause 5, Post-Crash MRP Restart + Auto-Start Trigger
-- Verify standby state
-- (instance status comes from v$instance; v$database has no STATUS column)
SELECT database_role, open_mode FROM v$database;
SELECT status FROM v$instance;

-- Restart MRP
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Prevent recurrence: auto-start trigger on standby
CREATE OR REPLACE TRIGGER start_mrp_on_open
AFTER STARTUP ON DATABASE
DECLARE
  v_role v$database.database_role%TYPE;
BEGIN
  SELECT database_role INTO v_role FROM v$database;
  IF v_role = 'PHYSICAL STANDBY' THEN
    EXECUTE IMMEDIATE 'ALTER DATABASE RECOVER MANAGED STANDBY DATABASE '
                   || 'USING CURRENT LOGFILE DISCONNECT FROM SESSION';
  END IF;
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never block instance startup
END;
/
Broker Alternative: If Data Guard Broker is enabled, you do not need the trigger. The broker auto-restarts MRP after a standby restart if the state is APPLY-ON. Ensure dg_broker_start = TRUE in the standby spfile.

8. Verify the Fix: Confirm ORA-16766 Is Fully Resolved

SQL + DGMGRL, Complete Post-Fix Verification
-- 1. MRP running (STANDBY)
SELECT process, status, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
-- Expected: MRP0 = WAIT_FOR_LOG or APPLYING

-- 2. Apply lag dropping to zero
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');

-- 3. Protection level matches mode (PRIMARY)
SELECT protection_mode, protection_level
FROM   v$database;

-- 4. DGMGRL shows SUCCESS
-- SHOW CONFIGURATION;

-- 5. No archive gap (PRIMARY)
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
-- Expected: No rows
Full Resolution Checklist:

  • DGMGRL shows SUCCESS
  • MRP0 status is WAIT_FOR_LOG or APPLYING
  • Apply lag is dropping to zero
  • No rows in v$archive_gap
  • Protection level matches protection mode

All five = ORA-16766 fully resolved.
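The checklist lends itself to a single pass/fail gate in monitoring. An illustrative Python sketch (the inputs would come from the verification queries above; the function name is an assumption, not an Oracle API):

```python
def ora16766_resolved(dgmgrl_status: str,
                      mrp_status: str,
                      apply_lag_seconds: int,
                      archive_gap_rows: int,
                      protection_mode: str,
                      protection_level: str) -> bool:
    """True only when all five checklist items hold at once."""
    return (dgmgrl_status == 'SUCCESS'
            and mrp_status in ('WAIT_FOR_LOG', 'APPLYING')
            and apply_lag_seconds == 0
            and archive_gap_rows == 0
            and protection_mode == protection_level)

# All five checks pass: safe to close the incident
print(ora16766_resolved('SUCCESS', 'WAIT_FOR_LOG', 0, 0,
                        'MAXIMUM AVAILABILITY', 'MAXIMUM AVAILABILITY'))  # True
```

Treating the five checks as a single boolean keeps the incident from being closed on a partial fix, such as MRP running but a protection-level fallback still in place.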

9. Prevent ORA-16766 from Recurring

Configuration Hardening Checklist

  • Standby Redo Logs created: same size as online redo logs, groups + 1 per thread
  • Data Guard Broker enabled with dg_broker_start = TRUE on both nodes
  • MRP startup trigger created on standby (if not using broker auto-restart)
  • Protection mode set to Maximum Availability unless Zero Data Loss is mandated
  • Monitoring alert for ORA-16766 in alert log with 5-minute response SLA
  • DGMGRL VALIDATE DATABASE run weekly
  • Apply lag alert threshold set at 30 seconds
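The SRL rule in the first checklist item (same size as the online redo logs, one more group than online groups, per thread) reduces to simple arithmetic. A small sketch, with an illustrative function name:

```python
def required_srls(online_groups_per_thread: int, threads: int = 1) -> int:
    """Recommended standby redo log count: (online groups + 1) per thread."""
    return (online_groups_per_thread + 1) * threads

# Single-instance primary with 3 online groups needs 4 SRLs
# (matching the four groups 11-14 created in the Root Cause 2 fix)
print(required_srls(3))     # 4
# Two-thread RAC primary with 3 groups per thread needs 8 SRLs
print(required_srls(3, 2))  # 8
```

The extra group per thread gives the RFS process a free SRL to switch into while the previous one is still being archived, which is what keeps real-time apply from stalling.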
SQL, Proactive ORA-16766 Detection (Run Every 5 Minutes)
-- MRP health check on STANDBY
SELECT CASE
         WHEN COUNT(*) = 0
         THEN 'ALERT: MRP not running, ORA-16766 likely'
         ELSE 'OK: MRP running'
       END AS mrp_status
FROM   v$managed_standby
WHERE  process = 'MRP0'
AND    status NOT IN ('IDLE');

-- Apply lag alert
-- (note: the interval literal must be DAY TO SECOND for the 'D HH:MI:SS' format)
SELECT name,
       value,
       CASE
         WHEN TO_DSINTERVAL(value) > INTERVAL '0 00:00:30' DAY TO SECOND
         THEN 'ALERT: ' || name || ' exceeds 30 seconds'
         ELSE 'OK'
       END AS lag_status
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag')
ORDER  BY name;

FAQ: ORA-16766 in Oracle Data Guard

What does ORA-16766 mean in Oracle Data Guard?
ORA-16766 means Redo Apply is stopped. The MRP0 process on the standby has terminated and no redo is being applied. The standby is frozen at the SCN when MRP stopped. It does not mean the standby is down or the network is broken, only that MRP has stopped.
How do I fix ORA-16766 in Oracle 19c Data Guard?
The fastest fix for most cases: on the standby, run ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION. If using the Broker: DGMGRL> EDIT DATABASE standby_db SET STATE='APPLY-ON'. If MRP still does not start, check v$standby_log; if it is empty, create SRLs first.
Why does ORA-16766 keep coming back after restart?
Recurring ORA-16766 usually means: (1) Broker has apply set to OFF and keeps cancelling MRP — fix via DGMGRL. (2) Missing SRLs cause MRP to fail on start. (3) Maximum Protection mode with unstable standby connection causes repeated abort. Check which one applies using DGMGRL SHOW DATABASE VERBOSE.
Does ORA-16766 mean data loss has already occurred?
No. ORA-16766 means data loss has not yet occurred but the risk window is open. The standby is not applying redo, so if the primary fails, you lose redo generated since MRP stopped. Fix it immediately. Every minute of delay increases your exposure.
Can ORA-16766 appear while the standby is still receiving redo?
Yes. This is the missing SRL pattern. The RFS process receives and archives redo normally (transport lag = zero) but MRP cannot start real-time apply without SRLs. So the standby receives redo but apply lag grows and ORA-16766 appears in DGMGRL.

Dealing with a different Data Guard error?

Drop a comment with your exact error code and DGMGRL output. I will help you diagnose and fix it.


About the Author

Chetan Yadav

Chetan Yadav is a Senior Oracle, PostgreSQL, MySQL, and Cloud DBA with 15+ years of hands-on experience managing mission-critical production databases across finance, healthcare, and e-commerce sectors.

Chetan has resolved ORA-16766 and related Data Guard errors across multiple production RAC and single-instance environments. His writing focuses on exact diagnosis and exact fix commands, not generic advice.

This blog covers real-world DBA problems, cloud architecture, and practical learning, not theoretical documentation or vendor marketing.

1 comment:

  1. DG

    One pattern I see repeatedly in production: ORA-16766 appears
    silently after a standby maintenance window where the DBA cancels
    MRP to apply an OS patch but forgets to restart it. By the time
    monitoring detects the apply lag, the standby is already hours
    behind the primary.

    The fix is straightforward — ALTER DATABASE RECOVER MANAGED
    STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT — but the
    critical step most DBAs miss is checking v$standby_log first.
    If Standby Redo Logs are missing, MRP will fail to start even
    after the correct restart command.

    Always verify with SHOW DATABASE VERBOSE in DGMGRL after
    restarting. If ORA-16766 returns within minutes, the broker
    state is overriding your manual fix.
