Showing posts with label Transport Lag. Show all posts

Tuesday, April 21, 2026

How to Fix Data Guard Lag in Oracle 19c (Step-by-Step Troubleshooting Guide)


5 Proven Fixes for Transport Lag and Apply Lag with Exact SQL Commands
March 2026
Chetan Yadav, Senior Oracle & Cloud DBA
⏱️ Estimated Reading Time: 12 - 14 minutes
Missing SRLs, Parallel Apply, Network Compression, RMAN Conflict, Protection Mode Mismatch
Oracle Data Guard lag fix decision flowchart with quick diagnosis panel and fix reference table for Oracle 19c
⚙️ Environment Referenced

Oracle Database: 19.18.0.0.0 Enterprise Edition  •  Standby Type: Physical Standby (Active Data Guard)  •  Protection Mode: Maximum Availability (SYNC/AFFIRM)
Primary: 2-Node RAC, 4.8 TB OLTP  •  Network: Dedicated 1 GbE WAN, RTT 1.8 ms  •  Peak Load: 2,800 TPS

Data Guard lag is one of the most stressful production alerts a DBA receives. The standby is falling behind the primary. Every second of lag is a second of potential data loss if the primary fails right now. The pressure to fix it quickly is real.

The problem is that "Data Guard lag" is not one problem. It is five different problems that all show the same symptom. Applying the wrong fix wastes time and can make things worse. This guide gives you the exact decision path, the exact diagnostic queries, and the exact fix commands for each root cause, in the order you should check them.

Follow the steps in order. Each step either identifies your problem and gives you the fix, or clears that cause and moves you to the next. Most Data Guard lag issues are resolved within Steps 1 to 3.
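Before working through the fixes, it helps to see the raw lag numbers themselves. A minimal first check on the standby, assuming a 19c physical standby that is mounted or open read-only, might look like:

```sql
-- Run on the standby. V$DATAGUARD_STATS reports transport lag and
-- apply lag as interval strings (+DD HH:MM:SS), along with the time
-- each value was computed and the timestamp of the data it is based on.
SELECT name,
       value,
       time_computed,
       datum_time
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```

Transport lag tells you how far behind the standby is in *receiving* redo; apply lag tells you how far behind it is in *applying* it. Which of the two is large decides which branch of the troubleshooting path you take.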

Monday, April 6, 2026

Why Data Guard Lag Happens in Production: Sync, I/O and Network Deep Dive


6 Root Causes of Transport and Apply Lag, With Diagnostic SQL to Prove Each One
06 March 2026
Chetan Yadav, Senior Oracle & Cloud DBA
⏱️ Estimated Reading Time: 14 - 16 minutes
Transport Lag, Apply Lag, SYNC vs ASYNC, Network RTT, Standby I/O, MRP Apply Bottleneck
Oracle Data Guard lag root cause map showing 6 production causes across Primary Network and Standby layers with diagnostic metric reference table
⚙️ Production Environment Referenced

Oracle Database: 19.18.0.0.0 Enterprise Edition  •  Primary: 2-Node RAC, 4.8 TB OLTP  •  Standby: Physical Standby (Active Data Guard)
Protection Mode: Maximum Availability (SYNC/AFFIRM)  •  Network: Dedicated 1 GbE WAN, 120 km, RTT 1.8 ms
Peak Load: 2,800 TPS, 180 MB/sec redo generation  •  Application: Core banking transaction processing

The monitoring alert fires at 11:43 PM: "Data Guard apply lag exceeds 900 seconds." Transport lag is 180 seconds. Apply lag is 900 seconds. The standby is 15 minutes behind the primary. If the primary fails right now, 15 minutes of financial transactions are at risk.

This scenario happens in production Data Guard environments more often than most teams admit. The problem looks the same from the outside every time, but the root cause is completely different each time. Transport lag and apply lag each have different causes, different diagnostic queries, and different fixes. Treating them as the same problem wastes hours of investigation.

This guide covers all six real production causes of Data Guard lag, the exact SQL to identify each one, and the specific fix for each. No guesswork. Precise diagnosis first, then precise resolution.
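When apply lag dwarfs transport lag, as in the 180-second versus 900-second scenario above, the first thing to verify is whether redo apply is actually running and where it is. A quick sketch of that check on the standby, using the long-standing V$MANAGED_STANDBY view (still available in 19c, though Oracle now also offers V$DATAGUARD_PROCESS):

```sql
-- Run on the standby. Shows whether the managed recovery process (MRP0)
-- is applying redo, and which thread/sequence/block the receiver (RFS)
-- and applier processes are each positioned at.
SELECT process, status, thread#, sequence#, block#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%'
   OR  process LIKE 'RFS%';
```

If MRP0 is absent or stuck on WAIT_FOR_LOG while RFS is well ahead, the bottleneck is on the apply side, not the network.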

Monday, March 30, 2026

Why Data Guard Lag Happens in Production: Sync, I/O and Network Deep Dive


Six Root Causes of Transport and Apply Lag, With Diagnostic SQL to Prove Each One
30 March 2026
Chetan Yadav, Senior Oracle & Cloud DBA
⏱️ Estimated Reading Time: 14–16 minutes
Transport Lag • Apply Lag • SYNC vs ASYNC • Network RTT • Standby I/O • MRP Apply Bottleneck
Oracle Data Guard lag root cause architecture map showing 6 production causes across Primary Network and Standby layers
⚙️ Production Environment Referenced in This Article

Oracle Database: 19.18.0.0.0 Enterprise Edition  •  Primary: 2-Node RAC, 4.8 TB OLTP  •  Standby: Physical Standby (Active Data Guard)
Protection Mode: Maximum Availability (SYNC/AFFIRM)  •  Network: Dedicated 1 GbE WAN (120 km distance, RTT 1.8 ms)
Peak Load: 2,800 TPS, 180 MB/sec redo generation  •  Application: Core banking transaction processing

The alert arrives at 11:43 PM: "Data Guard apply lag exceeds 900 seconds." The DBA on call opens the monitoring dashboard. Transport lag is 180 seconds. Apply lag is 900 seconds. The standby is 15 minutes behind the primary. If the primary fails right now, 15 minutes of financial transactions could be at risk.

This scenario plays out in production Data Guard environments more often than most teams admit. Lag is not a single problem: it is six different problems that look identical from the outside. Transport lag and apply lag have different root causes, different diagnostic queries, and different fixes. Treating them as the same problem wastes hours of investigation time.

This guide covers every real cause of Data Guard lag I have diagnosed in production, the exact SQL to prove which one you are dealing with, and the specific fix for each. No guesswork. No generic advice about "check your network." Precise diagnosis first, then precise resolution.
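One number worth establishing before any network tuning is the primary's actual redo generation rate, since transport capacity has to be sized against it (the environment above peaks at 180 MB/sec). A rough hourly profile can be pulled from the archived log history; a sketch, assuming archive logging is enabled on the primary:

```sql
-- Run on the primary. Approximates redo volume per hour over the last
-- day from archived log sizes, for comparison against network bandwidth.
SELECT TRUNC(completion_time, 'HH24')             AS hour,
       ROUND(SUM(blocks * block_size) / 1048576)  AS redo_mb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 1
GROUP  BY TRUNC(completion_time, 'HH24')
ORDER  BY 1;
```

If peak-hour redo volume approaches what the WAN link can physically carry, no amount of apply-side tuning will close the transport gap.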