
Monday, March 9, 2026

Oracle RAC Internals Explained: Cache Fusion and Cluster Design Lessons


Real Production High Availability Architecture and Clustering Deep Dive
 March 09, 2026
 Chetan Yadav — Senior Oracle & Cloud DBA
⏱️ Estimated Reading Time: 12–13 minutes
[Figure: Oracle RAC 4-node cluster architecture showing Cache Fusion, GCS, GES, 10GbE private interconnect, and shared ASM storage]
⚙️ Test Environment

Oracle Database: 19.18.0.0.0 Enterprise Edition  •  Cluster: 4-Node Oracle RAC on Oracle Linux 8.7
Storage: Oracle ASM, 12 TB shared (Normal Redundancy)  •  DB Size: 8.2 TB (6.8 TB data + 1.4 TB indexes)
Workload: Mixed OLTP/Batch  •  Peak Load: 3,200 concurrent sessions, 2,400 TPS
Interconnect: Dual 10GbE bonded private network  •  Application: Financial transaction processing system

3:47 AM. Pager alert: "RAC Node 2 evicted — cluster performance degraded." I logged into the surviving node running Oracle Database 19.18.0.0.0. The cluster had automatically failed over, but performance had collapsed. What should have been 2,400 transactions per second was now limping at 900 TPS.

I checked interconnect statistics immediately. The gc cr block receive time averaged 247 milliseconds — it should be under 1 millisecond. This wasn't a failed-node problem; this was network infrastructure failure. The private interconnect switch had undergone a firmware upgrade during the maintenance window. The new firmware version had a packet forwarding bug causing random 200ms+ delays in Cache Fusion block transfers. Applications were technically connected, but every cross-node block request was timing out and retrying. We initiated emergency failover to the DR site while network engineering rolled back the switch firmware.

Oracle RAC is not just "multiple databases sharing storage." It's a distributed cache coherency system where every node maintains its own buffer cache, but all nodes must coordinate which version of each data block is current. Cache Fusion is the mechanism that makes this work — transferring blocks between nodes over the private interconnect instead of forcing disk writes. Understanding this is the difference between an operational RAC cluster and a ticking time bomb.

This guide covers real Oracle RAC internals: how Cache Fusion actually works, why interconnect design matters more than CPU, what causes split-brain scenarios, and the production lessons learned from managing RAC clusters that can't afford downtime.

1. RAC Architecture Fundamentals: Beyond the Marketing

Oracle RAC is sold as "high availability and scalability." Reality is more nuanced.

What RAC Actually Provides

Capability | Reality | Common Misconception
High Availability | Survives single node failure | "Zero downtime" — not true during network failures
Scalability | Read scaling works well | "Linear scaling" — write workloads don't scale linearly
Load Balancing | Distributes connections | "Automatic query routing" — the application must handle routing
Maintenance | Rolling patches possible | "No downtime patches" — some patches still require an outage

Core RAC Components

Every RAC cluster requires:

  • Shared Storage: ASM or certified cluster filesystem — all nodes access the same datafiles
  • Private Interconnect: Dedicated network for Cache Fusion messages (1 Gbps minimum, 10 Gbps+ recommended)
  • Voting Disks: Quorum mechanism to prevent split-brain (typically 3 or 5)
  • OCR (Oracle Cluster Registry): Cluster configuration database
  • Clusterware: Grid Infrastructure managing node membership and resources
SQL — Verify RAC Configuration
-- Check cluster database status
-- Verify RAC instances
SELECT inst_id, instance_name, host_name, status
FROM   gv$instance
ORDER  BY inst_id;

-- Check cluster interconnect configuration
SELECT inst_id, name, ip_address
FROM   gv$cluster_interconnects
ORDER  BY inst_id;
Oracle Licensing Note

The queries in this article use dynamic performance views (v$ and gv$ views) which are available in all Oracle Database editions without additional licensing. When analyzing historical performance data, AWR and ASH queries require the Oracle Diagnostics Pack license. For unlicensed environments, use Statspack (free) or real-time v$ views as shown above.

Single Instance vs RAC: Architectural Differences

Single Instance:

  • One SGA, one buffer cache
  • No coordination overhead
  • Simple lock management
  • Straightforward troubleshooting

RAC Cluster:

  • Multiple SGAs — one per node
  • Cache Fusion coordination required
  • Global lock management via GES
  • Complex distributed troubleshooting

2. Cache Fusion Explained: How Blocks Move Between Nodes

Cache Fusion is Oracle's distributed shared cache architecture used in Oracle Real Application Clusters (RAC). It was fully introduced with Oracle RAC in Oracle 9i, replacing the disk-based block pinging architecture used in earlier Oracle Parallel Server (OPS) environments.

Instead of forcing modified blocks to be written to disk before another instance reads them, RAC transfers blocks directly between instance buffer caches over the private interconnect. This memory-to-memory block transfer dramatically reduces latency compared with disk-based synchronization.

The Problem Cache Fusion Solves

Without Cache Fusion (Oracle Parallel Server 8i architecture):

  1. Node 1 modifies block 1234567 in its buffer cache (8 KB block size)
  2. Node 2 requests the same block for a SELECT query
  3. Node 1 must write the dirty block to shared storage (LGWR flushes the redo first, then DBWR writes the block)
  4. Node 2 reads the block from disk via db file sequential read wait event
  5. Result: Forced disk I/O averaging 8–15 ms latency (ping-pong effect)
  6. Scalability ceiling: 2–3 nodes maximum due to I/O contention

With Cache Fusion (Oracle 19.18.0.0.0 RAC):

  1. Node 1 holds dirty block 1234567 in buffer cache (current mode)
  2. Node 2 requests the block via Global Cache Services message
  3. GCS coordinates transfer — Node 1 identified as master for this resource
  4. Node 1 ships the block directly over the private interconnect (10 GbE)
  5. Transfer completes in 0.5–2.0 milliseconds (10x faster than disk)
  6. Node 2 receives the block in its buffer cache without disk I/O
  7. Result: Memory-to-memory transfer; disk write deferred until checkpoint
  8. Scalability: Proven deployments up to 16+ nodes in production

Cache Fusion Block Transfer Modes

Current Mode Block Transfer (gc current): When a session requests the most recent version of a block for UPDATE or DELETE operations, Oracle transfers the current mode block. In our 19.18.0.0.0 production RAC environment with 10 GbE interconnect, current mode transfers average 1.2 ms during peak load. If the block is dirty, the owning instance retains a past image (PI) for instance crash recovery purposes.

Consistent Read Mode Block Transfer (gc cr): For SELECT queries requiring read consistency, Oracle may construct consistent read (CR) versions of blocks using undo data. In our testing on Oracle 19.18.0.0.0, CR block transfers show slightly higher latency (1.5–2.0 ms average) because they may require block reconstruction from multiple undo records before transfer. The gc cr block receive time statistic in v$sysstat directly measures this latency.
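The latency math behind these metrics is simply cumulative wait time divided by wait count. A minimal Python sketch of the same calculation the gv$ queries perform (the input numbers are illustrative, not from a real cluster):

```python
# Hypothetical helper mirroring the latency arithmetic used against
# gv$system_event / gv$sysstat counters:
#   average latency (ms) = cumulative wait time (microseconds)
#                          / number of waits / 1000
def avg_gc_latency_ms(time_waited_micro: int, total_waits: int) -> float:
    """Average Cache Fusion latency in ms; 0.0 when there were no waits
    (the Python equivalent of NULLIF(total_waits, 0) in the SQL)."""
    if total_waits == 0:
        return 0.0
    return round(time_waited_micro / total_waits / 1000, 2)

# Example: 1,800,000 us of waits over 1,500 block transfers
print(avg_gc_latency_ms(1_800_000, 1_500))  # 1.2 (ms) -- healthy 10 GbE
```

Because the underlying counters are cumulative since instance startup, a real investigation should difference two snapshots rather than use the raw totals.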

Cache Fusion Wait Events in Oracle 19.18.0.0.0

Wait Event | Description | Typical Latency | Production Impact
gc current block 2-way | Current block transfer between 2 instances | 0.5–2.0 ms (10 GbE); 3–8 ms (1 GbE) | Most common; acceptable if under 2 ms average
gc current block 3-way | Block transfer requiring 3-instance coordination | 1.5–4.0 ms (10 GbE) | Higher cost; occurs when block has past images on multiple nodes
gc cr block 2-way | Consistent read block constructed and transferred | 1.0–2.5 ms | Read-heavy workloads; check undo contention if high
gc current block busy | Waiting for in-flight block transfer to complete | Variable | Hot block contention; redesign needed if persistent
gc buffer busy acquire | Multiple sessions contending for the same buffer | Variable | Severe: indicates the same block being modified by multiple nodes simultaneously
SQL — Calculate Real-Time Cache Fusion Efficiency (Oracle 19c)
-- Cache Fusion latency analysis per instance
-- Run this during a performance investigation
SELECT inst_id,
       ROUND(
         (SELECT SUM(time_waited_micro) FROM gv$system_event
          WHERE  event LIKE 'gc cr block%way' AND inst_id = s.inst_id)
         / NULLIF(
             (SELECT SUM(total_waits) FROM gv$system_event
              WHERE  event LIKE 'gc cr block%way' AND inst_id = s.inst_id), 0)
         / 1000, 2) AS avg_gc_cr_latency_ms,
       ROUND(
         (SELECT SUM(time_waited_micro) FROM gv$system_event
          WHERE  event LIKE 'gc current block%way' AND inst_id = s.inst_id)
         / NULLIF(
             (SELECT SUM(total_waits) FROM gv$system_event
              WHERE  event LIKE 'gc current block%way' AND inst_id = s.inst_id), 0)
         / 1000, 2) AS avg_gc_current_latency_ms
FROM   gv$instance s
ORDER  BY inst_id;
SQL — Identify Hot Blocks Causing Excessive Cache Fusion Transfers
-- Identify hot blocks causing excessive transfers
SELECT o.object_name, o.object_type,
       c.file#, c.block#, c.class#, c.status,
       COUNT(*) AS contention_count
FROM   gv$bh c
JOIN   dba_objects o ON c.objd = o.data_object_id
WHERE  c.status IN ('xcur', 'scur', 'cr', 'read')
AND    c.forced_reads > 10
GROUP  BY o.object_name, o.object_type, c.file#, c.block#, c.class#, c.status
HAVING COUNT(*) > 5
ORDER  BY contention_count DESC
FETCH FIRST 20 ROWS ONLY;
Real Production Example — Our 19.18 RAC Cluster:

During peak batch processing at 11 PM, we observed gc current block 2-way latency spike to 12 ms (baseline 1.2 ms). Analysis revealed the batch job was performing mass updates on a single table with a right-growing index (order_id sequence). All four RAC instances were contending for the rightmost leaf block of the index.

Solution: We partitioned the index by range and implemented four separate sequences with CACHE 1000 and NOORDER, eliminating cross-instance coordination on ID generation. Post-change, gc current latency returned to baseline 1.3 ms and batch completion time dropped from 4.2 hours to 2.8 hours.
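The per-node sequence fix works because each instance inserts into a disjoint key range, so no two nodes ever contend for the same index leaf block. A hypothetical sketch of the ID scheme (the stride value and function names are illustrative, not what we ran in production):

```python
# Illustrative sketch (not Oracle syntax): give each RAC node its own
# non-overlapping ID range so inserts from different nodes land in
# different parts of the index instead of one rightmost leaf block.
NODES = 4
STRIDE = 1_000_000_000  # assumed per-node range width

def next_order_id(node_id: int, local_counter: int) -> int:
    """Node n hands out n*STRIDE, n*STRIDE+1, ... so the key ranges
    generated by different nodes never interleave."""
    return node_id * STRIDE + local_counter

# Nodes 1 and 2 generate IDs a billion apart -> separate leaf blocks
print(next_order_id(1, 0))  # 1000000000
print(next_order_id(2, 0))  # 2000000000
```

The same effect can be had in Oracle with per-node sequences using START WITH offsets, or by hash-partitioning the index so sequential keys spread across partitions.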
SQL — Monitor Cache Fusion Wait Events
-- Cache Fusion wait events across all instances
SELECT inst_id, event, total_waits, time_waited,
       ROUND(average_wait, 3) AS avg_wait_ms
FROM   gv$system_event
WHERE  event LIKE 'gc%'
AND    total_waits > 0
ORDER  BY time_waited DESC;

-- Interconnect transfer rates per instance
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN (
         'gcs messages sent',
         'ges messages sent',
         'global cache blocks received',
         'global cache blocks served')
ORDER  BY inst_id, name;

3. Global Cache Services (GCS) and Global Enqueue Services (GES)

GCS and GES are the coordination layers that make RAC work.

Global Cache Services (GCS)

Responsibilities:

  • Tracks which node holds which blocks
  • Maintains block ownership information
  • Coordinates block transfers between nodes
  • Manages cache coherency across the cluster

Global Enqueue Services (GES)

Responsibilities:

  • Manages global enqueues across the RAC cluster
  • Coordinates locking for shared database resources
  • Ensures consistent lock state across all instances
  • Maintains global enqueue structures for cluster coordination
SQL — GCS/GES Resource Distribution
-- Blocked global enqueues across the cluster
SELECT inst_id, resource_name, current_mode, blocked
FROM   gv$ges_enqueue
WHERE  blocked = 1;

-- GCS latch statistics per instance
SELECT inst_id, name, gets, misses, sleeps
FROM   gv$latch
WHERE  name LIKE '%cache%'
ORDER  BY gets DESC;

Resource Mastering

Each resource (block, lock) has a master node responsible for coordinating access.

Master node responsibilities:

  • Tracks current owner of the resource
  • Grants access to requesting nodes
  • Maintains resource state information

Remastering occurs when:

  • A node joins or leaves the cluster
  • Resource access patterns change significantly
  • Manual remastering is triggered by DBA
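Conceptually, resource mastering behaves like a deterministic mapping from resource identity to a live node, recomputed whenever cluster membership changes. A hypothetical sketch (Oracle's actual mastering algorithm is more sophisticated and access-pattern aware; the hash scheme here is purely illustrative):

```python
# Illustrative sketch: model GCS-style resource mastering as a hash of
# the resource ID onto the list of live nodes. "Remastering" is then
# just re-evaluating the mapping after a membership change.
import zlib

def master_of(resource_id: str, live_nodes: list) -> int:
    """Deterministically map a resource (e.g. a block address) to the
    node that will coordinate access to it."""
    return live_nodes[zlib.crc32(resource_id.encode()) % len(live_nodes)]

block = "file=7,block=1234567"
print(master_of(block, [1, 2, 3, 4]))  # master with all four nodes up
print(master_of(block, [1, 3, 4]))     # possibly remastered after node 2 leaves
```

The key property this captures: every surviving node computes the same master for every resource without consulting the others, which is what lets the cluster rebuild mastering state quickly after an eviction.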

4. Cluster Interconnect: The Most Critical Component

The interconnect is the most important part of RAC. If the interconnect fails, the cluster fails.

Interconnect Requirements

Metric | Minimum | Recommended | Why It Matters
Bandwidth | 1 Gbps | 10+ Gbps | Cache Fusion throughput
Latency | < 5 ms | < 1 ms | Block transfer speed
Packet Loss | < 1% | < 0.1% | Message reliability
Redundancy | Single path | Bonded NICs | Failover capability

Common Interconnect Problems

  • Shared switches: Interconnect traffic mixed with public traffic
  • Insufficient bandwidth: 1 Gbps is not enough for high-transaction workloads
  • High latency: Geographic distance between nodes (> 1 ms)
  • Single point of failure: One switch, one cable
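Before blaming hardware, it is worth sanity-checking whether the link can even carry the Cache Fusion traffic. A back-of-envelope estimate in Python, with assumed workload numbers (the blocks-served rate is hypothetical, not measured):

```python
# Back-of-envelope interconnect sizing: estimate Cache Fusion traffic
# from block transfer rate and block size, then compare to link capacity.
BLOCK_SIZE_BYTES = 8192      # standard 8 KB database block
blocks_per_sec = 40_000      # assumed global cache blocks served/sec

traffic_gbps = blocks_per_sec * BLOCK_SIZE_BYTES * 8 / 1e9
print(round(traffic_gbps, 2))  # 2.62 -> already saturates a 1 GbE link

def link_ok(traffic_gbps: float, link_gbps: float, headroom: float = 0.5) -> bool:
    """Keep steady-state utilization under ~50% to absorb bursts."""
    return traffic_gbps <= link_gbps * headroom

print(link_ok(traffic_gbps, 1.0))   # False -- 1 GbE is undersized
print(link_ok(traffic_gbps, 10.0))  # True  -- 10 GbE has headroom
```

Real sizing should use the 'global cache blocks served' and 'gcs messages sent' deltas from gv$sysstat on your own cluster; the point of the sketch is only that block traffic alone can exceed 1 Gbps well before CPU becomes a constraint.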
SQL — Diagnose Interconnect Issues
-- Interconnect latency check across all nodes
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name LIKE '%gc cr block receive time%'
OR     name LIKE '%gc current block receive time%'
ORDER  BY inst_id;

-- Calculate average interconnect latency per node
-- Note: gv$sysstat receive-time statistics are in centiseconds,
-- so multiply by 10 to express the average in milliseconds
SELECT inst_id,
       ROUND(
         (SELECT value FROM gv$sysstat
          WHERE  name = 'gc cr block receive time' AND inst_id = s.inst_id) * 10
         / NULLIF(
             (SELECT value FROM gv$sysstat
              WHERE  name = 'gc cr blocks received' AND inst_id = s.inst_id), 0),
         2) AS avg_cr_latency_ms
FROM   gv$instance s
ORDER  BY inst_id;

Interconnect Design Best Practices

  • Dedicated network: Separate from public and backup networks
  • 10 Gbps minimum: For all production workloads
  • Low-latency switches: Purpose-built for interconnect traffic
  • NIC bonding: Redundant paths for automatic failover
  • Jumbo frames: MTU 9000 for better throughput

5. Split-Brain Scenarios and Voting Disk Protection

Split-brain is the nightmare scenario where a cluster partitions and both sides believe they are primary.

What is Split-Brain?

Consider a 3-node RAC cluster running normally. If a network partition occurs (the interconnect fails), Node 1 can no longer reach Nodes 2 and 3. Each side believes the other has failed, and each attempts to continue as the surviving cluster. If both sides write to shared storage simultaneously, the result is data corruption.

How Voting Disks Prevent Split-Brain

Voting disks implement a quorum mechanism:

  • Typically 3 or 5 voting disks are configured
  • A node must access a majority of voting disks to survive
  • With 3 voting disks, a node needs access to at least 2
  • With 5 voting disks, a node needs access to at least 3
  • The losing side evicts itself automatically — no manual intervention required
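The quorum rule itself is simple majority arithmetic. A minimal sketch of the survival check (a deliberate simplification of what CSS actually evaluates, which also includes network heartbeats):

```python
# Minimal sketch of the voting-disk quorum rule: a node survives only
# if it can still access a strict majority of the voting disks.
def survives(accessible: int, total_voting_disks: int) -> bool:
    """True when the node sees more than half of the voting disks."""
    return accessible > total_voting_disks // 2

print(survives(2, 3))  # True  -> node stays in the cluster
print(survives(1, 3))  # False -> node evicts itself
print(survives(3, 5))  # True
print(survives(2, 5))  # False
```

The strict-majority requirement is why voting disk counts are odd: with an even count, a clean half/half split would leave neither side with a majority and both sides would evict.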
Bash — Check Voting Disk and Cluster Status
# Check voting disk configuration
crsctl query css votedisk

# Verify OCR configuration
ocrcheck

# Check overall cluster status across all nodes
crsctl check cluster -all

Node Eviction Process

When a node is evicted, the following sequence occurs:

  1. Cluster detects node unresponsiveness (missed heartbeats)
  2. Voting disk quorum check fails for that node
  3. Clusterware initiates an immediate node reboot
  4. The instance crashes (immediate termination — no graceful shutdown)
  5. Surviving nodes perform instance recovery from redo logs
  6. Applications reconnect automatically to surviving nodes

6. RAC Performance Tuning: What Actually Matters

RAC tuning is different from single-instance tuning. The metrics that matter most are cluster-specific.

Key RAC-Specific Metrics

Metric | Good Value | Problem Threshold | Action
GC CR block receive time | < 1 ms | > 5 ms | Check interconnect hardware
GC current block busy | < 1% of waits | > 5% of waits | Reduce hot blocks
Blocks received (per node) | Balanced across nodes | Skewed to one node | Fix application routing
Cache transfers | < 10% of reads | > 30% of reads | Partition data or workload
SQL — Comprehensive RAC Health Check
-- RAC performance report: CR and Current block latency per node
-- Note: gv$sysstat receive-time statistics are in centiseconds,
-- so multiply by 10 to express the averages in milliseconds
SELECT inst_id,
       'CR Block Receive Time (ms)' AS metric,
       ROUND(
         (SELECT value FROM gv$sysstat
          WHERE  name = 'gc cr block receive time' AND inst_id = i.inst_id) * 10
         / NULLIF(
             (SELECT value FROM gv$sysstat
              WHERE  name = 'gc cr blocks received' AND inst_id = i.inst_id), 0),
         2) AS value
FROM   gv$instance i
UNION ALL
SELECT inst_id,
       'Current Block Receive Time (ms)',
       ROUND(
         (SELECT value FROM gv$sysstat
          WHERE  name = 'gc current block receive time' AND inst_id = i.inst_id) * 10
         / NULLIF(
             (SELECT value FROM gv$sysstat
              WHERE  name = 'gc current blocks received' AND inst_id = i.inst_id), 0),
         2)
FROM   gv$instance i
ORDER  BY inst_id, metric;

Common RAC Performance Problems

1. Hot Blocks
A single block being accessed by multiple nodes simultaneously causes excessive Cache Fusion traffic. Solution: partition data, use sequences wisely, avoid right-growing indexes.

2. Unbalanced Load
One node handling 80% of the workload while others are underutilized. Solution: fix application-level connection distribution and service definitions.

3. Interconnect Saturation
Cache Fusion messages exceeding available bandwidth causes latency to increase dramatically. Solution: upgrade interconnect to 10 GbE or 25 GbE; reduce unnecessary block transfers through workload partitioning.

7. Real Production Failures and Lessons Learned

These are actual RAC incidents from production environments.

Incident 1: Switch Firmware Causes Mass Eviction

Network team upgraded switch firmware during the maintenance window. The new firmware had a bug causing random packet drops. The cluster detected node unresponsiveness, and all 4 nodes evicted themselves simultaneously — complete cluster failure.

Lesson: Never trust network changes without extended interconnect testing. Always run ping and traceroute across the private interconnect for at least 30 minutes post-change before closing the maintenance window.
Incident 2: Storage Latency Masquerading as a RAC Issue

AWR showed high gc cr block receive time. Initial assumption was an interconnect problem. Deep investigation revealed storage latency of 50 ms — nodes were waiting for disk I/O, not Cache Fusion.

Lesson: Always check storage I/O latency before blaming RAC or the interconnect. Check v$filestat and storage-level metrics first.
Incident 3: Application Design Killing RAC Performance

The application used a single global sequence for order IDs. Every insert required global coordination across all nodes. This caused enq: SQ contention cluster-wide. Throughput was capped at 200 TPS against a target of 2,000+ TPS.

Lesson: RAC exposes bad application design immediately. Partition sequences per node, or use local sequences with offsets to eliminate global coordination overhead.

8. When RAC Makes Sense (And When It Doesn't)

RAC is not a universal solution. It has specific use cases where it excels and others where it makes things worse.

Good Use Cases for RAC

  • Read-heavy workloads: Reporting, analytics, read scaling
  • High availability requirements: Cannot tolerate planned downtime for patches
  • Partitioned workloads: Each node handles a different data subset
  • Connection scaling: Need to support 10,000+ concurrent connections

Bad Use Cases for RAC

  • Write-intensive OLTP: Cache Fusion overhead degrades write performance
  • Single global sequences: Become cluster-wide bottlenecks immediately
  • Budget-constrained environments: RAC requires expensive hardware and licensing
  • Teams without RAC expertise: Troubleshooting requires deep knowledge

RAC Alternatives to Consider

Requirement | RAC Solution | Alternative Solution
High Availability | RAC cluster | Data Guard with fast failover
Read Scaling | RAC nodes | Active Data Guard read replicas
Zero Downtime Patching | RAC rolling patch | Data Guard rolling upgrade
Connection Pooling | RAC load balancing | Application-level connection pool

9. FAQ

Does RAC provide disaster recovery?
No. RAC provides high availability within a single data center, not disaster recovery across data centers. All RAC nodes access the same shared storage — if that storage fails or the data center fails, the entire RAC cluster fails. For disaster recovery you need Data Guard in addition to RAC. A common architecture is: primary site runs RAC for HA, standby site runs Data Guard for DR.
Can I run RAC over a WAN?
Technically possible with Oracle Extended RAC, but not recommended for most use cases. Cache Fusion requires sub-millisecond latency. WAN latency (typically 20–100 ms) causes severe performance degradation. Extended RAC is designed for metro-area clusters (<100 km) with dark fiber connections. For true geographic distribution, use Data Guard instead.
Does RAC double my database performance?
No. Adding a second RAC node does not double throughput. Read-heavy workloads can scale near-linearly (1.8x with 2 nodes). Write-heavy workloads see minimal scaling (1.2–1.4x with 2 nodes) due to Cache Fusion coordination overhead. Some workloads actually perform worse in RAC due to global contention. RAC is about availability and read scaling, not write performance multiplication.
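These sub-linear numbers fall out of a simple Amdahl-style model, where the serialized fraction stands in for global coordination (Cache Fusion transfers, GES locking). The fractions below are illustrative assumptions, not measurements:

```python
# Hedged sketch: Amdahl-style scaling model for RAC. 'serial_fraction'
# is the assumed share of work that must be globally coordinated and
# therefore does not parallelize across nodes.
def rac_speedup(nodes: int, serial_fraction: float) -> float:
    return round(1 / (serial_fraction + (1 - serial_fraction) / nodes), 2)

print(rac_speedup(2, 0.10))  # 1.82 -> read-heavy: near the 1.8x figure
print(rac_speedup(2, 0.45))  # 1.38 -> write-heavy: coordination dominates
print(rac_speedup(8, 0.45))  # 1.93 -> adding nodes barely helps
```

The last line is the important one: once the coordinated fraction is large, quadrupling the node count buys almost nothing, which is why write-heavy workloads need design changes (partitioning, local sequences) rather than more nodes.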
Should I mention RAC experience on my resume?
Absolutely — but be specific. Don't just write "Oracle RAC experience." Write: "Managed 4-node Oracle 19c RAC cluster serving 50,000 TPS OLTP workload. Troubleshot Cache Fusion performance issues, optimized interconnect configuration, and reduced gc cr block receive time from 8 ms to 1.2 ms through network tuning." Specific metrics and outcomes matter. RAC expertise is valuable because it's complex and few DBAs understand it deeply.

About the Author

Chetan Yadav

Chetan Yadav is a Senior Oracle, PostgreSQL, MySQL, and Cloud DBA with 15+ years of hands-on experience managing production databases across on-premises, hybrid, and cloud environments. He specializes in high availability architecture, performance tuning, disaster recovery, and database migrations.

Throughout his career, Chetan has designed and implemented Oracle RAC clusters for mission-critical systems in finance, healthcare, and e-commerce sectors. He has architected high-availability solutions serving millions of transactions daily and has troubleshot complex Cache Fusion performance issues under production pressure.

This blog focuses on real-world DBA problems, career growth, and practical learning — not theoretical documentation or vendor marketing.