Top-10 Wait Events Query (Universal DB Performance Tuning)
⏱️ Estimated Reading Time: 18 minutes
During a live production slowdown, a junior DBA once jumped straight into query tuning.
Indexes were added, SQL was rewritten, and parameters were debated—yet performance did not improve.
The real issue was never SQL. The database was waiting on something else entirely.
A simple Top-10 wait events query would have revealed the truth in minutes.
Understanding wait events is one of the fastest ways to move from a reactive DBA
to a confident performance engineer trusted during real incidents.
Aurora MySQL Lock Detection Script - Complete Guide 2026
⏱️ Estimated Reading Time: 6–7 minutes
In a production Aurora MySQL environment, undetected locks can silently degrade application performance, cause connection pool exhaustion, and lead to cascading timeouts across microservices. A single long-running transaction holding row locks can block hundreds of queries, turning a minor issue into a critical incident.
This article provides a comprehensive Shell Script for Aurora MySQL Lock Detection and Analysis. It covers blocking sessions, InnoDB lock waits, metadata locks, and transaction isolation issues—perfect for daily monitoring, incident response, or pre-deployment validation.
1. Why Lock Detection Matters
Undetected locking typically surfaces as:
- Replication Lag on Readers: lock waits on the writer propagate to read replicas
- Sudden SLA Breaches: P99 latency spikes from 50 ms to 5+ seconds
Running a unified lock detection script ensures you catch blocking chains, identify victim queries, and resolve issues before they trigger PagerDuty alerts.
2. Production-Ready Lock Detection Script
This shell script combines Performance Schema queries, InnoDB lock analysis, and metadata lock detection to provide a complete locking overview.
Note: Execute this script with a MySQL user having PROCESS and SELECT privileges on performance_schema and information_schema.
📋 aurora_lock_detection.sh
#!/bin/bash
# ====================================================
# Aurora MySQL Lock Detection & Analysis Script
# Author: Chetan Yadav
# Usage: ./aurora_lock_detection.sh
# ====================================================
# MySQL Connection Parameters
MYSQL_HOST="your-aurora-cluster.cluster-xxxxx.us-east-1.rds.amazonaws.com"
MYSQL_PORT="3306"
MYSQL_USER="monitor_user"
MYSQL_PASS="your_secure_password"
MYSQL_DB="information_schema"
# Output file for detailed logging
OUTPUT_LOG="/tmp/aurora_lock_detection_$(date +%Y%m%d_%H%M%S).log"
echo "==================================================" | tee -a $OUTPUT_LOG
echo " AURORA MYSQL LOCK DETECTION - $(date) " | tee -a $OUTPUT_LOG
echo "==================================================" | tee -a $OUTPUT_LOG
# 1. Check for Blocking Sessions (InnoDB Lock Waits)
# NOTE: information_schema.innodb_lock_waits was removed in MySQL 8.0
# (Aurora MySQL 3.x); use sys.innodb_lock_waits there instead.
echo -e "\n[1] Detecting InnoDB Lock Waits..." | tee -a $OUTPUT_LOG
mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS \
-D $MYSQL_DB -sN <<EOF | tee -a $OUTPUT_LOG
SELECT
r.trx_id AS waiting_trx_id,
r.trx_mysql_thread_id AS waiting_thread,
r.trx_query AS waiting_query,
b.trx_id AS blocking_trx_id,
b.trx_mysql_thread_id AS blocking_thread,
b.trx_query AS blocking_query,
TIMESTAMPDIFF(SECOND, r.trx_wait_started, NOW()) AS wait_seconds
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b
ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r
ON r.trx_id = w.requesting_trx_id
ORDER BY wait_seconds DESC;
EOF
# 2. Check for Long-Running Transactions
echo -e "\n[2] Long-Running Transactions (>30 sec)..." | tee -a $OUTPUT_LOG
mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS \
-D $MYSQL_DB -sN <<EOF | tee -a $OUTPUT_LOG
SELECT
trx_id,
trx_mysql_thread_id AS thread_id,
trx_state,
TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS runtime_sec,
trx_rows_locked,
trx_rows_modified,
SUBSTRING(trx_query, 1, 80) AS query_snippet
FROM information_schema.innodb_trx
WHERE TIMESTAMPDIFF(SECOND, trx_started, NOW()) > 30
ORDER BY runtime_sec DESC;
EOF
# 3. Check for Metadata Locks
echo -e "\n[3] Detecting Metadata Locks..." | tee -a $OUTPUT_LOG
mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS \
-D performance_schema -sN <<EOF | tee -a $OUTPUT_LOG
SELECT
object_schema,
object_name,
lock_type,
lock_duration,
lock_status,
owner_thread_id
FROM metadata_locks
WHERE lock_status = 'PENDING'
AND object_schema NOT IN ('performance_schema', 'mysql');
EOF
# 4. Check Active Processlist
echo -e "\n[4] Active Processlist..." | tee -a $OUTPUT_LOG
mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS \
-e "SHOW FULL PROCESSLIST;" | grep -v "Sleep" | tee -a $OUTPUT_LOG
# 5. Check Last Deadlock
echo -e "\n[5] Last Detected Deadlock..." | tee -a $OUTPUT_LOG
mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PASS \
-e "SHOW ENGINE INNODB STATUS\G" | \
grep -A 50 "LATEST DETECTED DEADLOCK" | tee -a $OUTPUT_LOG
echo -e "\n==================================================" | tee -a $OUTPUT_LOG
echo " LOCK DETECTION COMPLETE. Review: $OUTPUT_LOG " | tee -a $OUTPUT_LOG
echo "==================================================" | tee -a $OUTPUT_LOG
This script consolidates five critical lock detection queries into a single diagnostic report, providing immediate visibility into blocking sessions and lock contention hotspots.
3. Script Output & Analysis Explained
| Check Component | What "Healthy" Looks Like | Red Flags |
| --- | --- | --- |
| InnoDB Lock Waits | Empty result set (no blocking chains) | Any rows indicate active blocking; wait_seconds > 5 is critical |
| Long Transactions | Transactions < 5 seconds | Transactions > 60 seconds with high trx_rows_locked indicate forgotten transactions |
| Metadata Locks | No PENDING locks | PENDING metadata locks block DDL; check for unclosed transactions on that table |
| Processlist | Queries in "Sending data" or "Sorting result" | Multiple queries stuck in "Waiting for table metadata lock" |
4. Understanding MySQL's Locking Mechanisms
Understanding these mechanisms is vital for Aurora DBAs:
InnoDB Row Locks
Acquired automatically during DML operations (UPDATE, DELETE). Uses MVCC (Multi-Version Concurrency Control) to allow non-blocking reads while writes are in progress. Lock waits occur when two transactions try to modify the same row.
Metadata Locks (MDL)
Protect table structure during DDL operations (ALTER TABLE, DROP TABLE). A long-running SELECT can hold a metadata lock that blocks an ALTER TABLE, even though no row locks exist.
Deadlocks
Occur when two transactions acquire locks in opposite orders. InnoDB automatically detects deadlocks and rolls back the smaller transaction (the "victim"). Frequent deadlocks indicate poor transaction design or missing indexes.
Gap Locks
Used in REPEATABLE READ isolation level to prevent phantom reads. Can cause unexpected blocking when queries scan ranges without proper indexes.
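The opposite-order locking pattern described under Deadlocks can be reproduced in two sessions. The `accounts` table and the amounts below are hypothetical, purely for illustration:

```sql
-- Hypothetical schema: accounts(id PRIMARY KEY, bal)
-- Session 1:
BEGIN;
UPDATE accounts SET bal = bal - 10 WHERE id = 1;  -- holds row 1
-- Session 2:
BEGIN;
UPDATE accounts SET bal = bal - 10 WHERE id = 2;  -- holds row 2
-- Session 1:
UPDATE accounts SET bal = bal + 10 WHERE id = 2;  -- blocks, waiting on Session 2
-- Session 2:
UPDATE accounts SET bal = bal + 10 WHERE id = 1;  -- cycle: ERROR 1213 (deadlock)
-- InnoDB rolls back the victim; the surviving UPDATE then proceeds.
```

Accessing `accounts` rows in a consistent `id` order in both sessions removes the cycle entirely.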
5. Troubleshooting Common Lock Issues
If the script reports blocking or long lock waits, follow this workflow:
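A minimal manual version of that workflow, assuming the blocking thread id surfaced by check [1] is 12345 (a hypothetical value):

```sql
-- Confirm what the blocker is doing (or whether it is idle-in-transaction)
SELECT trx_mysql_thread_id, trx_state, trx_started, trx_rows_locked, trx_query
FROM information_schema.innodb_trx
WHERE trx_mysql_thread_id = 12345;

-- Terminate it if it is safe to do so
KILL 12345;
-- On RDS/Aurora, plain KILL is restricted; use the RDS procedure instead:
CALL mysql.rds_kill(12345);
```

Always confirm with the application team before killing a writer transaction: the rollback of a large transaction can itself take time and hold locks.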
6. Automating Lock Detection
For automated alerting, publish a custom lock-wait metric and alarm on it in CloudWatch:
- Trigger if InnoDB_Lock_Waits > 5 for 2 consecutive periods
- Send an SNS notification to the on-call engineer
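The alarm above needs a metric to watch. Below is a minimal publisher sketch; the namespace, metric name, and embedded query are assumptions, and on Aurora MySQL 3.x the lock-wait view lives in sys/performance_schema rather than information_schema:

```shell
#!/usr/bin/env bash
# Publish the current InnoDB lock-wait count as a CloudWatch custom metric.

# Count non-empty result rows from `mysql -sN` output on stdin.
count_rows() {
  grep -c '[^[:space:]]' || true
}

# Push a value to CloudWatch (requires AWS CLI credentials with
# cloudwatch:PutMetricData permission).
publish_lock_waits() {
  aws cloudwatch put-metric-data \
    --namespace "Custom/AuroraMySQL" \
    --metric-name "InnoDB_Lock_Waits" \
    --value "$1"
}

# Wiring (run from cron; host and credentials as in the main script):
# WAITS=$(mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASS" -sN \
#   -e "SELECT requesting_trx_id FROM information_schema.innodb_lock_waits;" \
#   | count_rows)
# publish_lock_waits "$WAITS"
```

Pair this with `aws cloudwatch put-metric-alarm` using a threshold of 5 over 2 evaluation periods to match the trigger described above.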
Method 3: Performance Insights Integration
Aurora's Performance Insights automatically tracks lock waits. Use this script as a supplementary deep-dive tool when Performance Insights shows spikes in wait/io/table/sql/handler or wait/lock/table/sql/handler.
7. Interview Questions: MySQL Lock Troubleshooting
Prepare for these questions in Aurora/MySQL DBA interviews:
Q: What's the difference between InnoDB row locks and table locks?
A: InnoDB uses row-level locking for DML operations, allowing high concurrency. Table locks (LOCK TABLES) lock the entire table and block all other operations. MyISAM uses table locks by default; InnoDB uses row locks with MVCC.
Q: How does MySQL's REPEATABLE READ isolation level cause deadlocks?
A: REPEATABLE READ uses gap locks to prevent phantom reads. If two transactions scan overlapping ranges without proper indexes, they can acquire gap locks in opposite orders, causing deadlocks. READ COMMITTED avoids gap locks but allows phantom reads.
Q: How do you identify the blocking query in a lock wait scenario?
A: Query information_schema.innodb_lock_waits joined with innodb_trx to map blocking_trx_id to the actual query. Use SHOW ENGINE INNODB STATUS for detailed lock information including locked record details.
Q: What causes metadata lock timeouts in production?
A: Long-running queries or unclosed transactions holding shared metadata locks. Even a simple SELECT with an open transaction prevents DDL operations. Use lock_wait_timeout and ensure applications properly close connections.
Q: How do you prevent deadlocks at the application level?
A: (1) Access tables in consistent order across all transactions, (2) Keep transactions short, (3) Use appropriate indexes to reduce gap locks, (4) Consider READ COMMITTED isolation if acceptable, (5) Implement exponential backoff retry logic.
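Point (5) above, exponential backoff, can be sketched as a generic shell helper. The mysql invocation in the usage comment is a placeholder; MySQL reports deadlocks as error 1213:

```shell
#!/usr/bin/env bash
# Retry a command with exponential backoff: 1s, 2s, 4s, ... between attempts.
retry_with_backoff() {
  local max_attempts=$1; shift
  local attempt=1 delay=1
  while true; do
    if "$@"; then
      return 0                      # command succeeded
    fi
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1                      # give up after max_attempts
    fi
    sleep "$delay"
    delay=$(( delay * 2 ))
    attempt=$(( attempt + 1 ))
  done
}

# Usage sketch (hypothetical transaction wrapped in a SQL file):
# retry_with_backoff 4 mysql -e "SOURCE /opt/dba/transfer_funds.sql"
```

In application code the same shape applies: catch the deadlock error, sleep with a doubling (ideally jittered) delay, and re-run the whole transaction, never just the failed statement.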
8. Final Summary
A healthy Aurora MySQL cluster requires proactive lock monitoring, not just reactive troubleshooting. The script provided above delivers instant visibility into blocking sessions, long transactions, and metadata lock contention.
Use this script as part of your Daily Health Check routine and integrate it with CloudWatch alarms for real-time alerting. Combine it with Performance Insights for comprehensive lock analysis during incidents.
Key takeaways:
- Metadata locks can block DDL even without row lock contention
- Deadlocks indicate transaction design issues or missing indexes
- Automate monitoring with CloudWatch custom metrics
9. FAQ
Q1: Can this script impact production performance?
A: The queries access information_schema and performance_schema, which are lightweight metadata operations. Running every 5 minutes has negligible impact. Avoid running every 10 seconds on large clusters.
Q2: What if the blocking query shows NULL?
A: The transaction may have completed its last statement but not yet committed, so trx_query is NULL while its locks are still held. Check trx_state in innodb_trx: a blocker sitting in RUNNING state with a NULL query is idle-in-transaction. Kill it if it has been idle for more than 5 minutes.
Q3: How do I grant minimum privileges for the monitoring user?
A: MySQL does not allow explicit grants on information_schema; visibility there is governed by the PROCESS privilege. Two grants are enough:
GRANT PROCESS ON *.* TO 'monitor_user'@'%';
GRANT SELECT ON performance_schema.* TO 'monitor_user'@'%';
Q4: Does this work with Aurora MySQL 2.x and 3.x?
A: Aurora MySQL 2.x (MySQL 5.7) supports the script as written. On Aurora MySQL 3.x (MySQL 8.0), information_schema.innodb_lock_waits was removed; use the sys.innodb_lock_waits view or performance_schema.data_lock_waits for the first check. MySQL 8.0's enhanced performance_schema lock tables allow even deeper analysis.
Q5: What's the difference between this and Performance Insights?
A: Performance Insights provides visual dashboards and historical trends. This script gives real-time CLI output with the specific blocking chains and thread IDs you need for kill commands, which makes it ideal for incident response and automation.
About the Author
Chetan Yadav is a Senior Oracle, PostgreSQL, MySQL and Cloud DBA with 14+ years of experience supporting high-traffic production environments across AWS, Azure and on-premise systems. His expertise includes Oracle RAC, ASM, Data Guard, performance tuning, HA/DR design, monitoring frameworks and real-world troubleshooting.
He trains DBAs globally through deep-dive technical content, hands-on sessions and automation workflows. His mission is to help DBAs solve real production problems and advance into high-paying remote roles worldwide.
If you are a DBA, you know the panic of a "Quiet Standby." The alerts are silent. The phone isn't ringing. But deep down, you wonder: Is my Disaster Recovery (DR) site actually in sync, or has it been stuck on Sequence #10452 since last Tuesday?
Too many monitoring tools (like OEM or Zabbix) only trigger an alert when the lag hits a threshold (e.g., "Lag > 30 Mins"). By then, it’s often too late. You don't just want to know if there is a lag; you need to know where the lag is.
Is it the Network (Transport Lag)? Or is it the Disk/CPU (Apply Lag)?
Below is the exact script I use in my daily health checks. It consolidates five dynamic performance views (v$database, v$dataguard_stats, v$managed_standby, v$archive_gap, v$dataguard_status) into one single "Truth" report.
The Script (dg_health_check.sql)
Save this as dg_health_check.sql and run it on your Standby Database.
SQL
SET LINESIZE 200 PAGESIZE 1000 FEEDBACK OFF ECHO OFF VERIFY OFF
COL name FORMAT a30
COL value FORMAT a20
COL unit FORMAT a30
COL time_computed FORMAT a25
COL process FORMAT a10
COL status FORMAT a15
COL sequence# FORMAT 99999999
COL block# FORMAT 999999
COL error_message FORMAT a50
PROMPT ========================================================
PROMPT ORACLE DATA GUARD HEALTH CHECK (Run on Standby)
PROMPT ========================================================
PROMPT
PROMPT 1. DATABASE ROLE & PROTECTION MODE
PROMPT --------------------------------------------------
SELECT name, db_unique_name, database_role, open_mode, protection_mode
FROM v$database;
PROMPT
PROMPT 2. REAL-TIME LAG STATISTICS (The Source of Truth)
PROMPT --------------------------------------------------
-- Transport Lag = Delay in receiving data (Network Issue)
-- Apply Lag     = Delay in writing data (IO/CPU Issue)
SELECT name, value, unit, time_computed
FROM v$dataguard_stats
WHERE name IN ('transport lag', 'apply lag', 'estimated startup time');
PROMPT
PROMPT 3. MRP (MANAGED RECOVERY PROCESS) STATUS
PROMPT --------------------------------------------------
-- IF NO ROWS SELECTED: Your recovery is STOPPED.
-- Look for 'APPLYING_LOG' or 'WAIT_FOR_LOG'
SELECT process, status, thread#, sequence#, block#
FROM v$managed_standby
WHERE process LIKE 'MRP%';
PROMPT
PROMPT 4. GAP DETECTION
PROMPT --------------------------------------------------
-- If rows appear here, you have a missing archive log
-- that FAL_SERVER could not fetch.
SELECT * FROM v$archive_gap;
PROMPT
PROMPT 5. RECENT ERRORS (Last 10 Events)
PROMPT --------------------------------------------------
SELECT TO_CHAR(timestamp, 'DD-MON-RR HH24:MI:SS') AS err_time, message
FROM v$dataguard_status
WHERE severity IN ('Error', 'Fatal')
AND timestamp > SYSDATE - 1
ORDER BY timestamp DESC
FETCH FIRST 10 ROWS ONLY;
PROMPT ========================================================
PROMPT END OF REPORT
PROMPT ========================================================
How to Analyze the Output (Like a Senior DBA)
Scenario A: High Transport Lag
What you see: Transport Lag is high (e.g., +00 01:20:00), but Apply Lag is low.
What it means: Your Primary database is generating Redo faster than your network can ship it.
The Fix: Check your network bandwidth. If you are using Oracle 19c or 23ai, consider enabling Redo Compression in your Data Guard broker configuration (EDIT DATABASE <standby_name> SET PROPERTY RedoCompression = 'ENABLE').
Scenario B: High Apply Lag
What you see: Transport Lag is near 0, but Apply Lag is climbing (e.g., +00 00:45:00).
What it means: The data is there (on the standby server), but the database can't write it to disk fast enough. This often happens during batch loads or index rebuilds on the Primary.
The Fix: Check I/O stats on the Standby. Ensure you are using Real-Time Apply so the MRP (Managed Recovery Process) reads directly from Standby Redo Logs (SRLs) rather than waiting for archive logs to be finalized.
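To verify or re-enable Real-Time Apply on the standby, a minimal sketch (from 12c onward Real-Time Apply is the default when standby redo logs exist, so treat this as a manual fallback; the dest_id below assumes the local destination is 1):

```sql
-- Check whether recovery is reading from standby redo logs
-- (look for MANAGED REAL TIME APPLY)
SELECT recovery_mode FROM v$archive_dest_status WHERE dest_id = 1;

-- Restart managed recovery with Real-Time Apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

If recovery_mode still shows archived-log apply, confirm that standby redo logs are configured and sized to match the online redo logs on the Primary.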
Scenario C: MRP Status is "WAIT_FOR_GAP"
What you see: In Section 3, the status is WAIT_FOR_GAP.
What it means: A severe gap has occurred. The Standby is missing a specific sequence number and cannot proceed until you manually register that file.
The Fix: Run the query in Section 4 (v$archive_gap) to identify the missing sequence, restore it from backup, and register it.
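A sketch of that recovery, where the sequence number and file path are hypothetical:

```sql
-- 1. Identify the missing range on the standby
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- 2. Restore the archive logs from a Primary backup, copy them to the
--    standby host, then register each file so MRP can resume:
ALTER DATABASE REGISTER LOGFILE '/u01/arch/1_10452_1122334455.arc';
```

Once registered, restart managed recovery and re-run Section 3 of the report to confirm the MRP moves past the gap.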
Why this works in 2026
Old school scripts relied on v$archived_log, which only tells you history. In modern Oracle Cloud (OCI) and Hybrid environments, v$dataguard_stats is the only view that accurately calculates the time difference between the Primary commit and the Standby visibility.
Slow queries are one of the biggest reasons for performance degradation in MySQL and Aurora MySQL environments. High-latency SQL can create CPU spikes, I/O pressure, row lock waits, replication lag, and application-level timeouts.
This article provides a production-ready MySQL Slow Query Diagnostic Script, explains how to interpret the results, and shows how DBAs can use this script for proactive tuning and operational monitoring.
Table of Contents
1. What Slow Query Diagnostics Mean for MySQL DBAs
2. Production-Ready MySQL Slow Query Diagnostic Script
3. Script Output Explained
4. Additional Performance Metrics to Watch
5. Add-On Scripts (Top by Buffer Gets, Disk Reads)
6. Real-World MySQL DBA Scenario
7. How to Automate These Checks
8. Interview Questions
9. Final Summary
10. FAQ
11. About the Author
12. Call to Action (CTA)
1. What Slow Query Diagnostics Mean for MySQL DBAs
Slow queries lead to:
- High CPU utilisation
- Increased IOPS and latency
- Row lock waits and deadlocks
- Replication lag in Aurora MySQL / RDS MySQL
- Query timeout issues at the application layer
- Poor customer experience under load
MySQL's Performance Schema provides deep visibility into SQL patterns, allowing DBAs to identify:
- High-latency queries
- Full table scans
- Missing index patterns
- SQL causing temporary tables
- SQL responsible for heavy disk reads
- SQL generating high row examinations
Slow query diagnostics are essential for maintaining consistent performance in production systems.
2. Production-Ready MySQL Slow Query Diagnostic Script
This script analyses execution time, latency, row scans and query patterns using Performance Schema:
/* MySQL Slow Query Diagnostic Script
Works on: MySQL 5.7, MySQL 8.0, Aurora MySQL
*/
SELECT
DIGEST_TEXT AS Query_Sample,
SCHEMA_NAME AS Database_Name,
COUNT_STAR AS Execution_Count,
ROUND(SUM_TIMER_WAIT/1000000000000, 4) AS Total_Time_Seconds,
ROUND((SUM_TIMER_WAIT/COUNT_STAR)/1000000000000, 6) AS Avg_Time_Per_Exec,
SUM_ROWS_EXAMINED AS Rows_Examined,
SUM_ROWS_SENT AS Rows_Sent,
FIRST_SEEN,
LAST_SEEN
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql','sys','performance_schema','information_schema')
ORDER BY Total_Time_Seconds DESC
LIMIT 20;
This is a field-tested script used in multiple production environments including AWS RDS MySQL and Amazon Aurora MySQL.
3. Script Output Explained

| Column | Meaning |
| --- | --- |
| Query_Sample | Normalized version of the SQL for pattern analysis |
| Database_Name | Schema on which the SQL is executed |
| Execution_Count | How many times the SQL pattern ran |
| Total_Time_Seconds | Total execution time consumed |
| Avg_Time_Per_Exec | Average latency per execution |
| Rows_Examined | Total rows scanned (detects full scans) |
| Rows_Sent | Rows returned by the query |
| FIRST_SEEN / LAST_SEEN | Time window of activity |

These values help DBAs identify the highest-impact SQL patterns immediately.
4. Additional Performance Metrics You Must Watch
During slow query investigations, always check:
- High Rows_Examined → Missing index
- High Avg_Time_Per_Exec → Expensive joins or sorting
- Large gap between Rows_Examined and Rows_Sent → Inefficient filtering
- High Execution_Count → Inefficient query called repeatedly
- Repeated occurrence between FIRST_SEEN and LAST_SEEN → Ongoing issue
MySQL workload analysis becomes easy when these metrics are evaluated together.
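One way to act on the Rows_Examined vs Rows_Sent gap is a ratio query. The 1,000,000-row floor below is an illustrative threshold, not a fixed rule:

```sql
-- Digests whose examined-to-sent ratio suggests poor filtering or full scans
SELECT
DIGEST_TEXT,
SUM_ROWS_EXAMINED,
SUM_ROWS_SENT,
ROUND(SUM_ROWS_EXAMINED / GREATEST(SUM_ROWS_SENT, 1), 1) AS scan_ratio
FROM performance_schema.events_statements_summary_by_digest
WHERE SUM_ROWS_EXAMINED > 1000000
ORDER BY scan_ratio DESC
LIMIT 10;
```

A high scan_ratio usually points at a missing or unused index on the WHERE or JOIN columns of that digest.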
5. Add-On Script: Top SQL by Buffer Gets
Useful for identifying CPU-heavy SQL. MySQL has no Oracle-style "buffer gets" counter; rows examined per execution from the digest table is the closest equivalent:
SELECT
DIGEST_TEXT AS Query_Sample,
COUNT_STAR AS Execution_Count,
SUM_ROWS_EXAMINED AS Rows_Examined,
ROUND(SUM_ROWS_EXAMINED/COUNT_STAR, 2) AS rows_per_exec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_ROWS_EXAMINED DESC
LIMIT 10;
6. Add-On Script: Top SQL by Disk Reads
Identifies IO-intensive SQL patterns, sorted by temporary tables spilled to disk:
SELECT
DIGEST_TEXT,
SUM_ROWS_EXAMINED,
SUM_ROWS_SENT,
SUM_CREATED_TMP_TABLES,
SUM_CREATED_TMP_DISK_TABLES
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 10;
These help diagnose latency issues caused by slow storage or inefficient joins.
7. Real-World MySQL DBA Scenario
A typical incident scenario:
1. Application complains about slow API response
2. CloudWatch shows high read latency
3. Slow query log or Performance Schema shows a SQL digest consuming high execution time
4. SQL performs a full table scan on a large table
5. Missing index identified on a WHERE clause or JOIN condition
6. Index added / query refactored
7. Latency drops, performance normalises
This is the real process DBAs follow for incident resolution.
8. How to Automate These Checks
DBAs typically automate slow query monitoring using:
- Linux cron + shell scripts
- Python automation with scheduling
- n8n workflows + MySQL nodes
- AWS CloudWatch + Lambda alerts for Aurora MySQL
- Grafana + Prometheus exporters
- Slack / Teams notifications for high-latency SQL
Automation ensures issues are detected before users experience downtime.
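For the cron option, a sketch of a crontab entry; the script path, log path, and 15-minute interval are all assumptions to adapt:

```shell
# m    h  dom mon dow  command
*/15   *  *   *   *    /opt/dba/mysql_slow_query_check.sh >> /var/log/mysql_slow_check.log 2>&1
```

The wrapped script would run the diagnostic SQL from Section 2 via the mysql client and push any digest exceeding your latency threshold to Slack, SNS, or a CloudWatch custom metric.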
9. Interview Questions – Slow Query Diagnostics
Be ready for:
- How do you find top slow queries in MySQL?
- What is the advantage of Performance Schema?
- Difference between Rows_Examined and Rows_Sent?
- What creates temporary disk tables?
- How do you detect missing indexes from slow queries?
- How do you reduce query execution time?
- How does the MySQL slow query log differ from Performance Schema?
Mentioning these scripts gives you a strong technical advantage.
10. Final Summary
Slow query diagnostics are essential for maintaining high performance in MySQL, Aurora MySQL, and RDS MySQL systems. The diagnostic script provided above offers deep visibility into SQL patterns, latency contributors and row scan behaviour.
This script can be used for daily health checks, tuning analysis, or fully automated monitoring workflows.
11. FAQ – MySQL Slow Query Diagnostics
Q1: What causes slow queries in MySQL?
A: Missing indexes, inefficient joins, large table scans, temporary table creation, outdated statistics, or poor schema design.
Q2: Does this script work in Aurora MySQL?
A: Yes, it works in Aurora MySQL 2.x/3.x because Performance Schema is supported.
Q3: Should I enable slow query logs as well?
A: Yes, slow query logs complement Performance Schema for long-running queries.
Q4: Can this script detect full table scans?
A: Yes—high Rows_Examined with low Rows_Sent is a clear indicator.
Q5: Does this script impact performance?
A: No, Performance Schema summary tables are lightweight.
Call to Action
If you found this helpful, follow my blog and LinkedIn for deep Oracle, MySQL, PostgreSQL and Cloud DBA content. I publish real production issues, scripts, case studies and monitoring guides that help DBAs grow in their career.