Estimated Reading Time: 6–7 minutes
Slow queries are one of the biggest causes of performance degradation in MySQL and Aurora MySQL environments. High-latency SQL can create CPU spikes, I/O pressure, row lock waits, replication lag, and application-level timeouts.
This article provides a production-ready MySQL Slow Query Diagnostic Script, explains how to interpret the results, and shows how DBAs can use this script for proactive tuning and operational monitoring.
Table of Contents
1. What Slow Query Diagnostics Mean for MySQL DBAs
2. Production-Ready MySQL Slow Query Diagnostic Script
3. Script Output Explained
4. Additional Performance Metrics to Watch
5. Add-On Scripts (CPU-Heavy and I/O-Heavy SQL)
6. Real-World MySQL DBA Scenario
7. How to Automate These Checks
8. Interview Questions
9. Final Summary
10. FAQ
11. About the Author
12. Call to Action (CTA)
1. What Slow Query Diagnostics Mean for MySQL DBAs
Slow queries lead to:
· High CPU utilisation
· Increased IOPS and latency
· Row lock waits and deadlocks
· Replication lag in Aurora MySQL / RDS MySQL
· Query timeout issues at the application layer
· Poor customer experience under load
MySQL’s Performance Schema provides deep visibility into SQL patterns, allowing DBAs to identify:
· High-latency queries
· Full table scans
· Missing index patterns
· SQL causing temporary tables
· SQL responsible for heavy disk reads
· SQL generating high row examinations
Slow query diagnostics are essential for maintaining consistent performance in production systems.
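Before running any of the diagnostics below, it is worth confirming that Performance Schema is enabled and that statement digests are being collected. A minimal check against the standard Performance Schema tables (nothing beyond a default MySQL 5.7/8.0 or Aurora MySQL install is assumed):

-- performance_schema is read-only at runtime; it is set in my.cnf or the RDS/Aurora parameter group
SHOW VARIABLES LIKE 'performance_schema';

-- the statements_digest consumer feeds events_statements_summary_by_digest
SELECT NAME, ENABLED
FROM performance_schema.setup_consumers
WHERE NAME IN ('statements_digest', 'events_statements_history');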
2. Production-Ready MySQL Slow Query Diagnostic Script
This script analyses execution time, latency, row scans and query patterns using Performance Schema:
/* MySQL Slow Query Diagnostic Script
   Works on: MySQL 5.7, MySQL 8.0, Aurora MySQL */
SELECT
    DIGEST_TEXT AS Query_Sample,
    SCHEMA_NAME AS Database_Name,
    COUNT_STAR AS Execution_Count,
    ROUND(SUM_TIMER_WAIT/1000000000000, 4) AS Total_Time_Seconds,
    ROUND((SUM_TIMER_WAIT/COUNT_STAR)/1000000000000, 6) AS Avg_Time_Per_Exec,
    SUM_ROWS_EXAMINED AS Rows_Examined,
    SUM_ROWS_SENT AS Rows_Sent,
    FIRST_SEEN,
    LAST_SEEN
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql','sys','performance_schema','information_schema')
ORDER BY Total_Time_Seconds DESC
LIMIT 20;
This is a field-tested script used in multiple production environments, including AWS RDS MySQL and Amazon Aurora MySQL.
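On MySQL 8.0 and Aurora MySQL 3.x, the same digest table also exposes latency quantile columns (not available in 5.7), which help spot queries whose tail latency is much worse than their average. A small variation of the script above, as a sketch:

SELECT
    DIGEST_TEXT AS Query_Sample,
    COUNT_STAR AS Execution_Count,
    ROUND(AVG_TIMER_WAIT/1000000000000, 6) AS Avg_Time_Seconds,
    ROUND(QUANTILE_95/1000000000000, 6) AS P95_Seconds,   -- 95th percentile latency (picoseconds converted to seconds)
    ROUND(QUANTILE_99/1000000000000, 6) AS P99_Seconds    -- 99th percentile latency
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql','sys','performance_schema','information_schema')
ORDER BY QUANTILE_99 DESC
LIMIT 20;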
3. Script Output Explained
| Column | Meaning |
| --- | --- |
| Query_Sample | Normalized version of the SQL, used for pattern analysis |
| Database_Name | Schema on which the SQL is executed |
| Execution_Count | How many times the SQL pattern ran |
| Total_Time_Seconds | Total execution time consumed |
| Avg_Time_Per_Exec | Average latency per execution |
| Rows_Examined | Total rows scanned (detects full scans) |
| Rows_Sent | Rows returned by the query |
| FIRST_SEEN / LAST_SEEN | Time window of activity |
These values help DBAs identify the highest-impact SQL patterns immediately.
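Because these are cumulative counters, FIRST_SEEN and LAST_SEEN define the measurement window. If you want a fresh window (for example, while reproducing an incident), the summary table can be reset; this discards the accumulated statistics and requires the appropriate privileges on managed services, so treat it as an optional step:

-- resets all digest statistics so FIRST_SEEN/LAST_SEEN reflect only new activity
TRUNCATE TABLE performance_schema.events_statements_summary_by_digest;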
4. Additional Performance Metrics to Watch
During slow query investigations, always check:
· High Rows_Examined → missing index
· High Avg_Time_Per_Exec → expensive joins or sorting
· Large gap between Rows_Examined and Rows_Sent → inefficient filtering
· High Execution_Count → an inefficient query called repeatedly
· Repeated occurrences between FIRST_SEEN and LAST_SEEN → an ongoing issue
MySQL workload analysis becomes much easier when these metrics are evaluated together.
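The examined-to-sent gap can be surfaced directly from the digest table. A minimal sketch; the ratio threshold of 100 is an arbitrary illustration and should be tuned for your workload:

SELECT
    DIGEST_TEXT,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT,
    ROUND(SUM_ROWS_EXAMINED / GREATEST(SUM_ROWS_SENT, 1), 1) AS Examined_To_Sent_Ratio
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql','sys','performance_schema','information_schema')
  AND SUM_ROWS_EXAMINED / GREATEST(SUM_ROWS_SENT, 1) > 100   -- illustrative threshold
ORDER BY Examined_To_Sent_Ratio DESC
LIMIT 20;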
5. Add-On Scripts (CPU-Heavy and I/O-Heavy SQL)
The first script highlights CPU-heavy SQL by ranking digests on total rows examined, MySQL's closest equivalent to Oracle's "buffer gets":
SELECT
    DIGEST_TEXT,
    SUM_ROWS_EXAMINED,
    COUNT_STAR AS Executions,
    ROUND(SUM_ROWS_EXAMINED/COUNT_STAR, 2) AS Rows_Examined_Per_Exec
FROM performance_schema.events_statements_summary_by_digest
WHERE SCHEMA_NAME NOT IN ('mysql','sys','performance_schema','information_schema')
ORDER BY SUM_ROWS_EXAMINED DESC
LIMIT 10;
The second script identifies I/O-intensive SQL patterns, especially those spilling to temporary tables on disk:
SELECT
    DIGEST_TEXT,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT,
    SUM_CREATED_TMP_TABLES,
    SUM_CREATED_TMP_DISK_TABLES
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_CREATED_TMP_DISK_TABLES DESC
LIMIT 10;
These help diagnose latency issues caused by slow storage or inefficient joins.
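Performance Schema also keeps per-digest counters for full joins, sorts, and statements that used no index at all, which is a quick way to confirm the "inefficient joins" suspicion. A minimal sketch using the standard digest columns:

SELECT
    DIGEST_TEXT,
    SUM_SELECT_FULL_JOIN,      -- joins that fully scanned a joined table (no usable index)
    SUM_SELECT_SCAN,           -- joins that did a full scan of the first table
    SUM_SORT_MERGE_PASSES,     -- sort merge passes (sorts exceeding sort_buffer_size)
    SUM_NO_INDEX_USED          -- executions that used no index at all
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_NO_INDEX_USED DESC
LIMIT 10;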
6. Real-World MySQL DBA Scenario
A typical incident scenario:
1. The application team reports slow API responses
2. CloudWatch shows high read latency
3. The slow query log or Performance Schema shows a SQL digest consuming high execution time
4. The SQL performs a full table scan on a large table
5. A missing index is identified on a WHERE clause or JOIN condition
6. The index is added or the query is refactored
7. Latency drops and performance normalises
This is the real process DBAs follow during incident resolution.
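As a concrete illustration of steps 4 to 6, here is a hypothetical example; the orders table, the customer_id column, and the literal values are invented purely for illustration:

-- Steps 4-5: EXPLAIN shows a full table scan (type = ALL, no usable key)
EXPLAIN
SELECT order_id, total_amount
FROM orders                      -- hypothetical table
WHERE customer_id = 987654;      -- hypothetical predicate with no supporting index

-- Step 6: add the missing index, then re-check the plan
ALTER TABLE orders ADD INDEX idx_orders_customer_id (customer_id);

EXPLAIN
SELECT order_id, total_amount
FROM orders
WHERE customer_id = 987654;      -- should now show type = ref using idx_orders_customer_id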
7. How to Automate These Checks
DBAs typically automate slow query monitoring using:
· Linux cron + shell scripts
· Python automation with scheduling
· n8n workflows + MySQL nodes
· AWS CloudWatch + Lambda alerts for Aurora MySQL
· Grafana + Prometheus exporters
· Slack / Teams notifications for high-latency SQL
Automation ensures issues are detected before users experience downtime.
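Whichever scheduler is used, the core of an automated check is usually a simple threshold query against the digest table, with the alerting tool firing when rows come back. A minimal sketch; the 1-second average and 15-minute window are arbitrary example values, not recommendations:

SELECT
    DIGEST_TEXT,
    COUNT_STAR AS Execution_Count,
    ROUND(AVG_TIMER_WAIT/1000000000000, 6) AS Avg_Time_Seconds,
    LAST_SEEN
FROM performance_schema.events_statements_summary_by_digest
WHERE AVG_TIMER_WAIT > 1 * 1000000000000           -- average latency above 1 second (example threshold)
  AND LAST_SEEN >= NOW() - INTERVAL 15 MINUTE      -- still active in the last 15 minutes (example window)
ORDER BY AVG_TIMER_WAIT DESC;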
8. Interview Questions – Slow Query Diagnostics
Be ready for:
· How do you find the top slow queries in MySQL?
· What is the advantage of Performance Schema?
· What is the difference between Rows_Examined and Rows_Sent?
· What creates temporary disk tables?
· How do you detect missing indexes from slow queries?
· How do you reduce query execution time?
· How does the MySQL slow query log differ from Performance Schema?
Mentioning these scripts gives you a strong technical advantage.
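Several of these questions touch on the slow query log. As a reference point, these are the standard server variables involved; the 1-second threshold is just an example, and on RDS/Aurora these are set through the parameter group rather than SET GLOBAL:

-- Runtime settings on a self-managed server (example values)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;                      -- log statements slower than 1 second
SET GLOBAL log_queries_not_using_indexes = 'ON';

-- Confirm current settings and log file location
SHOW VARIABLES LIKE 'slow_query_log%';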
9. Final Summary
Slow query diagnostics are essential for maintaining high performance in MySQL, Aurora MySQL, and RDS MySQL systems. The diagnostic script provided above offers deep visibility into SQL patterns, latency contributors and row scan behaviour.
This script can be used for daily health checks, tuning analysis, or fully automated monitoring workflows.
10. FAQ – MySQL Slow Query Diagnostics
Q1: What causes slow queries in MySQL?
Missing indexes, inefficient joins, large table scans, temporary table creation, outdated statistics, or poor schema design.
Q2: Does this script work in Aurora MySQL?
Yes, it works in Aurora MySQL 2.x/3.x because Performance Schema is supported.
Q3: Should I enable slow query logs as well?
Yes, slow query logs complement Performance Schema for long-running queries.
Q4: Can this script detect full table scans?
Yes: high Rows_Examined with low Rows_Sent is a clear indicator.
Q5: Does this script impact performance?
No, Performance Schema summary tables are lightweight.
11. About the Author
Chetan Yadav is a Senior Oracle, PostgreSQL, MySQL and Cloud DBA with 14+ years of experience supporting high-traffic production environments across AWS, Azure and on-premise systems. His expertise includes Oracle RAC, ASM, Data Guard, performance tuning, HA/DR design, monitoring frameworks and real-world troubleshooting.
He trains DBAs globally through deep-dive technical content, hands-on sessions and automation workflows using n8n, AI tools and modern monitoring stacks. His mission is to help DBAs solve real production problems and advance into high-paying remote roles worldwide.
Chetan regularly publishes expert content across Oracle, PostgreSQL, MySQL and Cloud DBA technologies—including performance tuning guides, DR architectures, monitoring tools, scripts and real incident-based case studies.
Explore More Technical Work
LinkedIn (Professional Profile & Articles)
https://www.linkedin.com/in/chetanyadavvds/
YouTube – Oracle Foundations Playlist
https://www.youtube.com/playlist?list=PL5TN6ECUWGROHQGXep_5hff-2ageWTp4b
Telegram – LevelUp_Careers DBA Tips
https://t.me/LevelUp_Careers
Instagram – Oracle/Cloud Learning Reels
https://www.instagram.com/levelup_careers/
Facebook Page – OracleDBAInfo
https://www.facebook.com/OracleDBAInfo
These platforms feature guides, scripts, diagrams, troubleshooting workflows and real-world DBA case studies designed for database professionals worldwide.
12. Call to Action
If you found this helpful, follow my blog and LinkedIn for deep Oracle, MySQL, PostgreSQL and Cloud DBA content. I publish real production issues, scripts, case studies and monitoring guides that help DBAs grow in their career.

