Performance Tuning
Lesson 8: How do we measure Oracle performance?
Objective: Name the metrics of Oracle performance.

Measuring Oracle Performance

When someone asks, “Is my Oracle database performing well?”, the answer depends on more than a single number. In Oracle 23c, performance tuning is about understanding which parts of the workload consume time and resources, and how that experience looks from the end user’s point of view. To do that, you rely on a set of well-defined performance metrics collected from the database and the underlying platform.

When Is My Database Fully Tuned?

Oracle will almost always run faster if you give it more resources: faster CPUs, more memory, and low-latency storage all reduce waits and response time. A database that can cache almost all frequently used blocks in memory will outperform one that constantly reads from disk.

However, performance tuning is not simply “give Oracle everything.” You balance:

  • Service-level expectations (response time, throughput, SLAs).
  • Workload characteristics (OLTP vs. reporting vs. mixed workloads).
  • Available resources and cost (on-prem hardware or Oracle Cloud service tier).

A well-tuned Oracle 23c database is one where:

  • Most foreground time is spent doing useful work, not waiting on avoidable bottlenecks.
  • Key business operations meet their response-time goals.
  • Resource usage (CPU, I/O, memory) is stable and predictable under normal load.


External Factors That Impact Oracle Performance

Oracle does not run in isolation. When a user reports slow performance, the root cause might be:

  • Network latency or bandwidth limitations between the application tier and the database.
  • CPU saturation or scheduling delays at the OS or virtual machine level.
  • Storage latency, overloaded I/O subsystems, or noisy neighbors in shared environments.
  • Resource limits in Oracle Cloud service tiers or mis-sized database shapes.

If the server or cloud instance is starved for CPU or memory, no amount of SQL tuning inside Oracle will fully compensate. Performance must therefore be evaluated end-to-end: from the moment a request leaves the application until the result is returned to the user.

Core Oracle Performance Metrics

Oracle 23c exposes a rich set of performance views (for example, V$ views, AWR, and ASH) that help you understand what the database is doing. The following are common metrics and metric families you should be able to name and recognize.

  1. Time-based metrics and active sessions
    Modern tuning is driven by time:
    • DB Time – total time spent in the database across all sessions.
    • Average Active Sessions (AAS) – how many sessions are active (working or waiting) on average.
    • Top consumers of DB Time by SQL ID, wait event, or session.

    These metrics appear prominently in AWR and ADDM reports and are the primary way to identify the biggest performance problems.
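The relationship between DB Time and Average Active Sessions can be shown with simple arithmetic. The sketch below is illustrative only; the helper function and the figures are invented, not an Oracle API:

```python
# Average Active Sessions (AAS) relates total DB Time to wall-clock time.
# The numbers below are made up for illustration, not from a real AWR report.

def average_active_sessions(db_time_sec: float, elapsed_sec: float) -> float:
    """AAS = total DB Time across all sessions / wall-clock elapsed time."""
    return db_time_sec / elapsed_sec

# Example: 7,200 seconds of DB Time accumulated during a 3,600-second
# snapshot interval means 2 sessions were active (working or waiting)
# on average.
print(average_active_sessions(7200, 3600))  # 2.0
```

An AAS consistently above the number of CPU cores is a common sign that sessions are queuing for resources rather than doing useful work.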

  2. Wait events and wait classes
    Wait events describe why sessions are not on CPU. Common examples include:
    • db file sequential read – single-block I/O (index lookups, OLTP reads).
    • db file scattered read – multi-block I/O (full table scans, large reads).
    • log file sync – waits for redo to be flushed during COMMIT.
    • Enqueue and latch waits – contention for locks or internal structures.

    Waits are grouped into wait classes (User I/O, System I/O, Commit, Concurrency, Network, etc.) to help you quickly see where time is going.
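The roll-up from individual events into wait classes can be sketched as a simple aggregation. The event names below are real Oracle wait events, but the timings and the partial class mapping are invented for this example:

```python
# Summing per-event wait time into wait classes, the way an AWR wait-class
# summary does. The mapping shown is a small illustrative subset; Oracle
# assigns every wait event to a class internally.
from collections import defaultdict

WAIT_CLASS = {
    "db file sequential read": "User I/O",
    "db file scattered read": "User I/O",
    "log file sync": "Commit",
    "enq: TX - row lock contention": "Application",
}

def time_by_wait_class(event_times: dict) -> dict:
    """Aggregate per-event wait seconds into per-class totals."""
    totals = defaultdict(float)
    for event, secs in event_times.items():
        totals[WAIT_CLASS.get(event, "Other")] += secs
    return dict(totals)

sample = {
    "db file sequential read": 120.0,
    "db file scattered read": 45.0,
    "log file sync": 30.0,
}
print(time_by_wait_class(sample))  # {'User I/O': 165.0, 'Commit': 30.0}
```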

  3. Buffer cache performance
    The buffer cache stores frequently accessed data blocks in memory. Important metrics include:
    • Logical vs. physical reads (blocks read from memory vs. from disk).
    • Buffer cache hit ratio (how often requests are satisfied from cache).

    A reasonable hit ratio is helpful, but it is no longer the only or primary metric. A better question is: “How much DB Time is spent waiting on I/O, and can I reduce it?”
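The classic hit-ratio calculation is straightforward once you have the counters (in Oracle they come from V$SYSSTAT); the values here are invented for illustration:

```python
# Classic buffer cache hit ratio:
#   hit ratio = 1 - physical_reads / (db_block_gets + consistent_gets)
# Counter values are illustrative, not from a real system.

def buffer_cache_hit_ratio(physical_reads: int,
                           db_block_gets: int,
                           consistent_gets: int) -> float:
    """Fraction of block requests satisfied from the buffer cache."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# 5,000 physical reads against 100,000 logical reads -> 95% hit ratio.
print(buffer_cache_hit_ratio(5_000, 40_000, 60_000))  # 0.95
```

Note that a high ratio does not guarantee good performance: a workload can achieve 99% while still burning DB Time on the remaining 1% of slow physical reads.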

  4. Redo and log-related metrics
    These metrics measure how efficiently the redo stream is handled:
    • Redo generation rate (bytes per second, per transaction).
    • Waits related to redo: log file sync, log file parallel write, log buffer space waits.

    High redo-related waits may indicate commit-heavy workloads, undersized redo logs, or I/O issues on the redo log devices.
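Redo generation rate is typically derived by sampling the "redo size" statistic at two points in time. A minimal sketch, with invented counter values:

```python
# Redo generation rate from two samples of a cumulative "redo size"
# counter (in Oracle this comes from V$SYSSTAT). Figures are illustrative.

def redo_rate_bytes_per_sec(redo_size_start: int,
                            redo_size_end: int,
                            interval_sec: float) -> float:
    """Average redo generated per second over the sampling interval."""
    return (redo_size_end - redo_size_start) / interval_sec

# 600 MB of redo over a 10-minute (600-second) interval -> 1 MB/s.
print(redo_rate_bytes_per_sec(0, 600 * 1024 * 1024, 600))  # 1048576.0
```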

  5. Row access and storage efficiency
    Metrics such as row chaining and migration help you understand how efficiently rows fit into blocks:
    • Chained/migrated rows and “continued row” fetches.
    • Block size vs. row length considerations.

    Excessive chaining or migrated rows can increase I/O and degrade query performance.
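One common heuristic compares "table fetch continued row" against total row fetches; the counter values below are invented for this sketch:

```python
# Continued-row fetch ratio, a rough heuristic for chaining/migration
# impact, built from V$SYSSTAT-style counters:
#   "table fetch continued row" / ("table fetch by rowid" + "table scan rows gotten")
# All figures here are illustrative.

def continued_row_ratio(continued_row_fetches: int,
                        rowid_fetches: int,
                        scan_rows: int) -> float:
    """Fraction of row fetches that required following a chained/migrated row."""
    total_fetches = rowid_fetches + scan_rows
    return continued_row_fetches / total_fetches

# 1,000 continued-row fetches out of 1,000,000 total fetches = 0.1%.
print(continued_row_ratio(1_000, 400_000, 600_000))  # 0.001
```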

  6. Sorts and temporary space usage
    Sorting, hashing, and some joins use work areas that may spill to the temporary tablespace if memory is not sufficient. Key metrics include:
    • Number of sorts in memory vs. sorts on disk (TEMP).
    • Temp tablespace usage and related I/O waits.

    In older releases, tuning focused on SORT_AREA_SIZE; in modern Oracle, automatic PGA memory management (for example, PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY) plays the larger role.
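The in-memory sort ratio is a simple proportion of the "sorts (memory)" and "sorts (disk)" statistics. A sketch with invented counts:

```python
# In-memory sort ratio from "sorts (memory)" and "sorts (disk)" counters.
# A low ratio suggests work areas are spilling to the TEMP tablespace.
# The counts below are illustrative.

def in_memory_sort_ratio(sorts_memory: int, sorts_disk: int) -> float:
    """Fraction of sorts completed entirely in memory."""
    return sorts_memory / (sorts_memory + sorts_disk)

# 9,900 in-memory sorts and 100 disk sorts -> 99% in memory.
print(in_memory_sort_ratio(9_900, 100))  # 0.99
```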

  7. Library cache and SQL parsing
    The library cache holds parsed SQL and PL/SQL objects. Important metrics include:
    • Library cache hit ratio (how often parsed objects are reused).
    • Soft parse vs. hard parse rates.
    • Shared pool memory pressure (for example, excessive invalidations or reloads).

    Excessive hard parsing can drive up CPU usage and contention. Using bind variables and avoiding unnecessary SQL text variation are key best practices.
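The soft parse ratio is derived from the total and hard parse counts; the figures below are invented for illustration:

```python
# Soft parse ratio from "parse count (total)" and "parse count (hard)"
# statistics. A high ratio means most SQL is being reused from the
# library cache rather than re-parsed from scratch. Figures illustrative.

def soft_parse_ratio(parse_count_total: int, parse_count_hard: int) -> float:
    """Fraction of parses that were soft (cursor found in the library cache)."""
    return (parse_count_total - parse_count_hard) / parse_count_total

# 10,000 parses, of which only 500 were hard -> 95% soft parses.
print(soft_parse_ratio(10_000, 500))  # 0.95
```

Workloads that use bind variables well typically show soft parse ratios well above 90%.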

  8. SQL-level performance metrics
    Ultimately, users experience performance at the SQL or business-operation level. Oracle tracks:
    • Top SQL by elapsed time, CPU time, buffer gets, and executions.
    • Execution plans and plan changes over time.
    • Per-SQL wait event breakdowns.

    Focusing on the small set of SQL statements that consume most of the DB Time is often the fastest way to achieve meaningful improvements.
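The "Top SQL" idea reduces to ranking statements by their resource consumption. The sketch below invents its SQL IDs and timings; in Oracle these would come from AWR or V$SQL:

```python
# Ranking statements by elapsed time, as the "Top SQL" sections of an
# AWR report do. SQL IDs and timings are invented for this example.

def top_sql_by_elapsed(stats: list, n: int = 3) -> list:
    """Return the SQL IDs of the n statements with the most elapsed time."""
    ranked = sorted(stats, key=lambda s: s["elapsed_sec"], reverse=True)
    return [s["sql_id"] for s in ranked[:n]]

workload = [
    {"sql_id": "a1b2c3", "elapsed_sec": 540.0},
    {"sql_id": "d4e5f6", "elapsed_sec": 1200.0},
    {"sql_id": "g7h8i9", "elapsed_sec": 75.0},
]
print(top_sql_by_elapsed(workload, 2))  # ['d4e5f6', 'a1b2c3']
```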

Together, these metrics provide a structured answer to the question, “Is my database properly tuned?” Rather than guessing, you can point to concrete indicators of buffer cache efficiency, I/O latency, CPU pressure, and SQL-level behavior.

In older releases, many of these statistics were collected with utilities such as utlbstat and utlestat. In Oracle 23c, the primary mechanisms are:

  • Automatic Workload Repository (AWR) – time-based performance snapshots and reports.
  • Active Session History (ASH) – detailed history of active sessions and waits.
  • Automatic Database Diagnostic Monitor (ADDM) – expert analysis of AWR data.
  • Oracle Enterprise Manager / Cloud Console – graphical dashboards over these metrics.

These tools summarize the metrics above and highlight where tuning effort will have the greatest impact.

Measure Oracle Performance – Exercise

Click the exercise link below to complete a matching exercise about tuning metrics.

Measure Oracle Performance - Exercise

