Database Backup

Lesson 5: Using Export to create incremental backups
Objective: Use Export for incremental and cumulative backups.

Using Export to create Incremental Backups

Now that you have learned how to do a full mode backup, we will review cumulative and incremental backups. You should always perform a complete backup before a cumulative or incremental backup. These exports identify and export only those objects that have changed since the last export. Any modification (insert, update, or delete) on a table qualifies that table for an incremental export.
  • Cumulative
    A cumulative export exports only those tables that have been modified or created since the last cumulative or complete export. To perform this type of export, you need to set the parameter inctype=cumulative. Any preceding incremental exports can then be discarded.
    The following command illustrates the use of the inctype parameter with a value of cumulative:
    C:\Oracle\bin> exp80 userid=system/manager full=y
    inctype=cumulative file=C:\export\exp002.dmp
    
  • Incremental
    An incremental export exports all tables modified or created since the last incremental, cumulative, or complete export. To perform this type of export, you need to set the parameter inctype=incremental.
    The following command illustrates the use of the inctype parameter with a value of incremental:
    C:\Oracle\bin> exp80 userid=system/manager full=y
    inctype=incremental file=C:\export\exp003.dmp
    

It is important to note that all rows of a table are exported even if only one byte has changed. Use an incremental export when your application has several small tables that are regularly modified. If your application is made up of a few large tables that are regularly modified, the performance benefit of incremental backups is lost: a cumulative or complete backup may take about the same amount of time as an incremental backup.
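As a hedged sketch of how these export types might be combined in practice (the weekly schedule and file names below are illustrative assumptions, not part of the original lesson), a simple cycle could look like this:

    C:\Oracle\bin> exp80 userid=system/manager full=y inctype=complete file=C:\export\exp_sun.dmp
    C:\Oracle\bin> exp80 userid=system/manager full=y inctype=incremental file=C:\export\exp_mon.dmp
    C:\Oracle\bin> exp80 userid=system/manager full=y inctype=incremental file=C:\export\exp_tue.dmp
    C:\Oracle\bin> exp80 userid=system/manager full=y inctype=cumulative file=C:\export\exp_wed.dmp

The complete export on Sunday establishes the baseline, the daily incremental exports capture only the tables changed since the most recent export of any type, and the midweek cumulative export consolidates the earlier incrementals so they can be discarded.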

The commands above use the legacy Export utility. In Oracle 19c the equivalent tool is Data Pump Export, and the remainder of this lesson gives an overview of the process and the best practices for creating incremental backups with it.
What You Need to Know:
  1. Data Pump Export/Import: In Oracle 19c, the legacy `EXP` and `IMP` utilities have been replaced by Data Pump Export (expdp) and Data Pump Import (impdp), which are more efficient and versatile.
  2. Incremental Backups: Data Pump Export has no direct equivalent of the legacy INCTYPE parameter; incremental-style exports are instead built with table partitioning, flashback options, or user-defined queries.

Steps for Incremental Export Using Data Pump in Oracle 19c:
  1. Set Up a Directory Object

    Data Pump Export requires a directory object in the database to store the dump files:

    CREATE OR REPLACE DIRECTORY dpump_dir AS '/path/to/your/directory';
    GRANT READ, WRITE ON DIRECTORY dpump_dir TO your_user;
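    As a quick sanity check (this assumes your account can query DBA_DIRECTORIES; non-DBA users can query ALL_DIRECTORIES instead), you can confirm that the directory object exists and points to the intended path:

    SQL> SELECT directory_name, directory_path
      2  FROM dba_directories
      3  WHERE directory_name = 'DPUMP_DIR';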
        
  2. Use the QUERY Parameter

    To export only modified or new rows since the last backup, use the QUERY parameter in the expdp command. For example:

    expdp user/password DIRECTORY=dpump_dir DUMPFILE=incremental_%U.dmp LOGFILE=incremental.log TABLES=your_table QUERY="WHERE last_updated > TO_DATE('2025-01-01', 'YYYY-MM-DD')"
        

    This command exports only rows that were updated after the specified date.
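    Because the QUERY string contains spaces and quotation marks, most operating system shells require awkward escaping when it is passed on the command line. A common workaround, sketched here under the assumption that the parameter file is named incr.par and that the table has a last_updated column (both carried over from the example above), is to place the parameters in a parameter file:

    DIRECTORY=dpump_dir
    DUMPFILE=incremental_%U.dmp
    LOGFILE=incremental.log
    TABLES=your_table
    QUERY=your_table:"WHERE last_updated > TO_DATE('2025-01-01', 'YYYY-MM-DD')"

    and then invoke the export with:

    expdp user/password PARFILE=incr.par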

  3. Flashback Technology

    If you need consistent incremental exports, you can use the FLASHBACK_TIME or FLASHBACK_SCN options to export a snapshot of the database at a specific point in time:

    expdp user/password DIRECTORY=dpump_dir DUMPFILE=incremental_%U.dmp LOGFILE=incremental.log FULL=Y FLASHBACK_TIME=SYSTIMESTAMP
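    An alternative sketch uses FLASHBACK_SCN (this assumes your account can query V$DATABASE): first capture the current system change number, then pass it to expdp, which makes it easy to record exactly which point in time the dump represents:

    SQL> SELECT current_scn FROM v$database;

    expdp user/password DIRECTORY=dpump_dir DUMPFILE=incremental_%U.dmp LOGFILE=incremental.log FULL=Y FLASHBACK_SCN=1234567

    Here 1234567 is a placeholder for the value returned by the query.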
        
  4. Tablespace-Level Incremental Exports

    To export specific tablespaces incrementally:

    expdp user/password DIRECTORY=dpump_dir DUMPFILE=ts_incremental_%U.dmp LOGFILE=ts_incremental.log TABLESPACES=users,example
        
  5. Scheduling Export Jobs

    You can schedule these export jobs using Oracle Scheduler (DBMS_SCHEDULER) to run on a regular basis, automating incremental backups.
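    As a minimal sketch of such a schedule (the job name, script path, and timing below are assumptions), an Oracle Scheduler job of type EXECUTABLE can invoke a shell script that wraps the expdp command from step 2:

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB (
        job_name        => 'NIGHTLY_INCR_EXPORT',
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/incremental_export.sh',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2',
        enabled         => TRUE);
    END;
    /

    The shell script referenced by job_action would contain nothing more than the expdp invocation (or the PARFILE version of it).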


Key Considerations:
  • Backup Strategy: Incremental exports are not a complete substitute for RMAN backups, which are more robust and designed for database recovery.
  • Performance: Incremental exports may be slower if tables or partitions are large.
  • Tracking Changes: Consider using a last-modified timestamp column or triggers to keep track of modified rows; note that Oracle's original Change Data Capture (CDC) feature is desupported in current releases, so it is not an option in 19c.
  • Restore Limitations: Data Pump dumps are useful for logical backups but not for physical recovery (e.g., recovering from media failure). Use RMAN for physical backups.
Example: Incremental Export Script
Here’s a basic script for an incremental export. The QUERY filter is qualified with the table name because an unqualified filter combined with FULL=Y would be applied to every table in the export and fail for any table that lacks a last_updated column:
expdp system/password DIRECTORY=dpump_dir DUMPFILE=incremental_%U.dmp LOGFILE=incremental.log TABLES=your_table QUERY=your_table:"WHERE last_updated > TO_DATE('2025-01-01', 'YYYY-MM-DD')"

Conclusion: For logical, incremental backups in Oracle 19c, Data Pump Export is your best tool, especially when combined with filtering (`QUERY`) or flashback options for consistency. However, if you're considering a full backup and recovery strategy, RMAN (Recovery Manager) should be your go-to.

Full Export Mode

A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires that you have the EXP_FULL_DATABASE role.
  • Metadata Filters
    Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters.
    The EXCLUDE and INCLUDE parameters are mutually exclusive. Metadata filters identify a set of objects to be included in or excluded from an Export or Import operation. For example, you could request a full export, but without package specifications or package bodies (a command-line sketch of this appears after the query output below). To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that an index is to be included in an operation, then statistics from that index will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter. If multiple filters are specified for an object type, an implicit AND operation is applied to them. That is, objects pertaining to the job must pass all of the filters applied to their object types.
    The same metadata filter name can be specified multiple times within a job. To see which objects can be filtered, you can query the following views:
    1. DATABASE_EXPORT_OBJECTS for Full-mode exports,
    2. SCHEMA_EXPORT_OBJECTS for schema-mode exports, and
    3. TABLE_EXPORT_OBJECTS for table-mode and tablespace-mode exports.

For example, you could perform the following query:
SQL> SELECT OBJECT_PATH, COMMENTS FROM SCHEMA_EXPORT_OBJECTS
2 WHERE OBJECT_PATH LIKE '%GRANT' AND OBJECT_PATH NOT LIKE '%/%';

The output of this query looks similar to the following:
OBJECT_PATH
--------------------------------------------------------------------------------
COMMENTS
--------------------------------------------------------------------------------
GRANT
Object grants on the selected tables

OBJECT_GRANT
Object grants on the selected tables

PROCDEPOBJ_GRANT
Grants on instance procedural objects

PROCOBJ_GRANT
Schema procedural object grants in the selected schemas

ROLE_GRANT
Role grants to users associated with the selected schemas

SYSTEM_GRANT
System privileges granted to users associated with the selected schemas
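Tying this back to the earlier example of a full export without package specifications or package bodies, a command-line sketch (the dump and log file names are illustrative) would use EXCLUDE like this:

expdp system/password DIRECTORY=dpump_dir DUMPFILE=full_no_pkg.dmp LOGFILE=full_no_pkg.log FULL=Y EXCLUDE=PACKAGE

Excluding the PACKAGE object path removes both package specifications and package bodies from the job; because EXCLUDE and INCLUDE are mutually exclusive, the same command could not also carry an INCLUDE filter.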

Export Utility - Quiz

Click the Quiz link below to check your understanding of using the Export utility.
Export Utility - Quiz
In the next lesson, you will learn how to perform a direct path export.
