Transporting a tablespace involves two parts:
- Datafiles for the tablespace(s) you are transporting (these contain the table/index segment blocks).
- Metadata describing the objects inside those tablespaces (tables, indexes, constraints, grants, etc.).
In modern Oracle workflows, metadata is commonly moved with Data Pump.
The important design constraint is that the transport set must be self-contained: objects inside the transported tablespaces must not depend on objects that remain outside the set.
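A typical violation is an index stored inside the transport set whose base table lives outside it. The sketch below is illustrative only (the schema, table, and tablespace names are assumptions, not from the lesson):

```sql
-- Illustrative only: this layout makes the set NOT self-contained,
-- because the index (inside warehouse1) depends on a table that
-- lives in a tablespace outside the transport set.
CREATE TABLE app.orders (order_id NUMBER)
  TABLESPACE users;                       -- outside the set

CREATE INDEX app.orders_ix ON app.orders (order_id)
  TABLESPACE warehouse1;                  -- inside the set
```

Transporting only warehouse1 would strand the index without its table, which the transport set check reports as a violation.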
Transportable tablespaces are used when you want to move or share
large, relatively stable collections of data efficiently, such as:
- Migrations and consolidations where moving many gigabytes (or terabytes) row-by-row is too slow.
- Data warehousing workflows where historical data is loaded and then transported into a warehouse environment.
- Distributing read-mostly reference data across environments.
- Archiving by moving older data into a separate database while keeping application data smaller.
1) Validate that the transport set is self-contained
Before you move anything, verify that objects inside the tablespaces do not depend on objects outside the set.
A common approach is to run Oracle’s transport set check and review violations.
-- Check that the transport set is self-contained. DBMS_TTS is the
-- standard package for this; run it as a suitably privileged user.
-- (Available options vary somewhat by Oracle release.)
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('warehouse1', TRUE);

-- Review the violations; an empty result means the set is self-contained.
-- Otherwise, each row describes an object that must be included or refactored.
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
2) Put the tablespaces in READ ONLY (consistency window)
Oracle requires a consistent point-in-time view of the transported datafiles. The usual method is to set the tablespace(s) to READ ONLY:
ALTER TABLESPACE warehouse1 READ ONLY;
If you are transporting a set of tablespaces, repeat this for each one in the transport set.
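For a multi-tablespace set, one common pattern is to generate the ALTER statements from the data dictionary rather than typing each one. The tablespace names below are illustrative:

```sql
-- Generate a READ ONLY statement for every tablespace in the transport set;
-- run the generated statements afterwards.
SELECT 'ALTER TABLESPACE ' || tablespace_name || ' READ ONLY;' AS cmd
  FROM dba_tablespaces
 WHERE tablespace_name IN ('WAREHOUSE1', 'WAREHOUSE2');
```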
3) Export metadata (Data Pump)
In modern Oracle environments, metadata movement is commonly handled using Data Pump export. The export captures the object definitions and related metadata for the transport set.
-- Example shape of a Data Pump transport export (adjust user, directory, filenames, and tablespaces)
expdp user/password DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_meta.log TRANSPORT_TABLESPACES=warehouse1
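A parameter file keeps the command repeatable, and TRANSPORT_FULL_CHECK asks Data Pump to verify containment in both directions (objects inside depending on objects outside, and vice versa). Filenames and tablespace names here are illustrative:

```shell
# tts_meta.par -- hypothetical parameter file
# DIRECTORY=dp_dir
# DUMPFILE=tts_meta.dmp
# LOGFILE=tts_meta.log
# TRANSPORT_TABLESPACES=warehouse1
# TRANSPORT_FULL_CHECK=YES

expdp user/password PARFILE=tts_meta.par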
4) Copy the datafiles (and convert if cross-platform)
Copy the tablespace datafiles to storage that is accessible by the target database host.
If you are moving between platforms with different endian formats, datafile conversion may be required (commonly via RMAN conversion workflows).
-- Cross-platform conversion is typically handled with RMAN CONVERT.
-- Example run on the source host; the target platform name, and whether
-- you convert at source or target, depend on your platform pair and storage.
RMAN> CONVERT TABLESPACE warehouse1
      TO PLATFORM 'Linux x86 64-bit'
      FORMAT '/u02/stage/%U';
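To decide whether conversion is needed at all, compare the endian formats of the source and target platforms using the data dictionary:

```sql
-- Endian format of the local database's platform.
SELECT d.platform_name, tp.endian_format
  FROM v$database d
  JOIN v$transportable_platform tp
    ON d.platform_name = tp.platform_name;

-- All platforms Oracle supports for transport, with their endianness.
SELECT platform_name, endian_format
  FROM v$transportable_platform
 ORDER BY platform_name;
```

Run the first query on both source and target; if the endian formats match, no conversion is required.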
5) Import metadata at the target (Data Pump)
On the target database, import the metadata and point Data Pump to the transported datafiles.
-- Example shape of a Data Pump transport import (adjust directory, dumpfile, and datafile paths)
impdp user/password DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_imp.log TRANSPORT_DATAFILES='/u02/oradata/warehouse1_01.dbf'
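When the tablespace has more than one datafile, list each file in TRANSPORT_DATAFILES; REMAP_SCHEMA can be added if the owning schema differs at the target. The paths and schema names below are illustrative:

```shell
impdp user/password DIRECTORY=dp_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_imp.log \
  TRANSPORT_DATAFILES='/u02/oradata/warehouse1_01.dbf','/u02/oradata/warehouse1_02.dbf' \
  REMAP_SCHEMA=dw_owner:dw_owner_tgt
```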
6) Set READ WRITE and validate
After import and verification steps, place the tablespace back in READ WRITE mode:
ALTER TABLESPACE warehouse1 READ WRITE;
Then perform validation appropriate to your environment (row counts, key constraints, application smoke tests, and query performance checks).
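A few dictionary checks commonly form part of this validation (the tablespace and schema names below are illustrative):

```sql
-- Confirm the tablespace is back in READ WRITE mode.
SELECT tablespace_name, status
  FROM dba_tablespaces
 WHERE tablespace_name = 'WAREHOUSE1';

-- Confirm the expected segments arrived with the tablespace.
SELECT segment_type, COUNT(*) AS segments
  FROM dba_segments
 WHERE tablespace_name = 'WAREHOUSE1'
 GROUP BY segment_type;

-- Spot-check that constraints are enabled and validated.
SELECT constraint_name, status, validated
  FROM dba_constraints
 WHERE owner = 'DW_OWNER';
```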
The next step is to practice the concept. Click the Exercise link below to work through a transportable tablespaces scenario.
Transportable Tablespaces - Exercise
In the next lesson, you will examine operational considerations related to READ ONLY tablespaces.