Continuation of Data Pump ....
expdp utility
The Data Pump Export utility provides a mechanism for transferring data objects between Oracle databases. With Data Pump, we can do all exp/imp activities, except incremental backups.
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. This user must have read & write permissions on DIRECTORY.
$ expdp help=y
Keyword | Description (Default) |
---|---|
ATTACH | Attach to an existing job, e.g. ATTACH [=job name]. |
COMPRESSION | Reduce the size of a dumpfile. Valid keyword values are: ALL, (METADATA_ONLY), DATA_ONLY and NONE. |
CONTENT | Specifies data to unload. Valid keyword values are: (ALL), DATA_ONLY, and METADATA_ONLY. |
DATA_OPTIONS | Data layer flags. Valid value is: XML_CLOBS - write XML datatype in CLOB format. |
DIRECTORY | Directory object to be used for dumpfiles and logfiles. (DATA_PUMP_DIR) e.g. create directory extdir as '/path/'; |
DUMPFILE | List of destination dump files (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp. |
ENCRYPTION | Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, METADATA_ONLY, ENCRYPTED_COLUMNS_ONLY, or NONE. |
ENCRYPTION_ALGORITHM | Specify how encryption should be done. Valid keyword values are: (AES128), AES192, and AES256. |
ENCRYPTION_MODE | Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD, and (TRANSPARENT). |
ENCRYPTION_PASSWORD | Password key for creating encrypted data within a dump file. |
ESTIMATE | Calculate job estimates. Valid keyword values are: (BLOCKS) and STATISTICS. |
ESTIMATE_ONLY | Calculate job estimates without performing the export. |
EXCLUDE | Exclude specific object types. e.g. EXCLUDE=TABLE:EMP |
FILESIZE | Specify the size of each dumpfile in units of bytes. |
FLASHBACK_SCN | SCN used to reset session snapshot. |
FLASHBACK_TIME | Time used to find the closest corresponding SCN value. |
FULL | Export entire database (N). To use this option, the user must have the EXP_FULL_DATABASE role. |
HELP | Display help messages (N). |
INCLUDE | Include specific object types. e.g. INCLUDE=TABLE_DATA. |
JOB_NAME | Name of export job (default name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE). |
LOGFILE | Specify log file name (EXPORT.LOG). |
NETWORK_LINK | Name of remote database link to the source system. |
NOLOGFILE | Do not write logfile (N). |
PARALLEL | Change the number of active workers for current job. |
PARFILE | Specify parameter file name. |
QUERY | Predicate clause used to export a subset of a table. e.g. QUERY=emp:"WHERE dept_id > 10". |
REMAP_DATA | Specify a data conversion function. e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO |
REUSE_DUMPFILES | Overwrite destination dump file if it exists (N). |
SAMPLE | Percentage of data to be exported. |
SCHEMAS | List of schemas to export (login schema). |
SOURCE_EDITION | Edition to be used for extracting metadata (from Oracle 11g release2). |
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available. |
TABLES | Identifies a list of tables to export. e.g. TABLES=HR.EMP,SH.SALES:SALES_1995. |
TABLESPACES | Identifies a list of tablespaces to export. |
TRANSPORTABLE | Specify whether transportable method can be used. Valid keyword values are: ALWAYS, (NEVER). |
TRANSPORT_FULL_CHECK | Verify storage segments of all tables (N). |
TRANSPORT_TABLESPACES | List of tablespaces from which metadata will be unloaded. |
VERSION | Version of objects to export. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version. |
The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Data Pump Export interactive mode
While an export is running, press Control-C to switch to interactive mode; this stops the display of messages on the screen, but not the export process itself.
Export> [[here you can use the below interactive commands]]
Command | Description |
---|---|
ADD_FILE | Add dumpfile to dumpfile set. |
CONTINUE_CLIENT | Return to logging mode. Job will be re-started if idle. |
EXIT_CLIENT | Quit client session and leave job running. |
FILESIZE | Default filesize (bytes) for subsequent ADD_FILE commands. |
HELP | Summarize interactive commands. |
KILL_JOB | Detach and delete job. |
PARALLEL | Change the number of active workers for current job. PARALLEL=<number of workers>. |
REUSE_DUMPFILES | Overwrite destination dump file if it exists (N). |
START_JOB | Start/resume current job. Valid value is: SKIP_CURRENT. |
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval] |
STOP_JOB | Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of Data Pump job. |
Note: values within parenthesis are the default values.
Note: several of the options above (such as COMPRESSION=ALL, the ENCRYPTION-related keywords, REMAP_DATA, REUSE_DUMPFILES and TRANSPORTABLE) are enhancements introduced in Oracle 11g Release 1; SOURCE_EDITION was introduced in Oracle 11g Release 2.
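A typical interactive-mode sequence for a long-running export might look like this (a sketch; SYS_EXPORT_FULL_01 is just the default job name for a full export):
Export> STATUS
Export> STOP_JOB=IMMEDIATE
$ expdp system/manager ATTACH=SYS_EXPORT_FULL_01
Export> START_JOB
Export> CONTINUE_CLIENT
==> checking progress, stopping the job, then re-attaching to it later and resuming it.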
Whenever a Data Pump export or import job is running, Oracle creates a master table named after the JOB_NAME in the schema of the user running the job; it is dropped automatically once the job completes. Oracle uses this table to track how much of the job has been completed and where to continue from if the job is stopped and restarted.
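Running jobs (and therefore their master tables) can also be monitored from SQL*Plus; a minimal check:
SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
==> listing the currently defined Data Pump jobs and their states.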
Data Pump Export Examples
SQL> CREATE DIRECTORY dp_dir AS '/u02/dpdata';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO user_name;
==> creating an external directory and granting privileges.
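Existing directory objects and the OS paths they point to can be checked first (querying DBA_DIRECTORIES requires appropriate privileges):
SQL> SELECT directory_name, directory_path FROM dba_directories;
==> listing the directory objects available for DIRECTORY=.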
$ expdp DUMPFILE=liv_full.dmp LOGFILE=liv_full.log FULL=y PARALLEL=4
==> exporting the whole database, using 4 parallel worker processes.
$ expdp DUMPFILE=master.dmp LOGFILE=master.log SCHEMAS=satya
(or)
$ expdp system/manager SCHEMAS=hr DIRECTORY=data_pump_dir LOGFILE=example1.log FILESIZE=300000 DUMPFILE=example1.dmp JOB_NAME=example1
==> exporting all the objects of a schema.
$ expdp ATTACH=EXAMPLE1
==> re-attaching the client to the background job EXAMPLE1.
$ expdp DUMPFILE=search.dmp LOGFILE=search.log SCHEMAS=search,own,tester
==> exporting all the objects of multiple schemas.
$ expdp anand/coffee TABLES=kick DIRECTORY=ext_dir DUMPFILE=expkick_%U.dmp PARALLEL=4 JOB_NAME=kick_export
==> exporting all the rows of the table, using 4 parallel workers and the %U substitution variable to create multiple dump files.
$ expdp DUMPFILE=t5.dmp LOGFILE=t5.log SCHEMAS=ym ESTIMATE_ONLY=Y
(or)
$ expdp LOGFILE=t5.log SCHEMAS=manage ESTIMATE_ONLY=Y
==> estimating the size of the export without actually performing it.
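By default the estimate is based on block counts; if table statistics are current, a statistics-based estimate can be requested instead (a sketch reusing the schema from the example above):
$ expdp LOGFILE=t5_est.log SCHEMAS=manage ESTIMATE=STATISTICS ESTIMATE_ONLY=Y
==> estimating the export size from optimizer statistics instead of block counts.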
$ expdp DUMPFILE=extdir:avail.dmp LOGFILE=extdir:avail.log
==> exporting without the DIRECTORY parameter, by prefixing the file names with the directory object name (extdir).
$ expdp SCHEMAS=u1,u6 .... COMPRESSION=metadata_only
==> exporting two schemas and compressing the metadata.
$ expdp SCHEMAS=cpp,java .... COMPRESSION=all
==> exporting two schemas and compressing both data and metadata (COMPRESSION=ALL requires 11g or later and the Advanced Compression option).
$ expdp username/password FULL=y DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=GRANT INCLUDE=INDEX CONTENT=ALL
==> exporting an entire database to a dump file, including all grants, indexes and data.
$ expdp DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE
==> exporting all the procedures.
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"
==> exporting procedure PROC1 and function FUNC1.
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir INCLUDE=TABLE:"LIKE 'TAB%'"
(or)
$ expdp username/password DUMPFILE=dba.dmp DIRECTORY=dpump_dir EXCLUDE=TABLE:"NOT LIKE 'TAB%'"
==> exporting only those tables whose names start with TAB.
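The quoting needed by INCLUDE and EXCLUDE is shell-dependent and error-prone; placing the filter in a parameter file avoids the escaping (the parameter file name is illustrative):
$ cat exp_tabs.par
DIRECTORY=dpump_dir
DUMPFILE=dba.dmp
LOGFILE=dba.log
INCLUDE=TABLE:"LIKE 'TAB%'"

$ expdp username/password PARFILE=exp_tabs.par
==> the same TAB% export as above, with the INCLUDE filter supplied through a parameter file.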
$ expdp TABLES=hr.employees VERSION=10.1 DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp
==> exporting in a format readable by release 10.1. Data Pump Import can always read dump file sets created by older versions of Data Pump Export.
$ expdp TABLES=holder,activity REMAP_DATA=holder.cardno:hidedata.newcc REMAP_DATA=activity.cardno:hidedata.newcc DIRECTORY=dpump_dir DUMPFILE=hremp2.dmp
==> exporting and remapping of data.
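REMAP_DATA expects a packaged function that takes a column value and returns the remapped value. A possible sketch of the hidedata.newcc function assumed by the example above (the datatype and masking logic are only illustrative, not taken from the original post):
create or replace package hidedata as
  function newcc (p_cardno varchar2) return varchar2;
end hidedata;
/
create or replace package body hidedata as
  function newcc (p_cardno varchar2) return varchar2 is
  begin
    -- keep only the last four characters, mask the rest
    return rpad('*', greatest(length(p_cardno) - 4, 0), '*') || substr(p_cardno, -4);
  end newcc;
end hidedata;
/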
Exporting using Data Pump API (DBMS_DATAPUMP package)
declare
  handle number;
begin
  -- open a schema-mode export job
  handle := dbms_datapump.open('EXPORT', 'SCHEMA');
  -- write the dump file to the EXTDIR directory object
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  -- export only the SCOTT schema
  dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', '=''SCOTT''');
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  -- detach from the job; it keeps running in the background
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/
impdp utility
The Data Pump Import utility provides a mechanism for transferring data objects between Oracle databases.
Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line. This user must have read & write permissions on DIRECTORY.
$ impdp help=y
Keyword | Description (Default) |
---|---|
ATTACH | Attach to an existing job, e.g. ATTACH [=job name]. |
CONTENT | Specifies data to load. Valid keywords are:(ALL), DATA_ONLY, and METADATA_ONLY. |
DATA_OPTIONS | Data layer flags. Valid value is: SKIP_CONSTRAINT_ERRORS-constraint errors are not fatal. |
DIRECTORY | Directory object to be used for dump, log, and sql files. (DATA_PUMP_DIR) |
DUMPFILE | List of dumpfiles to import from (EXPDAT.DMP), e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp. |
ENCRYPTION_PASSWORD | Password key for accessing encrypted data within a dump file. Not valid for network import jobs. |
ESTIMATE | Calculate job estimates. Valid keywords are:(BLOCKS) and STATISTICS. |
EXCLUDE | Exclude specific object types. e.g. EXCLUDE=TABLE:EMP |
FLASHBACK_SCN | SCN used to reset session snapshot. |
FLASHBACK_TIME | Time used to find the closest corresponding SCN value. |
FULL | Import everything from source (Y). To use this option (a full import of the database), the user must have the IMP_FULL_DATABASE role. |
HELP | Display help messages (N). |
INCLUDE | Include specific object types. e.g. INCLUDE=TABLE_DATA. |
JOB_NAME | Name of import job (default name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE). |
LOGFILE | Log file name (IMPORT.LOG). |
NETWORK_LINK | Name of remote database link to the source system. |
NOLOGFILE | Do not write logfile. |
PARALLEL | Change the number of active workers for current job. |
PARFILE | Specify parameter file name. |
PARTITION_OPTIONS | Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and (NONE). |
QUERY | Predicate clause used to import a subset of a table. e.g. QUERY=emp:"WHERE dept_id > 10". |
REMAP_DATA | Specify a data conversion function. e.g. REMAP_DATA=EMP.EMPNO:SCOTT.EMPNO |
REMAP_DATAFILE | Redefine datafile references in all DDL statements. |
REMAP_SCHEMA | Objects from one schema are loaded into another schema. |
REMAP_TABLE | Table names are remapped to another table. e.g. REMAP_TABLE=HR.EMPLOYEES:EMPS. |
REMAP_TABLESPACE | Tablespace objects are remapped to another tablespace. |
REUSE_DATAFILES | Tablespace will be initialized if it already exists (N). |
SCHEMAS | List of schemas to import. |
SKIP_UNUSABLE_INDEXES | Skip indexes that were set to the Index Unusable state. |
SOURCE_EDITION | Edition to be used for extracting metadata (from Oracle 11g release2). |
SQLFILE | Write all the SQL DDL to a specified file. |
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available. |
STREAMS_CONFIGURATION | Enable the loading of streams metadata. |
TABLE_EXISTS_ACTION | Action to take if imported object already exists. Valid keywords: (SKIP), APPEND, REPLACE and TRUNCATE. |
TABLES | Identifies a list of tables to import. e.g. TABLES=HR.EMP,SH.SALES:SALES_1995. |
TABLESPACES | Identifies a list of tablespaces to import. |
TARGET_EDITION | Edition to be used for loading metadata (from Oracle 11g release2). |
TRANSFORM | Metadata transform to apply to applicable objects. Valid keywords: SEGMENT_ATTRIBUTES, STORAGE, OID and PCTSPACE, LOB_STORAGE (new Oracle Database 12c). |
TRANSPORTABLE | Options for choosing transportable data movement. Valid keywords: ALWAYS and (NEVER). Only valid in NETWORK_LINK mode import operations. |
TRANSPORT_DATAFILES | List of datafiles to be imported by transportable mode. |
TRANSPORT_FULL_CHECK | Verify storage segments of all tables (N). |
TRANSPORT_TABLESPACES | List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations. |
VERSION | Version of objects to import. Valid keywords are: (COMPATIBLE), LATEST, or any valid database version. Only valid for NETWORK_LINK and SQLFILE. |
The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Data Pump Import interactive mode
While importing is going on, press Control-C to go to interactive mode.
Import> [[here you can use the below interactive commands]]
Command | Description (Default) |
---|---|
CONTINUE_CLIENT | Return to logging mode. Job will be re-started if idle. |
EXIT_CLIENT | Quit client session and leave job running. |
HELP | Summarize interactive commands. |
KILL_JOB | Detach and delete job. |
PARALLEL | Change the number of active workers for current job. PARALLEL=<number of workers>. |
START_JOB | Start/resume current job. START_JOB=SKIP_CURRENT will start the job after skipping any action which was in progress when the job was stopped. |
STATUS | Frequency (secs) job status is to be monitored where the default (0) will show new status when available. STATUS[=interval] |
STOP_JOB | Orderly shutdown of job execution and exits the client. STOP_JOB=IMMEDIATE performs an immediate shutdown of Data Pump job. |
Note: several of the options above (such as DATA_OPTIONS, PARTITION_OPTIONS, REMAP_DATA, REMAP_TABLE and TRANSPORTABLE) are enhancements introduced in Oracle 11g Release 1; SOURCE_EDITION and TARGET_EDITION were introduced in Oracle 11g Release 2.
The order of importing objects is:
1. Tablespaces
2. Users
3. Roles
4. Database links
5. Sequences
6. Directories
7. Synonyms
8. Types
9. Tables/Partitions
10. Views
11. Comments
12. Packages/Procedures/Functions
Data Pump Import Examples
$ impdp DUMPFILE=aslv_full.dmp LOGFILE=aslv_full.log PARALLEL=4
==> importing all the exported data, using 4 parallel worker processes.
$ impdp system/manager DUMPFILE=testdb_emp.dmp LOGFILE=testdb_emp_imp.log TABLES=tester.employee
==> importing all the records of a table (the employee table in the tester schema).
$ impdp DUMPFILE=visi.dmp LOGFILE=ref1imp.log TABLES=(brand, mba)
==> importing all the records of a couple of tables.
$ impdp system DUMPFILE=example2.dmp REMAP_TABLESPACE=system:example2 LOGFILE=example2imp.log JOB_NAME=example2
==> importing data, remapping objects from one tablespace to another.
$ impdp DUMPFILE=prod.dmp LOGFILE=prod.log REMAP_TABLESPACE=FRI:WED TABLE_EXISTS_ACTION=REPLACE PARALLEL=4
==> importing data and replacing already existing tables.
$ impdp user1/user1 DUMPFILE=btw:avail.dmp INCLUDE=PROCEDURE
==> importing only procedures from the dump file.
$ impdp username/password DIRECTORY=dpump_dir DUMPFILE=scott.dmp TABLES=scott.emp REMAP_SCHEMA=scott:jim
==> importing tables from scott's schema into jim's schema.
$ impdp DIRECTORY=dpump_dir FULL=Y DUMPFILE=db_full.dmp REMAP_DATAFILE="'C:\DB1\HR\PAYROLL\tbs6.dbf':'/db1/hr/payroll/tbs6.dbf'"
==> importing data by remapping one datafile to another.
$ impdp username/password DIRECTORY=dpump DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql INCLUDE=TABLE,INDEX
==> creates a SQL file with the DDL that could be executed in another database/schema to create the tables and indexes; nothing is actually imported.
$ impdp DIRECTORY=dpump_dir DUMPFILE=emps.dmp REMAP_DATA=emp.empno:fixusers.newempid REMAP_DATA=card.empno:fixusers.newempid TABLE_EXISTS_ACTION=append
==> importing and remapping of data.
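An import can also pull data directly from a source database over a database link, with no dump file at all (the link name and schema are illustrative):
$ impdp system/manager NETWORK_LINK=source_db SCHEMAS=hr DIRECTORY=dpump_dir LOGFILE=hr_net.log
==> importing the hr schema straight over the database link source_db; DIRECTORY is still needed for the log file.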
$ impdp DUMPFILE=PRD_epi_Tue_dump.dmp LOGFILE=PRD_epi_Tue_impdp.log PARALLEL=2 SCHEMAS=epi REMAP_TABLESPACE=index:data EXCLUDE=grant
==> importing the epi schema with 2 parallel workers, remapping the index tablespace to the data tablespace and excluding grants.
Importing using Data Pump API (DBMS_DATAPUMP package)
declare
  handle number;
begin
  -- open a schema-mode import job
  handle := dbms_datapump.open('IMPORT', 'SCHEMA');
  -- read the dump file from the EXTDIR directory object
  dbms_datapump.add_file(handle, 'scott.dmp', 'EXTDIR');
  -- replace any tables that already exist in the target
  dbms_datapump.set_parameter(handle, 'TABLE_EXISTS_ACTION', 'REPLACE');
  dbms_datapump.set_parallel(handle, 4);
  dbms_datapump.start_job(handle);
  -- detach from the job; it keeps running in the background
  dbms_datapump.detach(handle);
exception
  when others then
    dbms_output.put_line(substr(sqlerrm, 1, 254));
end;
/
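The API equivalents of the command-line remap options are available as well; for instance, a schema remap could be added to the block above before start_job (a sketch using DBMS_DATAPUMP.METADATA_REMAP; the schema names are illustrative):
  -- load SCOTT's objects into JIM instead
  dbms_datapump.metadata_remap(handle, 'REMAP_SCHEMA', 'SCOTT', 'JIM');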
Here is a general guideline for using the PARALLEL parameter:
- Set the degree of parallelism to two times the number of CPUs, then tune from there.
- For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of dump files (see the %U example after this list).
- For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
- A PARALLEL value greater than one is available only in Enterprise Edition.
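For example, the %U substitution variable lets Data Pump create one dump file per worker, keeping the number of files in step with PARALLEL (schema and file names are illustrative):
$ expdp system/manager SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr_%U.dmp PARALLEL=4 LOGFILE=hr.log
==> exporting with 4 workers; %U generates hr_01.dmp, hr_02.dmp, ... as needed.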
Original Export is desupported from 10g Release 2.
Original Import will be maintained and shipped forever, so that dump files created by Oracle Version 5.0 through Oracle9i can still be loaded into Oracle 10g and later. Data Pump Import can only read dump files created by Data Pump Export in Oracle Database 10g and later. Oracle recommends that customers migrate to Data Pump.