Export & Import utilities in Oracle
Export and Import are Oracle utilities that let you export and import database objects and transfer data between databases residing on different hardware platforms and running different Oracle versions. The export (exp) and import (imp) utilities perform logical database backup and recovery: when exporting, database objects are dumped to a binary file, which can then be imported into another Oracle database.
catexp.sql (in $ORACLE_HOME/rdbms/admin) creates the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles (there is no need to run it if catalog.sql was run at database creation time).
Before using these commands, set the ORACLE_HOME, ORACLE_SID and PATH environment variables.
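For example, in a Unix shell (the home path and SID below are illustrative; adjust them to your installation):
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ export ORACLE_SID=testdb
$ export PATH=$ORACLE_HOME/bin:$PATH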
Oracle exp utility
Objects owned by SYS cannot be exported.
To export objects belonging to another schema, you need the EXP_FULL_DATABASE role.
Format: exp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line.
$ exp help=y
Keyword | Description (Default) |
---|---|
USERID | username/password |
FULL | export entire file (N). To do a full database export, the user must have the EXP_FULL_DATABASE role |
BUFFER | size of data buffer. OS dependent |
OWNER | list of owner usernames |
FILE | output files (EXPDAT.DMP) |
TABLES | list of table names |
COMPRESS | import into one extent (Y) |
RECORDLENGTH | length of IO record |
GRANTS | export grants (Y) |
INCTYPE | incremental export type. Valid values are COMPLETE, INCREMENTAL, CUMULATIVE |
INDEXES | export indexes (Y) |
RECORD | track incremental export (Y) |
DIRECT | direct path (N) |
TRIGGERS | export triggers (Y) |
LOG | log file of screen output |
STATISTICS | analyze objects (ESTIMATE) |
ROWS | export data rows (Y) |
PARFILE | parameter filename |
CONSISTENT | cross-table consistency (N). Implements SET TRANSACTION READ ONLY |
CONSTRAINTS | export constraints (Y) |
OBJECT_CONSISTENT | transaction set to read only during object export (N) |
FEEDBACK | display progress (a dot) for every N rows (0) |
FILESIZE | maximum size of each dump file |
FLASHBACK_SCN | SCN used to set session snapshot back to |
FLASHBACK_TIME | time used to get the SCN closest to the specified time |
QUERY | select clause used to export a subset of a table |
RESUMABLE | suspend when a space related error is encountered (N) |
RESUMABLE_NAME | text string used to identify resumable statement |
RESUMABLE_TIMEOUT | wait time for RESUMABLE |
TTS_FULL_CHECK | perform full or partial dependency check for TTS |
VOLSIZE | number of bytes to write to each tape volume (not available from Oracle 11g Release 2) |
TABLESPACES | list of tablespaces to export |
TRANSPORT_TABLESPACE | export transportable tablespace metadata (N) |
TEMPLATE | template name which invokes iAS mode export |
Note: values within parentheses are the default values.
Examples:
$ exp system/manager file=emp.dmp log=emp_exp.log full=y
==> exporting the full database.
$ exp system/manager file=owner.dmp log=owner.log owner=owner direct=y STATISTICS=none
==> exporting all the objects of a schema.
$ exp file=schemas.dmp log=schemas.log owner=master,owner,user direct=y STATISTICS=none
==> exporting all the objects of multiple schemas.
$ exp file=testdb_emp.dmp log=testdb_emp.log tables=scott.emp direct=y STATISTICS=none
==> exporting all the rows of a single table (the emp table in the scott schema).
$ exp file=itinitem.dmp log=itinitem.log tables=tom.ITIN,tom.ITEM
query=\"where CODE in \(\'OT65FR7H\',\'ATQ56F7H\'\)\"
statistics=none
==> exporting the records of tables that satisfy a particular criterion.
$ exp transport_tablespace=y tablespaces=THU statistics=none file=THU.dmp log=thu_exp.log
==> exporting at the tablespace level.
$ exp FILE=file1.dmp,file2.dmp,file3.dmp FILESIZE=10M LOG=multiple.log
==> exporting to multiple files.
$ exp file=scott.dmp log=scott.log inctype=complete
==> taking a complete export (the base for subsequent incremental/cumulative exports).
$ exp file=scott.dmp log=scott.log inctype=cumulative
==> exporting cumulatively (changes since the last complete or cumulative export).
$ exp file=scott.dmp log=scott.log inctype=incremental
==> exporting incrementally (changes since the last complete, cumulative, or incremental export).
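Any of these keywords can also be collected into a parameter file and passed via PARFILE. A minimal sketch, assuming a hypothetical parameter file named exp_scott.par:
$ cat exp_scott.par
file=scott.dmp
log=scott_exp.log
owner=scott
direct=y
statistics=none
$ exp system/manager parfile=exp_scott.par
==> exporting using a parameter file instead of command-line keywords.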
Oracle imp utility
imp provides backward compatibility, i.e., it allows you to import objects that were exported with lower Oracle versions as well.
imp doesn't recreate objects that already exist. It either aborts the import process (the default) or ignores the errors (if you specify IGNORE=Y).
Format: imp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
USERID must be the first parameter on the command line.
$ imp help=y
Keyword | Description (Default) |
---|---|
USERID | username/password |
FULL | import entire file (N). To do a full database import, the user must have the IMP_FULL_DATABASE role |
BUFFER | size of data buffer. OS dependent |
FROMUSER | list of owner usernames |
FILE | input files (EXPDAT.DMP) |
TOUSER | list of usernames |
SHOW | just list file contents (N); used to check the validity of the dump file |
TABLES | list of table names |
IGNORE | ignore create errors (N) |
RECORDLENGTH | length of IO record |
GRANTS | import grants (Y) |
INCTYPE | incremental import type. Valid keywords are SYSTEM (for definitions), RESTORE (for data) |
INDEXES | import indexes (Y) |
COMMIT | commit array insert (N) |
ROWS | import data rows (Y) |
PARFILE | parameter filename |
LOG | log file of screen output |
CONSTRAINTS | import constraints (Y) |
DESTROY | overwrite tablespace datafile (N) |
INDEXFILE | writes the DDL of the objects in the dump file to the specified file |
SKIP_UNUSABLE_INDEXES | skip maintenance of unusable indexes (N) |
FEEDBACK | display progress every x rows (0) |
TOID_NOVALIDATE | skip validation of specified type ids |
FILESIZE | maximum size of each dump file |
STATISTICS | import precomputed statistics (ALWAYS) |
RESUMABLE | suspend when a space related error is encountered (N) |
RESUMABLE_NAME | text string used to identify resumable statement |
RESUMABLE_TIMEOUT | wait time for RESUMABLE |
COMPILE | compile procedures, packages, and functions (Y) |
STREAMS_CONFIGURATION | import streams general metadata (Y) |
STREAMS_INSTANTIATION | import streams instantiation metadata (N) |
VOLSIZE | number of bytes in file on each volume of a file on tape (not available from Oracle 11g Release 2) |
DATA_ONLY | import only data (N) (from Oracle 11g Release2) |
The following keywords apply only to transportable tablespaces:
Keyword | Description (Default) |
---|---|
TRANSPORT_TABLESPACE | import transportable tablespace metadata (N) |
TABLESPACES | tablespaces to be transported into database |
DATAFILES | datafiles to be transported into database |
TTS_OWNERS | users that own data in the transportable tablespace set |
Note: values within parentheses are the default values.
Examples:
$ imp system/manager file=emp.dmp log=emp_imp.log full=y
==> importing all the exported data.
$ imp system/manager file=testdb_emp.dmp log=testdb_emp_imp.log tables=tester.employee
==> importing all the records of a single table (the employee table in the tester schema).
$ imp FILE=two.dmp LOG=two.log IGNORE=Y GRANTS=N INDEXES=N COMMIT=Y TABLES=(brand, book)
==> importing all the records of a couple of tables.
$ imp system/manager file=intenary.dmp log=intenary.log FROMUSER=tom TOUSER=jerry ignore=y
==> importing the data of one schema into another schema.
$ imp "/as sysdba" file=TUE.dmp TTS_OWNERS=OWNER tablespaces=TUE transport_tablespace=y datafiles=TUE.dbf
$ imp file=transporter3.dmp log=transporter3.log inctype=system
==> importing definitions from backup.
$ imp file=transporter3.dmp log=transporter3.log inctype=restore
==> importing data from backup.
$ imp file=spider.dmp log=spider.log show=y
==> checking the validity of the dump file (lists its contents without importing anything).
$ imp file=scott.dmp log=scott.log indexfile=scott_schema.sql
==> writing the DDL of the objects in the exported dump file (the scott schema) to the specified file. This command won't import the objects.
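A common pattern combines INDEXES=N with INDEXFILE: import the data without indexes, then run the captured DDL to rebuild them (in the indexfile, the CREATE INDEX statements are ready to run while other DDL appears as comments). A sketch, with illustrative credentials and file names:
$ imp system/manager file=scott.dmp log=scott_imp.log fromuser=scott touser=scott indexes=n
$ sqlplus scott/tiger @scott_schema.sql
==> importing the data first, then recreating the indexes from the DDL captured with INDEXFILE.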
How to improve Export & Import
exp:
1. Set the BUFFER parameter to a high value (the default is OS dependent).
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to an NFS (Network File System) mount. Exporting to a local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=Y (direct path export); see the sketch after this list.
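A sketch of an export applying several of these tips at once (the file names are illustrative; note that BUFFER applies only to conventional path exports, so RECORDLENGTH is the knob that pairs with DIRECT=Y):
$ exp system/manager file=full_tuned.dmp log=full_tuned.log full=y direct=y recordlength=65535 statistics=none
==> a full direct path export with a larger I/O record length and no statistics.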
imp:
1. Place the file to be imported on a separate disk from the datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to a large size.
4. Stop redo log archiving, if possible.
5. Use COMMIT=N, if possible.
6. Set the BUFFER parameter to a high value (the default is OS dependent).
7. Drop indexes before importing, or set INDEXES=N and build the indexes after the import; indexes can easily be recreated once the data has been imported successfully.
8. Use STATISTICS=NONE.
9. Disable INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in 10g) or COMMIT_WAIT=NOWAIT (in 11g) during import; a combined sketch follows below.
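A sketch of an import applying several of these tips (file names and buffer size are illustrative; the ALTER SYSTEM step assumes 11g and a suitably privileged session):
$ sqlplus "/as sysdba"
SQL> alter system set commit_wait=nowait;
$ imp system/manager file=full_tuned.dmp log=full_tuned_imp.log full=y buffer=10485760 commit=n indexes=n statistics=none
==> importing with a large buffer, COMMIT=N, and indexes deferred until after the import.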
Related Views
DBA_EXP_VERSION
DBA_EXP_FILES
DBA_EXP_OBJECTS
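These views track traditional incremental exports; a quick way to inspect them (assuming incremental exports have been taken, so the views are populated):
SQL> select * from dba_exp_version;
SQL> select * from dba_exp_objects;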
Related Oracle Articles: Datapump in Oracle, Data Pump Export & Import (expdp & impdp).