First, you will have to export the schemas from the source database. The DBMS_DATAPUMP package has an open procedure that you can use to start an export job, specifying 'SCHEMA' as the job_mode. You will also have to add files for the dump file and the log file using the add_file procedure. Finally, you will use the metadata_filter procedure to specify the schema to be exported. This process should be repeated for each schema you want to export.
declare
  l_dp_handle number;
begin
  -- Open a schema export job.
  l_dp_handle := dbms_datapump.open(
    operation   => 'EXPORT',
    job_mode    => 'SCHEMA',
    remote_link => NULL,
    job_name    => 'TESTUSER1_EXPORT',
    version     => 'LATEST');
  -- Specify the dump file name and directory object name.
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'TESTUSER1.dmp',
    directory => 'TEST_DIR');
  -- Specify the log file name and directory object name.
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'expdpTESTUSER1.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Specify the schema to be exported.
  dbms_datapump.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => '= ''TESTUSER1''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/
You would need to run similar code for each schema you want to export.
Next, you can import the schemas into the destination database. The import process is similar to the export process. You will use the open procedure to start an import job, and the add_file procedure to specify the dump file and the log file. You can also use the metadata_remap procedure if you want to remap the schema name.
declare
  l_dp_handle number;
begin
  -- Open a schema import job.
  l_dp_handle := dbms_datapump.open(
    operation   => 'IMPORT',
    job_mode    => 'SCHEMA',
    remote_link => NULL,
    job_name    => 'TESTUSER1_IMPORT',
    version     => 'LATEST');
  -- Specify the dump file name and directory object name.
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'TESTUSER1.dmp',
    directory => 'TEST_DIR');
  -- Specify the log file name and directory object name.
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'impdpTESTUSER1_IMPORT.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Specify the remap schema name.
  dbms_datapump.metadata_remap(
    handle    => l_dp_handle,
    name      => 'REMAP_SCHEMA',
    old_value => 'TESTUSER1',
    value     => 'TESTUSER2');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/
You can exclude certain schemas (like the SYS and SYSTEM schemas) during the export process by using the exclude parameter.
expdp username/password directory=datapump dumpfile=x.dmp logfile=y.log full=y exclude=schema:"in ('SYS', 'SYSTEM')"
When importing, you can remap multiple schemas at once by using the remap_schema option in the impdp command.
impdp system/**** directory=DATA_PUMP_DIR dumpfile=export_something.dmp logfile=imp.log remap_schema=name:newname,name2:newname2,name3:banana
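The same multi-schema remap can be expressed through DBMS_DATAPUMP by calling metadata_remap once per schema pair. The sketch below mirrors the impdp command above; the schema names, dump file name, and directory object are illustrative placeholders, not fixed names:

```sql
declare
  l_dp_handle number;
begin
  -- Open a schema import job.
  l_dp_handle := dbms_datapump.open(
    operation => 'IMPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'MULTI_REMAP_IMPORT');
  -- Point the job at the dump file (name and directory are illustrative).
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'export_something.dmp',
    directory => 'DATA_PUMP_DIR');
  -- One metadata_remap call per schema to be renamed.
  dbms_datapump.metadata_remap(l_dp_handle, 'REMAP_SCHEMA', 'NAME',  'NEWNAME');
  dbms_datapump.metadata_remap(l_dp_handle, 'REMAP_SCHEMA', 'NAME2', 'NEWNAME2');
  dbms_datapump.metadata_remap(l_dp_handle, 'REMAP_SCHEMA', 'NAME3', 'BANANA');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/
```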
Please note that in RDS for Oracle we don't have access to the underlying operating system, so we cannot run "expdp" and "impdp" as shown in the above answer for exclusions.
We are limited to running the DBMS_DATAPUMP PL/SQL package. It seems the export does support adding a list of schemas, according to this answer. I am not sure if the import can handle all the schemas at the same time, though?
https://forums.oracle.com/ords/apexds/post/dbms-datapump-exclude-6730
"expdp" and "impdp" are client utilities. The actual work of processing dump files happens on the server; there has never been a requirement that the clients be run on the database server itself. (The older utilities "exp" and "imp" did all of the actual work of exporting and importing data themselves, so for performance reasons it was best to run them on the server to reduce round trips; the newer Data Pump clients are essentially just wrappers for PL/SQL that runs on the database server.) You can just install them wherever your other Oracle client utilities are installed (e.g. SQL*Plus, SQL*Loader). Alternatively, you can use the PL/SQL APIs directly.
So if I have 30 schemas in the source RDS, do I need to run the export 30 times on the source RDS to create 30 dump files, then move the files to the destination RDS using S3 integration, and then run the import 30 times on the destination RDS?
If I am doing the export at the schema level, why do I need to exclude the SYS and SYSTEM schemas, since the import is only for the one application schema I select? Did you mean using exclusion when doing the export at the DATABASE level?
Not necessarily: you could do a single export/import that specifies all 30 schemas, since the DBMS_DATAPUMP metadata filter and the expdp "SCHEMAS" argument can both take multiple values. Or you could do 30 exports/imports, or some combination of the two.
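For example, a single schema-mode export of three schemas can be sketched by passing an IN list to the SCHEMA_EXPR metadata filter. The job name, file names, and directory object below are illustrative placeholders:

```sql
declare
  l_dp_handle number;
begin
  -- Open one schema-mode export job covering several schemas.
  l_dp_handle := dbms_datapump.open(
    operation => 'EXPORT',
    job_mode  => 'SCHEMA',
    job_name  => 'MULTI_SCHEMA_EXPORT');
  -- Dump file and log file (names and directory are illustrative).
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'multi_schema.dmp',
    directory => 'TEST_DIR');
  dbms_datapump.add_file(
    handle    => l_dp_handle,
    filename  => 'multi_schema.log',
    directory => 'TEST_DIR',
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- A single SCHEMA_EXPR filter can name several schemas at once.
  dbms_datapump.metadata_filter(
    handle => l_dp_handle,
    name   => 'SCHEMA_EXPR',
    value  => 'IN (''TESTUSER1'', ''TESTUSER2'', ''TESTUSER3'')');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
end;
/
```

Importing the resulting dump file works the same way: a schema-mode import with no SCHEMA_EXPR filter loads every schema contained in the dump, so one import job can cover all three.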
Hi
Do you have an example or link that shows how to export/import multiple schemas using DBMS_DATAPUMP? Let us say you want to export/import three schemas: "testuser1", "testuser2", "testuser3".
Thanks,