Easy Listening with Datapump in the SQLDeveloper DBA Module

There are a number of ways to transfer data between Oracle Databases, one of which is to use the PL/SQL Datapump API – DBMS_DATAPUMP.
If you wish to avail yourself of this utility but find the syntax a bit fiddly, you always have the option of getting SQLDeveloper to do (most of) it for you.
What we’re talking about here is how to persuade the SQLDeveloper DBA module to :

  • Create and execute a custom Datapump export job
  • Do most of the work of creating an import of a subset of the exported data

The Application

I’ve got a simple application which consists of a single table and some data :

select title, artist, original_artist, is_mellow
from mike.tracks
/

TITLE                          ARTIST                ORIGINAL_ARTIST                                    IS_MELLOW
------------------------------ --------------------- -------------------------------------------------- ---------
MOTORHEAD                      COURDUROY             HAWKWIND                                           Y        
MOTORHEAD                      MOTORHEAD             HAWKWIND                                           N        
THUNDERSTRUCK                  STEVE N SEAGULLS      AC/DC                                              Y        
WHISKY IN THE JAR              METALLICA             THIN LIZZY                                         N        
HIGHWAY TO HELL                CARLA BRUNI           AC/DC                                              Y        
SHIPPING UP TO BOSTON          GODDESSES OF BAGPIPES DROPKICK MURPHYS                                   Y        
ANOTHER GIRL ANOTHER PLANET    BLINK 182             THE ONLY ONES                                      N        
SMOKE ON THE WATER             PAT BOONE             DEEP PURPLE                                        Y        
EMERGENCY                      MOTORHEAD             GIRL SCHOOL                                        N        
SMELLS LIKE TEEN SPIRIT        2 CELLOS              NIRVANA                                            Y        
WHOLE LOTTA LOVE               CHICO ARNEZ           LED ZEPPLIN                                        Y        
KING OF TWILIGHT               IRON MAIDEN           NEKTAR                                             N        
PARANOID                       GUNTER NORIS          BLACK SABBATH                                      Y        
C'MON FEEL THE NOISE           OASIS                 SLADE                                              N        

14 rows selected. 
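
For reference, the table itself is nothing elaborate – something along these lines (the column types are my assumption rather than being taken from the real DDL) :

create table mike.tracks (
    title           varchar2(100),
    artist          varchar2(100),
    original_artist varchar2(100),
    is_mellow       varchar2(1)
)
/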


The Datapump Export

Note that I’ve deliberately kept things simple in this example. If you were looking at ways of transferring this volume of data, then Datapump is probably overkill.

Anyhow, I want to export this table and all of its data. To do this in SQLDeveloper, connected as a DBA user, I can go to the View menu and select DBA.
In the DBA tree, I need to right-click the Data Pump node and select the Data Pump Export Wizard :

This brings up the first screen of the Export Wizard. Slightly confusingly, it’s labelled as Step 2.
The table I want to export is in a different schema from the one I’m connected as, so I select Schemas from the radio group :

After clicking the Next button we get the chance to choose one or more schemas to include in the export. In my case, I just want the one schema :

The next screen affords us the opportunity to include or exclude types of object. The Filter Types are the same as those you’d see in the export log file :

On this occasion however, we’re not going to use these filters, so the screen for this step looks like this :
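
Incidentally, these Filter Types translate into metadata filters in the DBMS_DATAPUMP API. Had we wanted to exclude, say, indexes, the generated code would include a call along these lines (illustrative only – h1 being the job handle) :

dbms_datapump.metadata_filter(handle => h1, name => 'EXCLUDE_PATH_EXPR', value => q'[like 'INDEX%']');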

The next screen allows us to select the specific tables we want to include in the export.

If I enter a search string in the Name field and click the Lookup button, I will eventually get a list of matching tables :
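
I’d imagine that, under the covers, this lookup amounts to little more than a data dictionary query – something like this (the search string here is hypothetical) :

select owner, table_name
from dba_tables
where table_name like 'TRACK%'
/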

I can then highlight the required table(s) and add them to the list for export by clicking the down arrow…

In this case I want to export all of the data in all of the columns in the table so I have not specified a column list or a predicate.

Next, I can specify the name and location of a logfile for the export.
I’ve left the other options on this screen at their default setting…

I now need to specify the name and location of the dump file. Once again, all other settings are left at the default :

I’d like to run the job immediately so I don’t need to change any of the scheduling information :

Finally, I can see a summary of the job :

Interestingly, if I click on the PL/SQL tab, I can see the code that SQLDeveloper has generated for this export :
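
In essence, it follows the same pattern as the import script we’ll see later. A rough sketch (the job name and file names here are illustrative rather than SQLDeveloper’s verbatim output) :

declare
    h1 number;
begin
    -- Open a schema-mode Datapump export job
    h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'MIKE_EXPORT', version => 'COMPATIBLE');
    dbms_datapump.set_parallel(handle => h1, degree => 1);
    -- One log file and one dump file (the Wizard appends a timestamp to the dump file name)
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    -- Restrict the job to the MIKE schema and the TRACKS table
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')');
    dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')');
    -- ...and set it running
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
    dbms_datapump.detach(handle => h1);
end;
/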

That’s quite interesting. Maybe this feature will come in handy in a bit.
For now though, I want to run the export immediately, so I just click the Finish button and SQLDeveloper starts the job whilst providing me with a nice interactive view of it :

If I go to the Data Pump/Export Jobs node in the DBA tree, I can see a record of the job.
It will also display the contents of the log file :
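
Incidentally, whilst the job exists (i.e. whilst it’s running, or whilst its master table is retained), it’s also visible from SQL :

select owner_name, job_name, operation, job_mode, state
from dba_datapump_jobs
/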

If we now check the directory on the OS – the path for which can be found with…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH                                              
------------------------------------------------------------
/opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

… we can see that the export file has been created, with a timestamp appended in the filename :

cd /opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C

ls -l
-rw-r-----. 1 oracle oinstall 331776 Mar 28 12:30 mike_tracks-12_30_11.dmp

We need to make a note of the file name because we’ll need it when we come to do…

The Datapump Import

I want to import this table into the HR schema. However, I don’t want all of the data, just the mellow tracks because I really need something to chill out to. Oh, and drown out the sound of the country going down the toilet.

Fortunately, SQLDeveloper can get us most of the way.

Once again, we need to right-click the Data Pump node of the DBA tree and select Data Pump Import Wizard.
We need to start by telling the Wizard the type of import we want to do (Tables) and the name of the dump file (the export file we’ve just created) :

Next, we select the table using the shuttle buttons :

We want to import into the HR schema rather than the original schema :

I only want to create the table if it does not already exist :

Once again, I don’t want to schedule the job :

Finally, we have the summary :

You’ll notice that, unlike its export counterpart, the import Wizard has not given us the option to restrict which table rows we will be importing.
However, we do still get the code that has been generated in the PL/SQL tab of the summary window :

This time, rather than executing the job, I’m going to copy the code into an editor and add a manual “tweak” – namely :

    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');

The full script now looks like this :

set scan off
set serveroutput on
set escape off
whenever sqlerror exit 
DECLARE
    s varchar2(1000); 
    h1 number;
    errorvarchar varchar2(100):= 'ERROR';
    tryGetStatus number := 0;
begin
    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'TABLE', job_name => 'mike_playlist', version => 'COMPATIBLE'); 
    tryGetStatus := 1;
    dbms_datapump.set_parallel(handle => h1, degree => 1); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks_imp.log', directory => 'DATA_PUMP_DIR', filetype=>DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE); 
    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1); 
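    -- Identify the schema and table to read from the dump file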
    dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')'); 
    dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')'); 
    dbms_datapump.add_file(handle => h1, filename => 'mike_tracks-12_30_11.dmp', directory => 'DATA_PUMP_DIR', filetype => 1); 
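    -- Import MIKE's objects into the HR schema instead of the original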
    dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'MIKE', value => UPPER('HR') ); 
    
    --
    -- Make sure we only get the mellow tracks because I really need to chill out...
    --
    dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');
    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1); 
    dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC'); 
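    -- SKIP : only create the table if it doesn't already exist (as selected in the Wizard)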
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP'); 
    dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0); 
    dbms_datapump.detach(handle => h1); 
    errorvarchar := 'NO_ERROR'; 
EXCEPTION
    WHEN OTHERS THEN
    BEGIN 
        IF ((errorvarchar = 'ERROR')AND(tryGetStatus=1)) THEN 
            DBMS_DATAPUMP.DETACH(h1);
        END IF;
    EXCEPTION 
    WHEN OTHERS THEN 
        NULL;
    END;
    RAISE;
END;
/

I can now run this code in a SQLDeveloper Worksheet…

Whilst the feedback is not quite as good in this environment, you can still find the job itself in the DBA module, where you can see all of the details…

If I now check the new HR.TRACKS table :
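
…for example, with a quick count, I can see that only the mellow tracks have made it across :

select count(*)
from hr.tracks
/

  COUNT(*)
----------
         8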

Appropriately enough given current events, eight tracks is the precise number needed to form a Desert Island Discs playlist.
