Migrating Oracle Enterprise Edition database To Standard Edition using Transportable Tablespaces


Introduction

Oracle Database Standard Edition (SE) has the main features of an RDBMS database and costs significantly less than Oracle Enterprise Edition (EE).

Migrating a database from Oracle Enterprise Edition to Standard Edition can be a consideration if the Enterprise Edition options or features are not used (or not required) in the current database.

A query on view DBA_FEATURE_USAGE_STATISTICS can provide detailed information on the use of options or features in the database.

As mentioned in the Database Upgrade Guide, the only supported method to convert an Enterprise Edition database to a Standard Edition database is to create a new database and use (Data Pump) Export/Import to migrate the data.

Database Upgrade Guide

Chapter 1: Introduction to Upgrading Oracle Database
  About Converting Databases During Upgrades
    Migrating from Enterprise Edition to Standard Edition of Oracle Database
https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/migrating-from-enterprise-to-standard-database.html

Doc ID: 139642.1 Converting An Enterprise Edition Database To Standard Edition

However, this is easier said than done.
When starting this operation for one of our customers, we faced several challenges:

  • The new SE database should be the same as the original in sizing, performance, character set, etc.,

  • The size of the database is 600 GB,
  • (C)LOB objects, which have an impact on export/import performance,
  • Oracle Forms objects in the SYSTEM schema,
  • We should keep the EE database intact for a while,
  • And most of all, there should be as little downtime as possible.

The applications using the EE database are not available during the migration, and initial tests showed that downtime would be considerable if we simply used a full Oracle Data Pump export/import or a schema-level export/import.

Therefore we proceeded with a method of creating a new Standard Edition home (using OUI) and an empty database (using DBCA), and used Transportable Tablespaces for all tables and indexes, followed by a full export/import that skips the already present tables and indexes.

We did this operation twice. First with a copy of the EE production database to a new production-like acceptance DB on another system; hence the file copy was an SCP copy over the network. The second operation was EE Prod to SE Prod on the same system. We later also renamed the SE Prod database back to its original name and file location, but this is not discussed further in this article. In major steps, leaving out most details, we did the following:

Inventory the Enterprise Edition database

1. Perform a verification of the used features. If Enterprise Edition options are actually in use, then this is a show stopper which has to be dealt with first.

Querying the view DBA_FEATURE_USAGE_STATISTICS will show the available features and whether they are being used.
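For example, a query like the following (a minimal sketch; the exact columns and filters may vary per database version):

SQL> select name, version, detected_usages, currently_used
     from dba_feature_usage_statistics
     where detected_usages > 0
     order by name;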

Use the Database Licensing Information User Manual for more information on these options and features.

2. Verify whether Transportable Tablespaces can be used.

Use package DBMS_TTS.TRANSPORT_SET_CHECK

SQL> execute dbms_tts.transport_set_check('app1_data,app2_data,app3_data,app1_index,app2_index,app3_index', true);

SQL> select * from transport_set_violations;

The metadata export can further be tested using the Data Pump option TTS_CLOSURE_CHECK=TEST_MODE.
For this test, the tablespaces do not have to be set to READ ONLY, and the created dump file is not valid, as mentioned in the log output.

Perform a test Data Pump export using a parameter file:
Example:

expdp parfile=export_TEST_metadata.par
--
userid=system 
transport_full_check=y
metrics=y
directory=DP_DIR
logfile=export_TEST_metadata.log
transport_tablespaces=(APP1_DATA,APP2_DATA,APP3_DATA,APP1_INDEX,APP2_INDEX,APP3_INDEX)
logtime=all
dumpfile=export_TEST_metadata.dmp
tts_closure_check=TEST_MODE
--

As the log mentions, this dump file is for testing only:

09-MAR-22 13:03:55.518: Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
09-MAR-22 13:03:55.520: /u01/app/oracle/admin/prdnew/dpdump/export_TEST_metadata.dmp
09-MAR-22 13:03:55.520: Dump file set is unusable. TEST_MODE requested.

Install the SE home

We need an SE Oracle Home. Run the universal installer after unzipping the database software into the right location.

When running the Universal Installer, choose "software only"; we created the database later.

Remember to tick the right box so that a Standard Edition home is installed.

When the Oracle Home is installed, you need to patch the new SE home to the same level as the EE home. Use opatch lsinventory to check the latest patch installed in the EE home.
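For example, to check the patch level of the EE home (the path is illustrative):

$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_ee
$ $ORACLE_HOME/OPatch/opatch lsinventory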

Or install everything in one go:
./runInstaller -applyRU <patch_location>  -applyOneOffs <patch_location>

Create empty Standard Edition database

Before we create the new SE database we need to record the current settings of the EE database (initialization parameters, character set, etc.), because we need to change the default settings of the SE database so that it is exactly the same as the old one.

Now we can create a database in the new Oracle Home using DBCA, using the recorded settings to configure it correctly. When the creation is complete, it is advisable to check the settings again using the pfile.

On both databases, create a pfile from the spfile and verify the pfile; make changes if needed.
If needed, stop the SE database and start it with the new pfile, then create an spfile and restart again.
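A minimal sketch of this round trip on the SE database (the instance name PRD02 and the pfile path are illustrative):

SQL> create pfile from spfile;
-- edit the pfile as needed, then:
SQL> shutdown immediate
SQL> startup pfile='/u01/app/oracle/product/19.0.0/dbhome_se/dbs/initPRD02.ora'
SQL> create spfile from pfile='/u01/app/oracle/product/19.0.0/dbhome_se/dbs/initPRD02.ora';
SQL> shutdown immediate
SQL> startup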

It is important to check how the service_name is defined in the EE database and make sure it is the same on the new SE database. We found that applications were not able to connect when the case (uppercase or lowercase) was different.

SQL> alter system set service_names='<SERVICE_NAME>' scope=spfile;
SQL> shutdown immediate
SQL> startup

It is advisable to set the job_queue_processes parameter to zero, to prevent jobs from running and possibly manipulating data before we go live with this SE database.
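For example:

SQL> alter system set job_queue_processes=0 scope=both;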

Now that we have a new SE database, we can create tablespaces and users.
You can use any method you want to extract the DDL for the users from the EE database and to create them in the SE database.

Because of the use of Transportable Tablespaces (TTS), we have to create the schema owners before the import. The schema owners are the users that own tables which are migrated/imported using the TTS import.

The schema owners may have default tablespaces. These tablespaces are pre-created (1 datafile, size 100K) just to be able to create all schema owners; after the owners/users have been created, the small tablespaces have to be dropped again, as sketched below.
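A minimal sketch of this pre-create/drop cycle (the user DDL, datafile path and password are illustrative; the 100K size follows the text):

SQL> create tablespace app1_data datafile '/ora1/oradata/PRD02/app1_data_pre.dbf' size 100K;
SQL> create user app1 identified by <password> default tablespace app1_data;
SQL> drop tablespace app1_data including contents and datafiles;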

Another step is to check whether application data has been put into the USERS tablespace. If this is the case, we need to remove the USERS tablespace on the SE database as well.

Normally the USERS tablespace cannot be removed because it is the database default tablespace. A workaround is to create a new users tablespace with a different name, set it as the default, and then remove the old USERS tablespace:
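For example (the users_temp name follows the article; datafile path and size are illustrative):

SQL> create tablespace users_temp datafile '/ora1/oradata/PRD02/users_temp.dbf' size 100M;
SQL> alter database default tablespace users_temp;
SQL> drop tablespace users including contents and datafiles;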

Once the migration is finished, the database default tablespace can be set back to the original USERS tablespace and this users_temp tablespace can be dropped.

Verify and copy the EE_HOME/network/admin SQL*Net files to the new home's network/admin.
Verify and copy the EE_HOME/sqlplus/admin/glogin.sql to the new home's sqlplus/admin.

Verify the character set and dbtimezone; if needed, set the time_zone correctly.
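For example (the time zone value is illustrative; ALTER DATABASE SET TIME_ZONE requires a restart and fails if populated TIMESTAMP WITH LOCAL TIME ZONE columns exist):

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';
SQL> select dbtimezone from dual;
SQL> alter database set time_zone = 'Europe/Amsterdam';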

Further preparation

Once the new (empty) SE database is available we start to prepare it to receive the imports.

Some objects are not migrated by a full export/import, or it may be necessary to review and change them, like ACLs, public database links, directory objects, public synonyms, profiles, roles, etc.
We created these objects using scripts in the SE database before we started the actual migration.

Also, the Oracle Forms view SYSTEM.FRM50_ENABLED_ROLES and its grants and synonyms have to be created separately.

For extracting DDL statements from the source EE DB we used a set of scripts, some of which are based on the scripts from Tim Hall's website: https://oracle-base.com.

We create a database link from the SE DB to the EE DB for the full import and for verifications afterwards.
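For example (the link name TO_PRD01 is the one used in the full import parameter file below; the TNS alias and password are illustrative):

SQL> create database link TO_PRD01 connect to system identified by <password> using 'PRD01';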

Create directory objects in the source EE and target SE databases for the location of the dump files and log files.
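For example, in each database (the path is the dump location shown in the log output above; adjust per database):

SQL> create or replace directory DP_DIR as '/u01/app/oracle/admin/prdnew/dpdump';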

Migration

We perform two export/import operations:

A. The metadata export/import of transportable tablespaces, with an OS (disk/network) copy of the datafiles which contain the tables and indexes. The bulk of all the data is transferred by a file copy on disk/network.

B. Once finished, a full export/import is done to transfer all other objects (user PL/SQL, sequences, jobs, etc.) using TABLE_EXISTS_ACTION=SKIP and a database link.

Once finished, we run checks, add missing grants, run utlrp.sql, gather dictionary statistics, etc.

Start of the migration

1. On the source, the applications are stopped; make sure all users are disconnected and disable the job queue.

2. In the source Enterprise Edition database, all user tablespaces are set to READ ONLY.
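For example, for each user tablespace:

SQL> alter tablespace app1_data read only;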

3. Create a metadata Transportable Tablespaces export of the user tablespaces.

Example:

expdp parfile=export_PROD_metadata.par
 
---
userid=system
metrics=y
directory=DP_DIR
logfile=export_PROD_metadata.log
transport_full_check=y
transport_tablespaces=(APP1_DATA,APP2_DATA,APP3_DATA,APP1_INDEX,APP2_INDEX,APP3_INDEX)
logtime=all
dumpfile=export_PROD_metadata.dmp
---

4. In our case we keep the EE database intact for fallback and testing.

We copy the datafiles of all user tablespaces (all tablespaces except SYSTEM, SYSAUX, UNDO and TEMP) to the datafile location of the new SE database.

5. Import of the TTS metadata with the new datafile location

This will import the DDL of the tablespaces and the tables and indexes which are in the copied datafiles.

Example:

impdp parfile=import_PROD_metadata.par

----
userid=system
metrics=y
directory=DP_DIR
logfile=import_PROD_metadata.log
logtime=all
dumpfile=export_PROD_metadata.dmp
TRANSPORT_DATAFILES='/ora1/oradata/PRD02/app1_data.dbf'\, 
'/ora1/oradata/PRD02/app2_data.dbf'\, 
'/ora1/oradata/PRD02/app3_data.dbf'\, 
'/ora1/oradata/PRD02/app1_index.dbf'\, 
'/ora1/oradata/PRD02/app2_index.dbf'\, 
'/ora1/oradata/PRD02/app3_index.dbf'
----

6. Set the TTS tablespaces to READ WRITE

In the source EE database AND the target SE database, set all the tablespaces to READ WRITE.

On the target SE database the tablespaces are READ ONLY after the import; this needs to be changed to READ WRITE to receive the next (full) import.
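For example, in both databases, for each transported tablespace:

SQL> alter tablespace app1_data read write;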

At first, we kept the tablespaces READ ONLY in the EE database; however, this leads to a problem with the full export/import. Not all objects are exported/imported if the tablespaces at the source are still READ ONLY.

In particular, scheduler job arguments and scheduler program arguments are NOT exported when tablespaces in the source are READ ONLY.

. . imported "APP1"."SCHEDULER$_JOB_ARG"          1 rows
. . imported "APP1"."SCHEDULER$_PROGRAM_ARG"      5 rows
. . imported "APP2"."SCHEDULER$_JOB_ARG"          6 rows
. . imported "APP2"."SCHEDULER$_PROGRAM_ARG"      3 rows

7. Start the full export/import using a database link.

All the other objects of a full export/import are now added.
The TABLE_EXISTS_ACTION=SKIP parameter will skip all the tables and indexes that are already in the copied datafiles and imported tablespaces.

This saves many hours of export/import elapsed time by skipping the table and index data, which forms the bulk of the export/import and has already been transferred.

impdp parfile=import_PROD_all_FULL.par
 
--
userid=system
full=y
directory=DP_DIR
logfile=imp_PROD_all_FULL.log
network_link=TO_PRD01
flashback_time=systimestamp
TABLE_EXISTS_ACTION=SKIP
---

8. Verify the full export/import.

Check the Data Pump output and logfile; there may be things that need to be corrected or (re)created.

Verify that all the user objects match, as with a normal full export/import.

9. Recompile invalid objects.

Run utlrp.sql in the SE database to recompile invalid objects.
Then check for remaining invalid objects and fix them if needed, as sketched below.
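For example:

SQL> @?/rdbms/admin/utlrp.sql
SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID';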

In our case we also had to remove the interMedia/Text option.

10.  Post Actions

We had kept the SE database in noarchivelog mode; now the SE database is set to archivelog mode again, although noarchivelog mode may have been unnecessary given the much smaller impact of the full import.
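A minimal sketch for re-enabling archivelog mode:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;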
We now set the (user) tablespaces of the source EE database back to READ ONLY to avoid mistakes or wrong connections; the EE database is kept on the system for a while for verifications.

Start a full backup of the SE database. We also renamed the EE and SE databases, so the SE database ended up with the original name of the EE database, and we later moved the datafiles to the original EE location. But this goes beyond the scope of this article.

If you have any questions, feel free to contact us at AMIS.
