Oracle source database
This page documents known issues you may encounter and suggested workarounds when migrating data from Oracle to YugabyteDB.
Contents
- Some numeric types are not exported
- RAW data is not imported in some cases
- A unique index which is also a primary key is not migrated
- Issue in some unsupported cases of GIN indexes
- Partition key column not part of primary key columns
- Negative scale is not supported
- Error in CREATE VIEW DDL in synonym.sql
- Large-sized CLOB/NCLOB data is not supported
Some numeric types are not exported
GitHub: Issue #207
Description: If the precision of a numeric attribute is less than its scale, the attribute fails to be imported to YugabyteDB.
Workaround: Manually remove the explicit precision and scale values from the exported numeric or decimal attributes, as PostgreSQL and YugabyteDB do not allow the precision to be set lower than the scale.
Example
An example schema on the source Oracle database is as follows:
CREATE TABLE numeric_size(
num_min number(1,-84),
num_max number(38,127),
numeric_min numeric(1,-84),
numeric_max numeric(38,127),
float_val FLOAT(5),
dec_min_Val dec(1,-84),
dec_max_Val dec(38,127),
decimal_min_Val decimal(1,-84),
decimal_max_Val decimal(38,127)
);
The exported schema is as follows:
CREATE TABLE numeric_size (
num_min real,
num_max decimal(38,127),
numeric_min real,
numeric_max decimal(38,127),
float_val double precision,
dec_min_val real,
dec_max_val decimal(38,127),
decimal_min_val real,
decimal_max_val decimal(38,127)
) ;
Suggested change to the schema is as follows:
CREATE TABLE numeric_size (
num_min real,
num_max decimal,
numeric_min real,
numeric_max decimal,
float_val double precision,
dec_min_val real,
dec_max_val decimal,
decimal_min_val real,
decimal_max_val decimal
) ;
RAW data is not imported in some cases
GitHub: Issue #584
Description: When attempting to migrate a (LONG) RAW attribute from an Oracle instance, you may encounter an invalid hexadecimal error.
Workaround: None. A workaround is currently being explored.
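For illustration, a source table of the following shape (hypothetical table and column names) may trigger this error during data import:
-- Hypothetical Oracle source table; the payload column uses RAW, which
-- may fail to import with an invalid hexadecimal error.
CREATE TABLE raw_data (
    id      NUMBER(6) PRIMARY KEY,
    payload RAW(2000)
);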
A unique index which is also a primary key is not migrated
GitHub: Issue #571
Description: If your Oracle schema contains a unique index and a primary key on the same set of columns, the unique index does not get exported.
Workaround: Manual intervention is needed. You have to manually add the unique index to the exported files.
Example
An example schema on the source database is as follows:
CREATE TABLE employees(employee_id NUMBER(6),email VARCHAR2(25));
CREATE UNIQUE INDEX EMAIL_UNIQUE ON employees (email) ;
ALTER TABLE employees ADD ( CONSTRAINT email_pk PRIMARY KEY (email));
Suggested change to the schema is to manually add the unique index to the exported files as follows:
CREATE UNIQUE INDEX email_unique ON public.employees USING btree (email);
Issue in some unsupported cases of GIN indexes
GitHub: Issue #724
Description: If the schema contains GIN indexes that are not supported by YugabyteDB, an error is displayed during the import schema step.
Workaround: Modify those indexes to cases supported by YugabyteDB, based on your database configuration.
Example
An example schema on the source database is as follows:
CREATE TABLE members(
member_id INT GENERATED BY DEFAULT AS IDENTITY,
first_name VARCHAR2(100) NOT NULL,
last_name VARCHAR2(100) NOT NULL,
gender CHAR(1) NOT NULL,
dob DATE NOT NULL,
email VARCHAR2(255) NOT NULL,
PRIMARY KEY(member_id)
);
CREATE BITMAP INDEX members_gender_bm_index on members(gender,member_id);
The exported schema is as follows:
CREATE TABLE members (
member_id bigint GENERATED BY DEFAULT AS IDENTITY (START WITH 1 INCREMENT BY 1 MAXVALUE 9223372036854775807 MINVALUE 1 NO CYCLE CACHE 20 ),
first_name varchar(100) NOT NULL,
last_name varchar(100) NOT NULL,
gender char(1) NOT NULL,
dob timestamp NOT NULL,
email varchar(255) NOT NULL,
PRIMARY KEY (member_id)
) ;
CREATE INDEX members_gender_bm_index ON members USING gin(gender, member_id);
Error when importing the schema is as follows:
ERROR: data type character has no default operator class for access method "ybgin" (SQLSTATE 42704)
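One possible modification, assuming a regular index on the same columns is acceptable for your queries, is to replace the unsupported GIN index with a plain index:
-- Sketch of a possible rewrite (assumption: a regular index is an
-- acceptable substitute for the GIN index in your workload).
CREATE INDEX members_gender_bm_index ON members (gender, member_id);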
Partition key column not part of primary key columns
GitHub: Issue #578
Description: In YugabyteDB, if a table is partitioned on a column, then that column needs to be a part of the primary key columns. Creating a table where the partition key column is not part of the primary key columns results in an error.
Workaround: Add all partition columns to the primary key columns.
Example
An example exported schema is as follows:
CREATE TABLE employees (
employee_id integer NOT NULL,
first_name varchar(20),
last_name varchar(25),
email varchar(25),
phone_number varchar(20),
hire_date timestamp DEFAULT statement_timestamp(),
job_id varchar(10),
salary double precision,
part_name varchar(25),
PRIMARY KEY (employee_id)) PARTITION BY RANGE (hire_date) ;
The preceding example will result in an error as follows:
ERROR: insufficient columns in the PRIMARY KEY constraint definition
DETAIL: PRIMARY KEY constraint on table "employees" lacks column "hire_date" which is part of the partition key.
An example table with the suggested workaround is as follows:
CREATE TABLE employees (
employee_id integer NOT NULL,
first_name varchar(20),
last_name varchar(25),
email varchar(25),
phone_number varchar(20),
hire_date timestamp DEFAULT statement_timestamp(),
job_id varchar(10),
salary double precision,
part_name varchar(25),
PRIMARY KEY (employee_id, hire_date)
) PARTITION BY RANGE (hire_date) ;
Negative scale is not supported
GitHub: Issue #779
Description: Oracle supports a negative scale, which rounds values to the corresponding power of ten (for example, a scale of -2 rounds to hundreds). Negative scale is not supported in PostgreSQL, and therefore not in YugabyteDB.
Workaround: Remove the precision/scale from the exported schema, or change to any other supported datatype.
Example
An example source schema is as follows:
CREATE TABLE num_check (n1 number(5,-2));
An example exported schema is as follows:
CREATE TABLE num_check (n1 decimal(5,-2));
An example table with the suggested workaround is as follows:
CREATE TABLE num_check (n1 decimal);
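Note that the plain decimal column does not reproduce the rounding Oracle applies with a negative scale. A minimal sketch of the difference, assuming the tables above:
-- Illustration only: Oracle rounds on insert with negative scale, while
-- the plain decimal column in the workaround stores the value as-is.
INSERT INTO num_check (n1) VALUES (1994);
-- Oracle number(5,-2):   stored as 2000 (rounded to hundreds)
-- YugabyteDB decimal:    stored as 1994 (no rounding applied)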
Error in CREATE VIEW DDL in synonym.sql
GitHub: Issue #673
Description: When exporting synonyms from Oracle, the CREATE OR REPLACE VIEW DDLs are exported with the fully qualified name of the object. The schema name in these DDLs is the schema in which the synonym is present in Oracle, but a schema with that name may not exist in the target YugabyteDB database, so import schema fails with a does not exist error.
Workaround: Manual intervention is needed. You can resolve the issue with one of the following options:
- Create the target schema with the name mentioned in the object names of the DDLs present in synonym.sql.
- Remove the schema name from all the object names in the DDLs.
Example
An example DDL on the source schema test is as follows:
CREATE OR REPLACE PUBLIC SYNONYM pub_offices for offices;
An example exported schema is as follows:
CREATE OR REPLACE VIEW test.offices AS SELECT * FROM test.locations;
Suggested changes to the schema are as follows:
- Execute the following DDL on the target database:
CREATE SCHEMA test;
OR
- Modify the DDL by removing the schema name test from the DDL:
CREATE OR REPLACE VIEW offices AS SELECT * FROM locations;
Large-sized CLOB/NCLOB data is not supported
GitHub: Issue #385
Description: YugabyteDB Voyager ignores any values of BLOB types by default, but for CLOB and NCLOB, it migrates the data as text. However, if the size of the rows in such CLOB/NCLOB columns exceeds 240 MB, it may result in errors and the migration may fail.
Workaround: None. A workaround is currently being explored.
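For illustration, a source table of the following shape (hypothetical names) has its notes column migrated as text; rows whose CLOB data exceeds roughly 240 MB may cause the migration to fail:
-- Hypothetical Oracle source table; the CLOB column is migrated as text.
-- Rows with more than ~240 MB in this column may result in errors.
CREATE TABLE documents (
    doc_id NUMBER(10) PRIMARY KEY,
    notes  CLOB
);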