Oracle to Azure Database for PostgreSQL
- Anand Nerurkar
Executive summary (1 line)
Use Azure DMS to convert schema and perform full load + CDC from Oracle ExaCC to Azure Database for PostgreSQL (Flexible Server); refactor PL/SQL where required; validate parity with checksums and business totals; cutover with minimal downtime.
Pre-requisites (do these first)
Network: Private connectivity (ExpressRoute or site-to-site VPN) between ExaCC and Azure VNet.
Azure subscription with rights to create Resource Groups, VNet, DMS, PostgreSQL Flexible Server.
Oracle DB in ARCHIVELOG mode (required for LogMiner CDC).
Migration accounts:
Oracle: MIGR_USER with SELECT access and LogMiner/dictionary privileges; supplemental logging enabled.
Azure PostgreSQL admin user.
Tools on migration bastion: sqlplus, psql, ora2pg (optional), Azure CLI (az) installed.
Teams: Oracle DBA, Postgres DBA, App Dev, QA, SRE, Security.
High-level phases
Provision Azure infra (VNet, PG Flexible Server, DMS).
Prepare Oracle (ARCHIVELOG, supplemental logging, migration user).
Convert schema (DMS/Ora2Pg) and apply to PG (dev/test).
Perform full data load via DMS.
Enable CDC for continuous sync.
App remediation (drivers, SQL fixes, PL/SQL refactor).
Validation (counts, checksums, business totals, performance).
Cutover (freeze, final sync, switch), hypercare.
Post-migration hardening & decommission.
Phase 0 — Important Decisions (before you start)
Target: Azure Database for PostgreSQL — Flexible Server recommended for production.
Cutover strategy: Online (CDC) chosen here. If downtime is acceptable, choose Offline (faster, simpler).
Schema conversion tool: Use Azure DMS schema assessment (uses Ora2Pg) + manual fixes for PL/SQL. Optionally run ora2pg locally for extra control.
Phase 1 — Provision Azure infrastructure
1. Create Resource Group and VNet (CLI)
az group create -n rg-mig-abc -l eastus
az network vnet create -g rg-mig-abc -n vnet-mig --address-prefixes 10.10.0.0/16 \
  --subnet-name subnet-dms --subnet-prefixes 10.10.1.0/24
2. Create Azure Database for PostgreSQL – Flexible Server (Portal or CLI)
Example CLI (adjust SKU, version, storage):
az postgres flexible-server create \
  --resource-group rg-mig-abc \
  --name pg-lending-prod \
  --location eastus \
  --sku-name Standard_D4s_v3 \
  --tier GeneralPurpose \
  --storage-size 512 \
  --admin-user pgadmin \
  --admin-password 'StrongPass@123' \
  --vnet vnet-mig \
  --subnet subnet-pg
Use VNet integration or a private endpoint. Note that Flexible Server needs its own subnet, delegated to Microsoft.DBforPostgreSQL/flexibleServers (subnet-pg above; the create command will provision and delegate it if it does not exist). It cannot share subnet-dms with DMS.
Configure pg_stat_statements and set wal_level = logical if using logical replication. On Flexible Server these are server parameters, changed via the portal or az postgres flexible-server parameter set; wal_level takes effect after a restart.
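Once set, a quick verification from psql (wal_level should read logical, and shared_preload_libraries should include pg_stat_statements):
-- Verify server parameters took effect after the restart
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'shared_preload_libraries');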
3. Provision Azure Database Migration Service (DMS)
Create DMS instance in same VNet/subnet:
Portal: Create a resource > Database Migration Service > provide name, SKU, VNet/subnet. Or CLI (example):
az dms create -g rg-mig-abc -n dms-abc --location eastus --sku-name Premium_4vCores \
  --subnet /subscriptions/<sub>/resourceGroups/rg-mig-abc/providers/Microsoft.Network/virtualNetworks/vnet-mig/subnets/subnet-dms
4. Supporting resources
Key Vault for secrets, Log Analytics workspace for monitoring, Storage account for any exports.
Phase 2 — Prepare the Oracle source
1. Check ARCHIVELOG mode
SQL> SELECT LOG_MODE FROM V$DATABASE;
If not in ARCHIVELOG mode, coordinate downtime to enable it.
2. Enable supplemental logging (required for CDC)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE LENDING.LOAN_APPLICATION ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
-- Repeat as needed for large tables or set at DB level.
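To confirm supplemental logging took effect:
-- SUPPLEMENTAL_LOG_DATA_MIN should be YES; table-level (ALL) log groups are listed in DBA_LOG_GROUPS
SELECT supplemental_log_data_min, supplemental_log_data_all FROM V$DATABASE;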
3. Create migration user and grant required privileges
CREATE USER MIGR_USER IDENTIFIED BY "StrongPass!";
GRANT CREATE SESSION TO MIGR_USER;
GRANT SELECT ANY DICTIONARY TO MIGR_USER;
GRANT SELECT ANY TABLE TO MIGR_USER;
GRANT EXECUTE ON SYS.DBMS_LOGMNR TO MIGR_USER;
GRANT SELECT ON V_$DATABASE TO MIGR_USER;
-- Additional roles may be necessary for LogMiner
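On Oracle 12c and later, LogMiner access is also gated by the LOGMINING system privilege; if the CDC tool reports missing LogMiner rights, grants along these lines usually cover it (verify against your version's documentation):
GRANT LOGMINING TO MIGR_USER;
GRANT EXECUTE_CATALOG_ROLE TO MIGR_USER;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO MIGR_USER;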
4. Collect baseline metrics
Top 50 queries (AWR), table sizes, peak TPS, batch windows, growth rates.
These feed sizing for PG and DMS parallelism settings.
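A simple starting point for the size baseline (Oracle 12c+ syntax; adjust the owner as needed):
-- Largest segments in the LENDING schema, in GB
SELECT segment_name, segment_type,
       ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
FROM dba_segments
WHERE owner = 'LENDING'
GROUP BY segment_name, segment_type
ORDER BY size_gb DESC
FETCH FIRST 50 ROWS ONLY;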
Phase 3 — Schema conversion (Azure DMS / Ora2Pg)
Option A — Use Azure DMS schema assessment (recommended)
In Azure Portal, open your DMS instance → Create Migration Project → choose Oracle to Azure Database for PostgreSQL.
Run Schema Assessment/Conversion. DMS will produce converted DDL and a report of incompatible objects.
Review converted DDL: check types (NUMBER→NUMERIC/INT), DATE→TIMESTAMP, CLOB→TEXT, sequences→IDENTITY, triggers.
Manual fixups: PL/SQL packages, CONNECT BY, MERGE semantics, materialized views, and proprietary functions.
Option B — Run ora2pg locally for control
ora2pg config example:
ORACLE_DSN dbi:Oracle:host=exacc-scan;sid=ORCL;port=1521
ORACLE_USER MIGR_USER
ORACLE_PWD StrongPass!
SCHEMA LENDING
TYPE TABLE,SEQUENCE,VIEW,FUNCTION,PROCEDURE,TRIGGER
PARALLEL_JOBS 8
OUTPUT ./out
Commands:
ora2pg -c ora2pg.conf -t TABLE -o schema_tables.sql
ora2pg -c ora2pg.conf -t SEQUENCE -o sequences.sql
Apply DDL to dev PG:
psql "host=<pg-host> dbname=lending user=pgadmin" -f schema_tables.sql
Phase 4 — Full data load with Azure DMS
1. Create DMS migration task (Portal)
Project → Add Migration Task → Choose Online data migration (full load + CDC).
Source: Oracle connection string (SCAN/service name): host=<exacc-scan> port=1521 service=ORCL
Target: Azure PostgreSQL connection (private endpoint)
Select objects (schemas/tables) — for pilot pick LENDING schema
Options:
Disable indexes & constraints during load (recommended), and have DMS recreate them after the load.
Parallel table copy workers — tune per network/IO capacity.
LOB handling mode: full copy.
2. Start the migration task (Full load)
Monitor DMS UI: job status, throughput (rows/sec), errors.
For large tables, consider chunking into ranges if necessary.
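One way to derive roughly equal ID ranges for chunking (a sketch, assuming a numeric key column named ID):
-- 16 buckets of roughly equal row counts; use MIN/MAX per bucket as chunk boundaries
SELECT bucket, MIN(id) AS range_start, MAX(id) AS range_end, COUNT(*) AS rows_in_chunk
FROM (SELECT id, NTILE(16) OVER (ORDER BY id) AS bucket
      FROM LENDING.LOAN_APPLICATION)
GROUP BY bucket
ORDER BY bucket;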
3. After full load finishes
DMS will enable CDC (LogMiner) and begin applying deltas to PG.
Monitor CDC apply rate and lag.
Phase 5 — Application remediation (parallel work while CDC runs)
1. Update JDBC and application configs in dev/stage
Replace Oracle driver with PostgreSQL driver: org.postgresql.Driver
Parameterize connection strings and secrets via Key Vault.
2. SQL dialect fixes (examples)
NVL(col, val) → COALESCE(col, val)
SYSDATE → now()
ROWNUM → LIMIT / window functions
CONNECT BY → recursive CTEs
MERGE → INSERT ... ON CONFLICT or upsert logic
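To make the MERGE mapping concrete, a hedged before/after sketch (table and column names are illustrative; PostgreSQL 15+ also supports MERGE directly, so the rewrite may be optional on newer versions):
-- Oracle:
-- MERGE INTO loan_application t
-- USING (SELECT :id AS id, :amount AS amount FROM dual) s
-- ON (t.id = s.id)
-- WHEN MATCHED THEN UPDATE SET t.amount = s.amount
-- WHEN NOT MATCHED THEN INSERT (id, amount) VALUES (s.id, s.amount);

-- PostgreSQL equivalent (requires a unique constraint or PK on id):
INSERT INTO lending.loan_application (id, amount)
VALUES ($1, $2)
ON CONFLICT (id)
DO UPDATE SET amount = EXCLUDED.amount;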
3. Oracle PL/SQL conversion strategy
Business logic in complex PL/SQL → refactor into microservices (recommended).
Simple set-based routines → PL/pgSQL functions.
Use vFunction to identify extraction candidates and scaffold services.
4. Replace triggers with outbox pattern
Instead of trigger-driven side-effects, write to outbox table and a dispatcher publishes to Event Hub/Kafka.
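A minimal outbox sketch (names illustrative): the business write and the outbox insert commit in one transaction; a separate dispatcher polls unpublished rows and publishes them to Event Hub/Kafka.
CREATE TABLE lending.outbox (
  event_id     BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  aggregate_id BIGINT NOT NULL,
  event_type   TEXT NOT NULL,
  payload      JSONB NOT NULL,
  created_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
  published_at TIMESTAMPTZ  -- set by the dispatcher after a successful publish
);

BEGIN;
UPDATE lending.loan_application SET status = 'APPROVED' WHERE application_id = 42;
INSERT INTO lending.outbox (aggregate_id, event_type, payload)
VALUES (42, 'LoanApproved', '{"applicationId": 42}');
COMMIT;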
Phase 6 — Validation & testing (do not skip)
Validation checklist (automate where possible)
Row counts
Oracle:
SELECT COUNT(*) FROM LENDING.LOAN_APPLICATION;
Postgres:
SELECT COUNT(*) FROM lending.loan_application;
Chunked checksums (for big tables)
Postgres:
-- ORDER BY inside string_agg guarantees deterministic input order; a subquery ORDER BY does not
SELECT md5(string_agg(id::text || '|' || coalesce(amount::text, ''), '' ORDER BY id)) AS h
FROM lending.loan_application
WHERE id BETWEEN 1 AND 100000;
Do equivalent on Oracle and compare per chunk.
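An Oracle counterpart, hedged: LISTAGG is capped at 4000 bytes (32K with extended VARCHAR2), so keep chunks small or fall back to XMLAGG/DBMS_CRYPTO for wide rows, and normalize numeric formatting on both sides first (TO_CHAR vs ::text can differ, e.g. '.5' vs '0.5').
-- Oracle 12c+; compare LOWER(hex) with the Postgres md5 per chunk
SELECT LOWER(RAWTOHEX(STANDARD_HASH(
         LISTAGG(id || '|' || NVL(TO_CHAR(amount), ''), '') WITHIN GROUP (ORDER BY id),
         'MD5'))) AS h
FROM LENDING.LOAN_APPLICATION
WHERE id BETWEEN 1 AND 100000;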
Business totals
SELECT SUM(amount) FROM lending.loan_application WHERE created_dt >= '2024-01-01';
Compare with Oracle totals.
Referential integrity
Check FK violations (should be none once constraints re-enabled).
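A generic anti-join catches orphans before constraints are re-enabled (parent/child names are illustrative):
-- Child rows whose parent is missing; expect zero
SELECT COUNT(*) AS orphans
FROM lending.loan_application la
LEFT JOIN lending.customer c ON c.customer_id = la.customer_id
WHERE la.customer_id IS NOT NULL
  AND c.customer_id IS NULL;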
Functional parity
Run key business flows: create loan app, eligibility calc, repayment generation.
Performance
Replay top SQLs; use EXPLAIN ANALYZE and tune indexes/partitioning.
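For each replayed query, capture plan and timing, e.g.:
EXPLAIN (ANALYZE, BUFFERS)
SELECT application_id, amount
FROM lending.loan_application
WHERE created_dt >= '2024-01-01'
ORDER BY created_dt DESC
LIMIT 100;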
Security
Validate roles, encryption, auditing (pgAudit).
Phase 7 — Cutover (online CDC, minimal downtime)
Go/No-Go prerequisites
CDC lag near zero, sustained for 30+ minutes
Parity checks passed/within tolerance
App remediation deployed to staging and smoke-tested
Runbook and rollback plan ready, hypercare team on standby
Cutover steps
Enter short freeze window — stop writes or redirect to queue (if feasible).
Confirm final CDC lag = 0.
Pause consumers / drain connection pools on applications.
Switch connection strings (Key Vault) from Oracle → PostgreSQL (update config and restart services or hot-reload).
Smoke tests: critical paths (loan create, payment posting, balance query).
Monitor metrics: DB errors, p95 latency, CPU/IOPS, application logs.
If stable for the agreed window, stop CDC and archive the DMS job. If not, roll back:
Switch connection strings back to Oracle.
Reconcile any writes made to PG (log them during short window) — usually avoided by freezing writes.
Phase 8 — Post-cutover hardening & decommission
Post-cutover tasks
Set sequence/identity values:
SELECT setval(pg_get_serial_sequence('lending.loan_application','application_id'),
              (SELECT COALESCE(MAX(application_id), 0) FROM lending.loan_application) + 1,
              false);
VACUUM ANALYZE key tables.
Tune autovacuum, maintenance_work_mem, shared_buffers as needed.
Setup backups & PITR; test restore.
Create monitoring dashboards (pg_stat_statements, slow queries); a starter query follows this list.
Run parity daily for N days (as agreed).
Decommission Oracle objects / schedule ExaCC decommission when safe.
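For the slow-query dashboard, a starting point (column names per PostgreSQL 13+; older versions use total_time/mean_time):
-- Top statements by mean execution time
SELECT LEFT(query, 80) AS query_head, calls,
       ROUND(mean_exec_time::numeric, 2) AS mean_ms,
       ROUND(total_exec_time::numeric, 2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;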
Rollback plan (short)
If post-cutover issues are blocking and not resolvable in the stabilization window:
Switch service connection strings back to Oracle (Config/Key Vault rollback).
Resume normal operations on Oracle.
Investigate delta writes on PG and reconcile later (warning: costly) — better to freeze writes during cutover to avoid this.
Useful commands & snippets
Enable supplemental logging (Oracle)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE schema.table ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Ora2Pg sample commands
ora2pg -c ora2pg.conf -t ESTIMATE -o estimate.txt
ora2pg -c ora2pg.conf -t TABLE -o schema_tables.sql
ora2pg -c ora2pg.conf -t COPY -o data.sql -j 8
Apply DDL to PG
psql "host=<pg> dbname=lending user=pgadmin" -f schema.sql
KPIs to track (during & after migration)
Migration progress: % objects migrated, % data rows copied.
CDC lag: p50/p95 in seconds.
Data parity: row counts & checksum match rate (%).
Cutover downtime: minutes.
Performance delta: p95 query times vs baseline.
Incidents: migration-related incidents per week post cutover.
Cost delta: monthly infra cost vs Oracle baseline.
Top risks & mitigations (concise)
PL/SQL complexity → Mitigation: identify early (ora2pg/DMS reports), refactor to services.
CDC lag due to archivelog retention → Mitigation: increase retention & monitor.
Performance regression → Mitigation: perf testing & query tuning; implement indexes/partitioning.
Data loss → Mitigation: dual verification (checksums), dual-run tests, extended CDC window.
Rollback complexity → Mitigation: short freeze window, practice rehearsals, keep rollback to a simple connection switch.
Optional: Use CAST & vFunction to accelerate
CAST: run portfolio-level scans to find Oracle-specific hotspots, PL/SQL dependency maps, and quantify conversion effort per schema/table.
vFunction: for monoliths, run static+runtime analysis to propose domain boundaries and extract PL/SQL business logic into microservices before or during migration — reduces DB conversion scope and risk.
Final checklist (ready to sign off)
Network validated (ExpressRoute/VPN)
Oracle ARCHIVELOG & supplemental logging enabled
Migration user created with needed privileges
Azure PG Flexible Server provisioned & private endpoint configured
DMS instance provisioned in same VNet & connectivity tested
Schema converted & applied to dev PG (reviewed)
Full load completed for pilot schema; data validated
CDC enabled; lag acceptable for 24h
Application SQL remediation done & deployed to staging
Cutover rehearsal successful (rollback tested)
Production cutover executed; hypercare in place