AWS DMS – Oracle to PostgreSQL and Object Case Issues

One of the differences between Oracle and PostgreSQL is how unquoted identifiers are handled: Oracle folds them to uppercase (per the SQL standard) while PostgreSQL folds them to lowercase.

So when you migrate from Oracle to PostgreSQL with AWS DMS, DMS wraps the DDL in quotes, which makes the PostgreSQL objects case sensitive. This becomes an issue because instead of running your queries with plain, unquoted identifiers, you now have to quote every identifier.
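
As an illustration, with a hypothetical EMPLOYEES table created by DMS in uppercase, the unquoted query that worked in Oracle now fails in PostgreSQL:

    -- fails: the unquoted name folds to lowercase "employees", which doesn't exist
    SELECT employee_id, last_name FROM employees;

    -- works, but every identifier has to be quoted
    SELECT "EMPLOYEE_ID", "LAST_NAME" FROM "EMPLOYEES";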

I’m not a fan of case sensitive identifiers, so thankfully DMS has table mappings which can perform transformations on columns, tables and schemas.

In my Oracle to PostgreSQL migration scenario I needed to migrate a schema and all its tables and objects. The steps I used to achieve this are as follows:

  • AWS Schema Conversion Tool to identify compatibility
  • AWS Schema Conversion Tool to generate DDL for indexes and other objects
  • Manipulate this file to suit my requirements
  • Create the JSON file with the appropriate rules for the migration. See code block below
  • Create the DMS task in console and paste in custom table mappings and run
    In Oracle a user and a schema are synonymous. The user gives you login ability and you can also store objects within that “user” namespace. You also have the flexibility to separate this out, with multiple users accessing a schema under different permission models and so forth. In PostgreSQL a user is a role and a schema is a namespace to store objects. So when migrating to PostgreSQL you may need to add an extra layer, as you cannot use the schema as a login role to access the migrated data.

    My solution to this was to migrate the Oracle schema into PostgreSQL under a different name and then create a PostgreSQL role with the same name as the Oracle user, granting it the appropriate permissions on the new schema.

    My lowercase table mapping JSON which migrates an Oracle Schema and converts it all to lowercase and renames the schema looks like:
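
    A sketch of what such a mapping looks like; the source schema HR and the new schema name hr_app stand in for my real names.

        {
          "rules": [
            {
              "rule-type": "selection",
              "rule-id": "1",
              "rule-name": "include-schema",
              "object-locator": { "schema-name": "HR", "table-name": "%" },
              "rule-action": "include"
            },
            {
              "rule-type": "transformation",
              "rule-id": "2",
              "rule-name": "lowercase-tables",
              "rule-target": "table",
              "object-locator": { "schema-name": "HR", "table-name": "%" },
              "rule-action": "convert-lowercase"
            },
            {
              "rule-type": "transformation",
              "rule-id": "3",
              "rule-name": "lowercase-columns",
              "rule-target": "column",
              "object-locator": { "schema-name": "HR", "table-name": "%", "column-name": "%" },
              "rule-action": "convert-lowercase"
            },
            {
              "rule-type": "transformation",
              "rule-id": "4",
              "rule-name": "rename-schema",
              "rule-target": "schema",
              "object-locator": { "schema-name": "HR" },
              "rule-action": "rename",
              "value": "hr_app"
            }
          ]
        }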

    This was a pretty small database and the migration completed within 10 minutes, ready for testing.

AWS S3 Object ACL and 403 Error

    Recently I fell into the trap of S3 object ACLs and the issue of an object having a different owner to the bucket. When I tried to do something simple like the following I got a forbidden error.
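
    It was nothing more exotic than a copy along these lines (the bucket and key names here are made up):

        aws s3 cp s3://my-shared-bucket/myfile.txt .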

    This stumped me for a while as I couldn’t understand why an admin user, connecting to the account where the bucket was created, could not access the file that was in there. The bucket policy allowed read and write access across multiple AWS accounts. The mistake I made was that in one terminal session my CLI profile was accidentally set to another account, so the owner of the object was set to that account when it was uploaded. Basically, account-a uploaded a file into a bucket in account-b.

    Let’s go through an example of how this can happen. First we need a bucket with a policy that allows cross-account read/write access.
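
    Something like the following bucket policy does the job; the account IDs and bucket name are placeholders.

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "CrossAccountReadWrite",
              "Effect": "Allow",
              "Principal": { "AWS": [ "arn:aws:iam::111111111111:root", "arn:aws:iam::222222222222:root" ] },
              "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ],
              "Resource": [ "arn:aws:s3:::my-shared-bucket", "arn:aws:s3:::my-shared-bucket/*" ]
            }
          ]
        }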

    Upload file with Account A.
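
    Assuming a CLI profile named account-a:

        aws s3 cp ./myfile.txt s3://my-shared-bucket/ --profile account-a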

    Set the profile to Account B and try to copy the file.
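
    Again with a placeholder profile name, the copy comes back forbidden:

        aws s3 cp s3://my-shared-bucket/myfile.txt . --profile account-b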

    Let’s take a look at the owner of that file and see why this is happening.
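
    The object ACL can be inspected with get-object-acl; I run it as the object owner since the other account can’t even read the ACL.

        aws s3api get-object-acl --bucket my-shared-bucket --key myfile.txt --profile account-a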

    We can see the DisplayName key as having the value account-a. When uploading an object, S3 creates a default ACL that grants the resource owner full control. In this case account-a had full control over an object which lives in a bucket in account-b, while account-b had no permissions on the object even though it owns the bucket. To solve my issue I could have deleted the object and re-uploaded it correctly, but I decided to modify the object ACL so that I can read it from the accounts I need. This can be done via the put-object-acl API call. Here’s what I did to fix my issue.
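
    One way to do this (with placeholder names) is to apply the bucket-owner-full-control canned ACL as the object owner; explicit --grant-read grants with the accounts’ canonical IDs would work too.

        aws s3api put-object-acl --bucket my-shared-bucket --key myfile.txt \
          --acl bucket-owner-full-control --profile account-a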

    The object now had read permission from account-b and I could successfully read and copy the file. If you need cross-account permissions for a bucket and its objects then it may be advisable to set the object ACL so that the relevant accounts have read access, like:
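
    For example, granting the bucket owner control at upload time avoids the problem in the first place (names again hypothetical):

        aws s3 cp ./myfile.txt s3://my-shared-bucket/ --acl bucket-owner-full-control --profile account-a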

    You can read more about S3 object ACLs here.

AWS Database Migration Service

    Amongst the many services being offered by AWS, one of the most anticipated from a database perspective is the Database Migration Service – DMS. It is now readily available across all regions after being in preview for a while.

    What better reason to give it a trial run. The AWS Blog has a nice article on how to get it going in a few clicks, so instead of repeating the same steps I’ll use the CLI to set up a replication.

    The DMS is a heterogeneous replication tool allowing you to migrate between different database systems. In my test I’ll keep it simple and migrate a few tables from an on premise Oracle 11.2.0.3 database to an AWS Oracle RDS Instance.

    The service also supports change data capture (CDC) allowing you to keep the tables in sync. I’ll configure my replication to use CDC.

    The basic setup of the service is as follows:

  • Create a Replication Subnet Group
  • Create a Replication Instance
  • Create your Replication Endpoints (source and target)
  • Create your Replication Task
    Before we start we need to prep our databases to make sure they meet the requirements of the service and, in my case, CDC. Archivelog mode needs to be enabled and we need to set up supplemental logging on the source. You also need to add supplemental logging on each table’s primary key columns. Oracle source requirements are detailed here.
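
    As a sketch (the table name is hypothetical), the source-side prep looks something like this:

        -- enable minimal supplemental logging at the database level
        ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

        -- add primary key supplemental logging for each table being migrated
        ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;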

    Another prep item is to configure the user (I’m using awsdms) that will be used for the connection endpoints. This user must be granted certain privileges. The source user ddl can be found here while the target ddl can be found here.

    Once that is all done we can move onto the DMS service. First let’s create the replication subnet group using two private subnets in the VPC.
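
    Something along these lines, with placeholder identifiers and subnet IDs:

        aws dms create-replication-subnet-group \
          --replication-subnet-group-identifier dms-subnet-group \
          --replication-subnet-group-description "DMS private subnets" \
          --subnet-ids subnet-aaaa1111 subnet-bbbb2222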

    Now we can create the replication instance.
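
    A minimal sketch; the identifier, instance class and storage size are just examples.

        aws dms create-replication-instance \
          --replication-instance-identifier dms-repl-01 \
          --replication-instance-class dms.t2.medium \
          --allocated-storage 50 \
          --replication-subnet-group-identifier dms-subnet-group \
          --no-publicly-accessible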

    Once we have our replication instance up and running we can create our endpoints, starting with the source. For the server name we need to specify an IP address, as the DMS service cannot resolve on premise hostnames. A workaround is to create an AWS Route 53 record that points back to our on premise host’s IP, but we’ll leave it as is for this test.
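
    The source endpoint looks something like this (IP, credentials and database name are placeholders):

        aws dms create-endpoint \
          --endpoint-identifier oracle-source \
          --endpoint-type source \
          --engine-name oracle \
          --server-name 10.1.2.3 \
          --port 1521 \
          --database-name ORCL \
          --username awsdms \
          --password '********'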

    And now let’s test connectivity and make sure our replication instance can talk to our source database.
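
    The test takes the ARNs of the replication instance and the endpoint, and describe-connections shows the result.

        aws dms test-connection \
          --replication-instance-arn <replication-instance-arn> \
          --endpoint-arn <source-endpoint-arn>

        aws dms describe-connections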

    Now let’s create the target endpoint. You can run the same connectivity tests as above but I’ll omit them for this exercise.
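
    Much the same as the source, pointed at the RDS endpoint (placeholders again):

        aws dms create-endpoint \
          --endpoint-identifier oracle-target \
          --endpoint-type target \
          --engine-name oracle \
          --server-name <rds-endpoint> \
          --port 1521 \
          --database-name ORCL \
          --username awsdms \
          --password '********'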

    We are just about ready to create our replication task. The replication task requires two JSON files: one for our source table mappings and one for our task settings. Here is my source table mapping:
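
    A sketch with stand-in schema and table names for the two tables I migrated:

        {
          "rules": [
            {
              "rule-type": "selection",
              "rule-id": "1",
              "rule-name": "include-table-1",
              "object-locator": { "schema-name": "HR", "table-name": "EMPLOYEES" },
              "rule-action": "include"
            },
            {
              "rule-type": "selection",
              "rule-id": "2",
              "rule-name": "include-table-2",
              "object-locator": { "schema-name": "HR", "table-name": "DEPARTMENTS" },
              "rule-action": "include"
            }
          ]
        }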

    And now the task settings. There are a number of configurable options here but I settled for the following:
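
    Something along these lines; the values here are illustrative rather than a copy of my actual settings file.

        {
          "TargetMetadata": {
            "TargetSchema": "",
            "SupportLobs": true,
            "FullLobMode": false,
            "LobChunkSize": 64,
            "LimitedSizeLobMode": true,
            "LobMaxSize": 32
          },
          "FullLoadSettings": {
            "TargetTablePrepMode": "DROP_AND_CREATE",
            "CreatePkAfterFullLoad": false,
            "MaxFullLoadSubTasks": 8,
            "TransactionConsistencyTimeout": 600,
            "CommitRate": 10000
          },
          "Logging": {
            "EnableLogging": true
          }
        }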

    Putting this all together we can create the replication task with a migration type of full-load-and-cdc.
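
    Pulling the ARNs and the two JSON files together (the task identifier and file names are mine):

        aws dms create-replication-task \
          --replication-task-identifier oracle-to-rds \
          --source-endpoint-arn <source-endpoint-arn> \
          --target-endpoint-arn <target-endpoint-arn> \
          --replication-instance-arn <replication-instance-arn> \
          --migration-type full-load-and-cdc \
          --table-mappings file://table-mappings.json \
          --replication-task-settings file://task-settings.json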

    And finally let’s start it.
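
    With the ARN of the task we just created:

        aws dms start-replication-task \
          --replication-task-arn <replication-task-arn> \
          --start-replication-task-type start-replication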

    After a few minutes you should see the tables created on the target database. I can update, insert, delete data on these two tables and they’ll be kept in sync.

    Overall a pretty straightforward and simple tool to use to migrate and keep data in sync. I came across a few issues while testing this and currently have a support case open. Issues include:

  • The target user permissions in the DMS online documentation are missing “delete any table” – bug raised
  • When a delete fails due to a permission problem, the exception does not get logged properly
  • The DMS LogMiner reader doesn’t like standby databases configured on the source database and tries to read non-existent logs – working with AWS support on this
  • DMS cannot resolve our on premise hostnames and we can’t influence which DNS server it uses – Route 53 workaround
  • Diagnosing replication issues with an RDS target can be tricky as it’s a managed service. I had to replicate to an EC2 instance instead.

Oracle RDS with SSL

    Amazon just recently announced support for SSL connections with Oracle RDS. What better reason to try and test it out.

    To use SSL with Oracle you’ll need:

  • Option Group with SSL enabled
  • VPC Security Group allowing port 2484
    With that in mind I’ve created a cloudformation template to automate the build.

    You can run the template via the CLI, assuming you have your environment set up.
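
    Assuming the template is saved as oracle-rds-ssl.json (the stack and file names are mine, and any parameters the template declares would be passed as well):

        aws cloudformation create-stack \
          --stack-name oracle-rds-ssl \
          --template-body file://oracle-rds-ssl.json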

    Once we have all the config complete and the stack is created, we need to set up the Oracle client and wallet. First we create a directory to store everything and download the RDS CA certificate file from https://s3.amazonaws.com/rds-downloads/rds-ca-2015-root.pem.
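
    For example (the directory is an arbitrary choice):

        mkdir -p ~/rds_wallet && cd ~/rds_wallet
        curl -O https://s3.amazonaws.com/rds-downloads/rds-ca-2015-root.pem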

    Create our wallet and add the cert with the following commands.
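
    A sketch using orapki, run from the wallet directory created above:

        orapki wallet create -wallet . -auto_login_only
        orapki wallet add -wallet . -trusted_cert -cert rds-ca-2015-root.pem -auto_login_only
        orapki wallet display -wallet .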

    Finally, add the following parameters to $ORACLE_HOME/network/admin/sqlnet.ora
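
    Along these lines, with the wallet directory adjusted to wherever you created it:

        WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /home/oracle/rds_wallet)))
        SSL_CLIENT_AUTHENTICATION = FALSE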

    We are now ready to connect with SSL on port 2484.
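
    Using a TCPS connect descriptor, with the endpoint, SID and user as placeholders:

        sqlplus 'myuser@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=<rds-endpoint>)(PORT=2484))(CONNECT_DATA=(SID=ORCL)))'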

Restore & Clone PostgreSQL from Snapshot Backup

    If you are using snapshots as your backup strategy then there may be times where you want to clone the snapshot and restore it on another server. I’ll go through that scenario. My environment is Redhat 6.1, PostgreSQL 9.2 and Netapp Snapshots.

    It’ll be a similar process with other snapshot technologies but the cloning and mounting of disks may differ. Firstly, we need to get the mounts cloned and presented to the appropriate server which can be done by a trusty storage admin.

    We add the details of the cloned volumes to /etc/fstab and mount them. We need to create the directories first, mount, and change permissions. The server that I’m working on already has a database running on it under /pg01/data, so I’ve appended “_r” to my mounts.
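
    As a sketch, with a mount point based on the “_r” naming above and the fstab entries already in place:

        mkdir -p /pg01_r
        mount /pg01_r
        chown -R postgres:postgres /pg01_r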

    Now that we have our cloned filesystems ready to go, we can modify the appropriate values in /pg01_r/data/postgresql.conf. The following were relevant in my environment:
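
    Roughly the following, with placeholder values:

        listen_addresses = '*'                            # or the new host's address
        port = 5433                                       # default port is taken by the existing cluster
        archive_command = 'cp %p /pg01_r/archive/%f'      # path adjusted for the cloned filesystem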

    I needed to change listen_addresses as I was on a new host, I already had a PostgreSQL cluster running on the default port so that needed to change, and my filesystem layout was slightly different from the original server so my archive command needed a tweak.

    I also had to change my tablespace and xlog symlinks as they were still pointing to the “/pg01” directory and not the new cloned mounts.
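
    Something along these lines; the tablespace OID and paths are placeholders for my real ones.

        # repoint each tablespace symlink under $PGDATA/pg_tblspc at the cloned mount
        ln -sfn /pg01_r/tblspc/app_ts /pg01_r/data/pg_tblspc/16385

        # repoint the pg_xlog symlink at the cloned xlog location
        ln -sfn /pg01_r/pg_xlog /pg01_r/data/pg_xlog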

    And finally create the recovery.conf in the $PGDATA directory.
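
    A minimal recovery.conf for this scenario might look like this (the archive path is a placeholder):

        restore_command = 'cp /pg01_r/archive/%f %p'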

    I wanted to recover to our last transaction so I didn’t specify any point-in-time values, but that could easily have been done with recovery_target_time. Now we can start our cluster.
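
    As the postgres user:

        pg_ctl -D /pg01_r/data start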

    After a few minutes our cluster should be recovered, cloned and ready to go. We can check the postgresql.log file to confirm the recovery completed.

CentOS 7 Teamed Interface on Virtual Box

    I wanted to have a look at the NetworkManager utilities in CentOS 7 so I thought I’d test out a teamed (aka bonded) interface. So here’s my setup.

  • Virtual Box Version 5.0.14 r105127
  • CentOS Linux release 7.2.1511 (Core)
  • Network 1 – NAT
  • Network 2 & 3 – Host Only


    The setup looks like the following.

    [Diagram: CentOS 7 network setup]

    So now on to the config. I’ll be using the nmcli command line utility to configure it in an active-backup setup. First create the teamed interface.
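
    Since the fail_over_mac option mentioned next belongs to the bonding driver, here is a sketch using a bond connection via nmcli; the connection and interface names are my own.

        nmcli con add type bond con-name bond0 ifname bond0 mode active-backup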

    One thing I found out early was that I needed to set the fail_over_mac parameter to 1, as Virtual Box does not like all the NICs in the bond sharing the same MAC address. Setting this parameter means the bond takes the MAC of the active NIC.
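
    Set via the bond options on the connection, for example:

        nmcli con mod bond0 bond.options "mode=active-backup,fail_over_mac=1"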

    Now I can add the slaves.
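
    Assuming the two host-only NICs came up as enp0s8 and enp0s9:

        nmcli con add type bond-slave con-name bond0-slave1 ifname enp0s8 master bond0
        nmcli con add type bond-slave con-name bond0-slave2 ifname enp0s9 master bond0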

    Start up all the components.
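
    Bringing up the slaves and then the bond itself:

        nmcli con up bond0-slave1
        nmcli con up bond0-slave2
        nmcli con up bond0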

    We can check our setup and it looks good for now.
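
    A few ways to inspect it:

        nmcli con show
        cat /proc/net/bonding/bond0
        ip addr show bond0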

    And now we can down some interfaces and see the bonding in action.
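
    For example, disconnecting the currently active NIC and re-checking the bond:

        nmcli dev disconnect enp0s8
        grep -E "Currently Active Slave|MII Status" /proc/net/bonding/bond0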

    Our network connectivity survived, and we can see from the above that the MAC address of the bond changed to that of the active NIC.

Extracting IPs From Oracle Listener Log

    There may be times when you want to extract all the client and host IPs from the listener log to see who’s connecting to a particular database or service.

    I was interested in a particular service, so the following will extract all IPs connecting to that service. Please note that our listener logs are rotated and zipped.
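
    A sketch of the kind of pipeline I mean; the service name and log file glob are placeholders, and zgrep copes with both the plain and gzipped logs.

        zgrep -h "SERVICE_NAME=myservice" listener.log* \
          | grep -oE "HOST=([0-9]{1,3}\.){3}[0-9]{1,3}" \
          | cut -d= -f2 | sort | uniq -c | sort -rn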

    You could tweak it and pipe it to nslookup to get the actual hostnames, but for my requirements the above was enough.

Authorizing AWS Inbound Traffic For Your IP Using CLI

    One of the security measures you may want to implement in your AWS setup is to have a bastion host, or to limit ssh access from the whole world and only allow it from your private network.

    If you have your own account and your instances are exposed to the public you should at least limit ssh access by IP. Here’s how to do it.

    First, get your current IP.
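
    One quick way:

        curl -s https://checkip.amazonaws.com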

    Find out the security group to which your instance is assigned.
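
    With a placeholder instance ID:

        aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
          --query 'Reservations[].Instances[].SecurityGroups[]' --output table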

    Once we have our security group we can see what inbound rules we have.
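
    Again with a placeholder group ID:

        aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
          --query 'SecurityGroups[].IpPermissions[]'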

    Now we need to revoke our old IP and authorize our new IP.
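
    Something like the following, using documentation IPs in place of the real old and new addresses:

        aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
          --protocol tcp --port 22 --cidr 203.0.113.10/32

        aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
          --protocol tcp --port 22 --cidr 198.51.100.20/32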

    You should now be able to ssh into your EC2 instance and only your IP will be allowed through. You can read more about securing your EC2 instance at Tips for Securing Your EC2 Instance.

Change of Direction

    I created this blog a few years ago to get a head start on Oracle 12c as I didn’t think the uptake in business would match my curiosity.

    Technology changes very quickly and there have been lots of other things to keep me busy and interested since that time. I’ve found myself slowly drifting away and using various other tools. I’ve also been frustrated with Oracle’s licensing model and costs, and the battle that you have to fight all too regularly.

    With that in mind the blog will take a change in direction and I’ll start writing about anything tech that piques my interest or anything else I’m currently working on. So you’ll start seeing things like AWS, PostgreSQL, Python, Flask, maybe MySQL, Linux and whatever else.

    Hopefully I can add something of value and learn along the way.

Simple PostgreSQL RDS CloudFormation Template

    I’ve been working with the Amazon RDS offerings lately and used a few of the services and tools. Here’s a simple cloudformation sample to create a PostgreSQL RDS.

    The template creates a security group allowing access on port 5432 from the 10.0.0.0/16 subnet. It also creates a database parameter group with the pg_stat_statements module, as coming from an Oracle background I like to have execution stats of my SQL.
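
    An excerpt of the kind of resources involved; the VpcId parameter, resource names and parameter group family are assumptions, and the full template also includes the AWS::RDS::DBInstance itself.

        {
          "Resources": {
            "DBSecurityGroup": {
              "Type": "AWS::EC2::SecurityGroup",
              "Properties": {
                "GroupDescription": "Allow PostgreSQL from the VPC",
                "VpcId": { "Ref": "VpcId" },
                "SecurityGroupIngress": [
                  { "IpProtocol": "tcp", "FromPort": "5432", "ToPort": "5432", "CidrIp": "10.0.0.0/16" }
                ]
              }
            },
            "DBParameterGroup": {
              "Type": "AWS::RDS::DBParameterGroup",
              "Properties": {
                "Family": "postgres9.4",
                "Description": "Enable pg_stat_statements",
                "Parameters": { "shared_preload_libraries": "pg_stat_statements" }
              }
            }
          }
        }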

    Let’s validate the template to make sure our JSON is well formed.
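
    Assuming the template is saved as postgres-rds.json:

        aws cloudformation validate-template --template-body file://postgres-rds.json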

    In this test I’ll use the aws cli to run the cloudformation template but the CloudFormation Management console is another option.
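
    The stack name is arbitrary, and any parameters the template declares would be passed here too.

        aws cloudformation create-stack --stack-name postgres-rds-test \
          --template-body file://postgres-rds.json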

    We can see how it’s going by describing the stack events.
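
    Using the same stack name as above:

        aws cloudformation describe-stack-events --stack-name postgres-rds-test \
          --query 'StackEvents[].[Timestamp,ResourceStatus,LogicalResourceId]' --output table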

    After about 10 minutes we should have our database created. As this database lives in our private subnet and is not open to the world we need to test connectivity by going to a server in our public subnet and running our psql command from there.
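
    With placeholder endpoint and credentials:

        psql -h <rds-endpoint> -p 5432 -U dbadmin -d postgres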

    So that’s pretty much it for our simple cloudformation RDS test. There are a lot more options and parameters out there, and this can all be done on the AWS free tier.