Solaris: PRVG-1509 when active/passive IPMP is being used

While installing 12c Grid Infrastructure on Solaris 5.10, the following problem was encountered during the pre-installation checks.
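
The check comes from the Clusterware verification utility. A sketch of the sort of pre-check invocation that surfaces it (the staging path and node names are illustrative, not from my environment):

```sh
# Illustrative only: run the Clusterware pre-install verification
# from the unzipped Grid Infrastructure media.
cd /u01/stage/grid
./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose

# The failing check reports along the lines of:
#   PRVG-1509 : IPMP fail-over group "ipmpub" with interface list
#   "nxge1,nxge2" on node "racnode1" ...
```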

Looking at our interface setup, all seems OK. We have an active/passive IPMP group set up for our public interfaces to cater for failover, and I confirmed with my trusty Unix admin that the setup was all good.
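
For reference, a minimal sketch of what a Solaris 10 active/passive IPMP group like ours looks like (addresses are illustrative, and in practice this lives in the /etc/hostname.* files rather than being run by hand):

```sh
# Active interface: carries the data address, member of group ipmpub
ifconfig nxge1 plumb 10.1.1.10 netmask 255.255.255.0 broadcast + group ipmpub up

# Passive interface: no data address, marked standby in the same group
ifconfig nxge2 plumb group ipmpub standby up

# Verify both interfaces show the group
ifconfig -a | grep ipmpub
```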

Off to Metalink we go, and we find the following match – Solaris: PRVG-1509 : IPMP fail-over group “ipmpub” with interface list “nxge1,nxge2” on node “racnode1” (Doc ID 1401658.1).

Unfortunately the note claims this problem is fixed in 11.2.0.4 and above, and we are installing 12.1. The note also mentions that you can add the additional network interface after the GI is configured, using the following 11.2 syntax (there’s a bit of a doco bug in the note around nodeapps/network, but I’ll assume they meant network).
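
A sketch of the 11.2-style command, assuming the note did mean `srvctl modify network` (the subnet and netmask values here are examples, not our actual addressing):

```sh
# 11.2 syntax: add the passive interface to the existing public
# network (network number 1). Values are illustrative.
srvctl modify network -k 1 -S "10.1.1.0/255.255.255.0/nxge1|nxge2"
```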

So if we follow those steps using the 12c syntax, I come to another problem.
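
The equivalent attempt in 12c syntax, which is where things fall over (again, subnet values are illustrative):

```sh
# 12c syntax for the same modification against netnum 1 -
# this is the command that hit the problem for me.
srvctl modify network -netnum 1 -subnet "10.1.1.0/255.255.255.0/nxge1|nxge2"
```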

GI allows you to add a second network, so interestingly enough the following works.
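
Something along these lines succeeds, creating a brand new network on the passive interface rather than modifying network 1 (values illustrative):

```sh
# Adding a separate network (netnum 2) on the passive interface works.
srvctl add network -netnum 2 -subnet "10.1.1.0/255.255.255.0/nxge2"

# Confirm both networks are now registered
srvctl config network
```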

This would mean that I’d need to reconfigure all the VIPs, nodeapps and anything else using netnum 1 to point to netnum 2. I don’t particularly want to go through all that config, especially if I have to replicate it against 4 other clusters. So now I have two options: raise an SR with Oracle and wait for a response, killing project timelines, or try another method to get this passive interface into the GI stack. I’ll go with option 2, but I’ll still raise the SR so hopefully we can work both in parallel.

The initial setup was done via the OUI, and the plan was to do every subsequent cluster with a silent install and a response file (generated from the first install), automating it as much as possible. Looking into the response file, we can see the network interface config below.
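
The relevant response file parameter looks something like this. The interface names and subnets here are illustrative; the format is interface:subnet:type, where type 1 is the public network and type 2 the private interconnect:

```properties
# Network interface assignments generated by the first OUI install
# (values illustrative): interface:subnet:type
oracle.install.crs.config.networkInterfaceList=vnet0:10.1.1.0:1,vnet2:192.168.1.0:2
```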

How about we configure our passive interface (vnet1) here and see if the GI install picks it up successfully?
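
The idea: list vnet1 as a second public interface in the same parameter, on the same subnet as the active one (interface names other than vnet1, and all subnets, are illustrative):

```properties
# Edited response file entry: vnet1 added as a second public (type 1)
# interface on the same subnet (values illustrative)
oracle.install.crs.config.networkInterfaceList=vnet0:10.1.1.0:1,vnet1:10.1.1.0:1,vnet2:192.168.1.0:2
```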

Now we run a silent install with:
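
A sketch of the silent install invocation (the staging and response file paths are examples from my environment, not fixed Oracle locations):

```sh
# Run the 12.1 grid installer silently against the edited response file.
cd /u01/stage/grid
./runInstaller -silent -responseFile /u01/stage/grid/grid_install.rsp
```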

But this bombs out early in the installation with “cannot have multiple interfaces on the same subnet”. I removed vnet1 from the response file and successfully installed the cluster, but that leaves us back where we started, without proper failover on the public network. SR raised, and now a waiting game. I’ll try the second-network option, but I’ll leave that for another post.

Update

Oracle Support came back quickly and confirmed it’s a bug.
