Tuesday, April 8, 2014

GoldenGate Sequence Sync with "row cache lock"

Had real trouble with this one.
Setting up GoldenGate replication for a 40+ TB database in the middle of a migration, and the GG Replicat appeared to hang (not truly hung, but running so slowly that it took hours to move on to the next transaction).

Checking the background processes, the only thing I noticed was that the GGADMIN user session was racking up an enormous amount of time on the "row cache lock" wait event!!!

I kept refreshing the current SQL being executed, and the only thing it was running was:
select seq.nextval from dual;
Along with:
BEGIN
   ggadmin.replicateSequence (TO_NUMBER (702400001),
                              TO_NUMBER (20),
                              TO_NUMBER (1),
                              'SCHEMA_OWNER',
                              TO_NUMBER (100000),
                              'LOGID',
                              UPPER ('ggadmin'),
                              TO_NUMBER (2),
                              TO_NUMBER (0),
                              '');
END;

The statements above kept repeating, which made me check the sequence cache size to see whether it was large enough, since the target database runs on a 2-node cluster. The cache size is set to 100000, which is quite normal in our environment (the source database runs on a 7-node cluster): these are RAC databases, and the higher the cache, the fewer "row cache locks" you get. So why am I still getting "row cache lock" waits on the target even though the cache size is 100000???
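A quick way to double-check the cache setting on both source and target is to query DBA_SEQUENCES; something like the sketch below (the owner and sequence names here are placeholders, not the actual objects from this migration):

```sql
-- Compare cache settings between source and target (run on each side).
-- SCHEMA_OWNER / SEQ are illustrative names only.
SELECT sequence_owner,
       sequence_name,
       cache_size,      -- expected 100000 in this case
       increment_by,
       last_number      -- high-water mark written to SEQ$ on each cache refill
  FROM dba_sequences
 WHERE sequence_owner = 'SCHEMA_OWNER'
   AND sequence_name  = 'SEQ';
```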

Well, then I came across this explanation:
"When using higher cache values the problem of hard parses gets somewhat mitigated but there is another issue. When replicating such a sequence GoldenGate follows these steps: 
  1. Sets sequence to nocache
  2. Executes sequence.nextval until it reaches the target value
  3. Restores altered sequence properties
So if you have, say, a sequence with cache 1000 then each time a new value gets written into seq$.highwater on the source, GoldenGate is going to execute sequence.nextval one thousand times on the destination! As you can imagine this isn't particularly fast, as fetching every next value results in Oracle updating an actual row in the data dictionary. All of the above means that replicating sequences can sometimes put quite a bit of strain on the system. A much faster (and simpler) approach would be to use a step to reach the target sequence value instead of fetching it one-by-one in nocache mode. Last but not least, you can end up with a nocache sequence if the procedure crashes in the middle."

Well, that is the problem. So, the only solution I can think of at this time is to remove the sequence-sync option for this Replicat group and restart the group. Now there are no more row cache locks, and I will take care of syncing all the sequences in one shot at cut-over time.
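For the one-shot sync at cut-over, one possible sketch (entirely hypothetical names and values; the original increment of 20 is taken from the replicateSequence call above) is a small PL/SQL block per sequence that bumps the target just past the source's current value:

```sql
-- Hypothetical cut-over sketch for a single sequence.
-- v_src would be captured from the source side just before cut-over.
DECLARE
  v_src NUMBER := 702400001;  -- illustrative source value
  v_tgt NUMBER;
  v_gap NUMBER;
BEGIN
  EXECUTE IMMEDIATE 'SELECT schema_owner.seq.NEXTVAL FROM dual' INTO v_tgt;
  v_gap := v_src - v_tgt;
  IF v_gap > 0 THEN
    EXECUTE IMMEDIATE 'ALTER SEQUENCE schema_owner.seq INCREMENT BY ' || v_gap;
    EXECUTE IMMEDIATE 'SELECT schema_owner.seq.NEXTVAL FROM dual' INTO v_tgt;
    EXECUTE IMMEDIATE 'ALTER SEQUENCE schema_owner.seq INCREMENT BY 20';  -- restore original
  END IF;
END;
/
```

Looping this over DBA_SEQUENCES for the replicated schema would cover all sequences in one pass, with no per-value dictionary updates and no "row cache lock" storm.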
