Monday, December 5, 2011

Oracle DUAL equivalent in Netezza

There is a view named _v_dual in Netezza that can be used in place of Oracle's dummy DUAL table.

For example:
In Oracle:
select sysdate from dual;

In Netezza:
select current_date from _v_dual;

(Please note also that there is no sysdate in Netezza.)

Sunday, December 4, 2011

Windows 64-bit RAM Limited to 4GB (Or Even Less)

It's called PAE (Physical Address Extension).
Once enabled, your system can see all the RAM installed on your PC.
In my case, I had 8GB of RAM installed, but my Windows Server 2008 64-bit was showing only 3070MB usable!!! What happened to the remaining almost 5GB of RAM!!!
After doing some research, I found that PAE was disabled (don't ask me how to check whether it is disabled or not; seeing 3070MB in Task Manager was enough for me to say it was) and that's why my system was not using all of my 8GB.

PAE is directly related to DEP (Data Execution Prevention), and since I had disabled DEP (check my previous post) due to issues with a Cygwin install, PAE got disabled as well. So, to get my 8GB back, I had to enable PAE while leaving DEP disabled. Here is what I had to do to enable PAE.

OK. How do you enable that PAE thingy???

Open a command window (Run as Administrator), type the following, and reboot. Voila, you will see all your RAM.
C:\>bcdedit /set pae ForceEnable

I am sorry, I am no expert in Windows internals and all this PAE and DEP stuff. http://www.giyf.com/ to find out more about these terms!!!


Cygwin - not able to install properly with error log "abnormal exit: exit code=-6"

The first install of Cygwin under the Administrator account was successful on my new Windows Server 2008, and everything was working fine.

So, I decided not to use the Administrator account, removed the entire Cygwin installation, and tried to install again under a different OS account, ORACLE (in the local Administrators group), and the damn thing won't install.
The Cygwin install takes about half an hour to finish, and then you find that the cygwin\bin folder is almost empty. When you look into the log file (c:\cygwin\var\log\setup.log), you see the error "abnormal exit: exit code=-6" for almost every single exe that needed to be copied into c:\cygwin!!!

After doing some research, I found that DEP (Data Execution Prevention) was enabled, and that thing was somehow blocking the Cygwin installer from copying files!!!


How to Disable DEP:
Open Command window (Run as Administrator) and enter the following and reboot.

C:\>bcdedit /set nx AlwaysOff

Once rebooted, the Cygwin install went through fine without any errors.

Check the next post, which covers a side effect of disabling DEP.



Monday, November 7, 2011

Windows Server 2003 Install Blue Screen...

wtf...

Why the heck do I get a blue screen when trying to install Windows Server 2003 on my brand new PC?
I can install Vista 64-bit fine without any issues!!!
What's wrong with this 2003 Server Software? Is my installation CD bad?

Nope. The issue is that new hard drives come with all the new technologies, and the BIOS setting for the HDD defaults to AHCI... Take that out, change it to good old-fashioned IDE, and you are good to go.

Thursday, November 3, 2011

Crontab entry to generate an output file with 'date' format...


No time to blog too much on this.

Here is the simple change that I wanted from crontab.

Instead of appending the log to one file every time the crontab entry runs (and you know how big that file is going to be after a few days/weeks/months), I decided to write to a separate file with the date and time attached to the file name.

Original crontab entry with a file being appended with the results:

00 15 * * 1-6 /home/san/test.sh >> /home/san/test.log

So, changed to below to have the date and time part appended to the file name instead:

00 15 * * 1-6 /home/san/test.sh > /home/san/test.crontab.log.`date +%d.%m.%y.%R`

Damn thing won't work... No file is generated...

What the hell... that date portion is correct and it sure works fine in a shell script...

Well, it turns out % is a special character in crontab (an unescaped % is converted to a newline, and everything after the first one is passed to the command as standard input), so you gotta escape that character with a backslash to make it work.

Here is the correct entry:

00 15 * * 1-6 /home/san/test.sh > /home/san/test.crontab.log.`date +\%d.\%m.\%y.\%R`

Now, there is a file that gets created with the date format that I wanted...

Saturday, August 27, 2011

Maximum datafile size limits...

Max. Number of Data Files in 9i and 10g is limited to 65,536.

Max. Number of Blocks in a data file is:
4,194,304 (4 million) in 9i, increased to
4,294,967,296 (4 billion) in 10g when using a BIGFILE tablespace.

How to calculate Max. Datafile Size:

Max.DataFile Size = db_block_size * Max. Number of Blocks

How to calculate Max.Database Size:

Max. Datafile Size * Max. Number of Datafiles.
Here is the chart with SMALLFILE Tablespace (using 4 million blocks):

Block Size   Max. Data File Size   Max. Database Size
32 K         128 GB                8,388,608 GB
16 K         64 GB                 4,194,304 GB
8 K          32 GB                 2,097,152 GB
4 K          16 GB                 1,048,576 GB
2 K          8 GB                  524,288 GB


Here is the chart with BIGFILE tablespace (using 4 billion blocks):

Block Size   Max. Data File Size   Max. Database Size
32 K         131,072 GB (128 TB)   8,589,934,592 GB
16 K         65,536 GB (64 TB)     4,294,967,296 GB
8 K          32,768 GB (32 TB)     2,147,483,648 GB
4 K          16,384 GB (16 TB)     1,073,741,824 GB
2 K          8,192 GB (8 TB)       536,870,912 GB

The maximum amount of data for a 32K block size database is eight exabytes (8,388,608 Terabytes) in Oracle 10g.

Note:
The BIGFILE syntax must be specified during the tablespace creation as such:
CREATE BIGFILE TABLESPACE my_ts;

The BIGFILE tablespace can ONLY have a SINGLE datafile.
By creating a tablespace using this syntax, Oracle increases the maximum number of blocks in a datafile from the 4 Million blocks to a maximum of 4 Billion.
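As a quick sanity check on the charts above, here is the arithmetic sketched in Python (illustrative only; the constants are just the block counts and datafile limit quoted above):

```python
# Max datafile size = db_block_size * max blocks per file;
# max database size = max datafile size * max number of datafiles.
MAX_BLOCKS_SMALLFILE = 4 * 1024**2   # ~4 million blocks (2^22)
MAX_BLOCKS_BIGFILE   = 4 * 1024**3   # ~4 billion blocks (2^32)
MAX_DATAFILES        = 65536

def max_file_gb(block_kb, max_blocks):
    """Max datafile size in GB for a given block size in KB."""
    return block_kb * max_blocks // 1024**2   # KB -> GB

for kb in (32, 16, 8, 4, 2):
    small = max_file_gb(kb, MAX_BLOCKS_SMALLFILE)
    big = max_file_gb(kb, MAX_BLOCKS_BIGFILE)
    print(f"{kb:>2}K block: smallfile {small} GB/file, bigfile {big} GB/file, "
          f"max DB (smallfile) {small * MAX_DATAFILES:,} GB")
```

For a 32K block the sketch reproduces the chart values: 128 GB per smallfile datafile, 131,072 GB per bigfile datafile, and 8,388,608 GB max database size with smallfile tablespaces.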

Wednesday, August 24, 2011

Parallel Execution on the same node where it started in RAC




By default, in an Oracle RAC environment, a SQL statement executed in parallel can run across all of the nodes in the cluster. For this cross-node or inter-node parallel execution to perform well, the interconnect in the Oracle RAC environment must be sized appropriately, because inter-node parallel execution may generate a lot of interconnect traffic. If the interconnect has considerably lower bandwidth than the I/O bandwidth from the server to the storage subsystem, it may be better to restrict parallel execution to a single node or to a limited number of nodes. Inter-node parallel execution does not scale with an undersized interconnect.

10g:
Use the parameters instance_groups and parallel_instance_group to limit parallel execution to a particular node.

ex.,
Instance#1 parameter:
instance_groups='pqgrp1','pqallnodes';
Instance#2 parameter:
instance_groups='pqgrp2','pqallnodes';

Now, while running a batch process or any program that needs to execute on only one instance:
alter session set parallel_instance_group = 'pqgrp1'; -- subsequent parallel executions in this session will use only Instance#1

11g on-wards:
To limit inter-node parallel execution in an Oracle RAC environment, you can use the PARALLEL_FORCE_LOCAL initialization parameter. By setting this parameter to TRUE, the parallel server processes can only execute on the same Oracle RAC node where the SQL statement was started.

Thursday, August 18, 2011

Managing multiple archive destinations with RMAN backup and restore process

The goal is to explain how multiple archive destinations are handled during an RMAN backup and restore process.

Specifying Archive Destinations

You can choose whether to archive redo logs to a single destination or multiplex them. If you want to archive only to a single destination, you specify that destination in the LOG_ARCHIVE_DEST initialization parameter. If you want to multiplex the archived logs, you can choose whether to archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or to archive only to a primary and secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).

Backing Up Archived Redo Log Files with BACKUP ARCHIVELOG

To back up archived redo logs, use the BACKUP ARCHIVELOG command at the RMAN prompt. This example uses a configured disk or sbt channel to back up one copy of each log sequence number for all archived redo logs:

BACKUP ARCHIVELOG ALL;

Even if your redo logs are being archived to multiple destinations and you use RMAN to back up archived redo logs, RMAN selects only one copy of the archived redo log file to include in the backup set. (Since logs with the same log sequence number are identical, there is no need to include more than one copy.)

RMAN Command: DELETE INPUT

DELETE INPUT removes archivelogs from the first destination. It also removes archivelogs from the second destination that were PREVIOUSLY backed up.

For example:

Backup #1

  • Back up archivelogs older than seven days from destination #1, and remove those files
  • Do not back up or remove any archivelogs in destination #2; destination #2 still has all files

Backup #2

  • For destination #1, back up archivelogs older than seven days and remove those files
  • For destination #2, back up the archivelogs that satisfied backup #1's criteria (now more than 14 days old) and remove them

By contrast, DELETE ALL INPUT will back up from one destination and delete both copies of each archivelog.

Backup and Delete from a Specified Archive Destination

If you'd like to only backup and remove from a single archive destination, use the LIKE clause:

eg:

RMAN> backup archivelog until time 'sysdate -7' like '/u04/oracle/admin/TEST/arch/%' delete input;

The above will only back up and delete from the LIKE destination.

If archivelogs are backed up to multiple destinations, and the retention policy needs to differ per destination, the following can be done:

RMAN> run {

backup archivelog all;
delete archivelog until time 'sysdate -1' like '/am3/oradata/arch/%';
delete archivelog until time 'sysdate -5' like '/am3/oradata/arch1/AM3P2/%';
}

The FRA and DELETE INPUT

RMAN will back up and remove the archivelogs in numerical order from log_archive_dest_1 to log_archive_dest_10. However, if one of the archive destinations is the FRA, RMAN will always back up and remove from the FRA first, and then work through the remaining log_archive_dest_n destinations in numerical order.

eg.
If the following parameters were set:

  • log_archive_dest_1='location=/u002/oraarch/ORA1020'
  • log_archive_dest_2='LOCATION=USE_DB_RECOVERY_FILE_DEST'
  • log_archive_dest_3='location=/u002/oraarch3/ORA1020'

Oracle will back up and remove from the FRA, followed by the archivelogs in log_archive_dest_1.
Finally, logs in log_archive_dest_3 will be removed.

Where do the restored archivelogs go?

During the restore process, RMAN checks all archive destinations to make sure the requested archivelog does not already exist. If the archivelog already exists in one of the destinations, RMAN will not restore the file. If the archivelog does not exist in any of the destinations, RMAN restores it to the FRA if one is configured. Otherwise it restores it to the highest archive destination defined.
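That decision order can be sketched as a small hypothetical helper (plain Python for illustration, not any real RMAN API; the parameter names are my own):

```python
# Hypothetical sketch of where a restored archivelog lands, per the rules above.
def restore_destination(already_present, fra, highest_dest):
    """Return None when the log already exists in some destination (RMAN skips
    the restore); otherwise prefer the FRA if configured, else the highest
    archive destination defined."""
    if already_present:
        return None
    return fra if fra is not None else highest_dest

print(restore_destination(False, "+FRA_DG", "/u04/arch"))  # prints +FRA_DG
print(restore_destination(False, None, "/u04/arch"))       # prints /u04/arch
print(restore_destination(True, "+FRA_DG", "/u04/arch"))   # prints None (skipped)
```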


Restoring archivelog to FRA in ASM

When using the FRA with ASM, the archivelog is restored into the directory for the current date, rather than the directory for the date at which it was generated. Even if you use SET ARCHIVELOG DESTINATION, an alias is created pointing into the current date's directory.

For example, on 16 Aug 2010, when restoring the 13 Aug archivelogs:

RMAN> run {
allocate channel c1 type 'sbt_tape';
set archivelog destination to '+SHARED_FRA_DG01/P132/ARCHIVELOG/2010_08_13';
restore archivelog from time "to_date('13/08/2010:00:00:00','dd/mm/yyyy:hh24:mi:ss')"
until time "to_date('14/08/2010:00:00:00','dd/mm/yyyy:hh24:mi:ss')";
}

+SHARED_FRA_DG01/P132/ARCHIVELOG/2010_08_13
ASMCMD> ls -ltr
Type Redund Striped Time Sys Name
N 1_94528_708660567.dbf => +SHARED_FRA_DG01/P132/ARCHIVELOG/2010_08_16/thread_1_seq_94528.5341.727224533
N 1_94529_708660567.dbf => +SHARED_FRA_DG01/P132/ARCHIVELOG/2010_08_16/thread_1_seq_94529.3425.727224661
N 1_94530_708660567.dbf => +SHARED_FRA_DG01/P132/ARCHIVELOG/2010_08_16/thread_1_seq_94530.5644.727224779

PS: This content is an excerpt from Oracle's Metalink Article ID 443814.1, with a few additional notes at the top.

Tuesday, May 31, 2011

Missing Sequence Numbers in a Table...

Title says it all, I guess.
But I still want to blog it out...
It's quite common that at least one column of some table in the database is populated from an Oracle SEQUENCE, with code in place to fetch the NEXTVAL from that sequence.
And I am pretty sure that, for a few reasons (rollbacks, errors, sequence caching), a fetched NEXTVAL is not always committed to the table, and thus there will be missing sequence values in the table.

The following query gets you all the sequence values that were generated but never made it into the table.
----------------------------------------------------------------------------------------------
--with one SQL. The only catch is that max(seq) in the actual table should be
--less than or equal to the number of rows in dba_objects.
select missing_seqs
from (
select rownum missing_seqs from dba_objects
minus
select seq from santest --replace with the actual column and table name here
)
where missing_seqs <= (select max(seq) from santest); --replace with the actual column and table name here
----------------------------------------------------------------------------------------------
--PL/SQL block, with no catches or gotchas...
set serveroutput on size unlimited
declare
  t_start_seq number := 1; --replace with the value the sequence started from in the table
  t_max_seq   number;
  x           char(1);
begin
  select max(seq) into t_max_seq from santest; --replace santest with the actual table and seq with the actual column name
  for i in t_start_seq..t_max_seq loop
    begin
      select 'x'
        into x
        from santest --replace santest with the actual table
       where seq = i; --replace seq with the actual column name
    exception
      when no_data_found then
        dbms_output.put_line('Missing Sequence --> '||i);
    end;
  end loop;
end;
/
----------------------------------------------------------------------------------------------
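For illustration, the same gap-finding logic can be sketched outside the database in a few lines of Python (the `present` list here is just a stand-in for the sequence values actually committed to the table):

```python
# Given the sequence values actually committed to the table, report the
# values that were generated (assuming the sequence started at 1) but
# never made it into the table.
def missing_seqs(present, start=1):
    expected = set(range(start, max(present) + 1))
    return sorted(expected - set(present))

print(missing_seqs([1, 2, 4, 7, 8]))  # → [3, 5, 6]
```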

Wednesday, May 25, 2011

Number of Rows per Partition in a Table

A simple and handy anonymous block to share that reports the total number of rows per partition of a given table:

___________________________________________________________________
set serveroutput on size unlimited
set echo off
set verify off
declare
  cursor c1 is
    select table_name, partition_name
      from all_tab_partitions
     where table_name = upper('&Table_Name');
  t_count number := 0;
begin
  for i in c1 loop
    execute immediate 'select count(1) from '||i.table_name||' partition ('||i.partition_name||')' into t_count;
    dbms_output.put_line('The Partition '||i.partition_name||' of table '||i.table_name||' has '||t_count||' rows');
  end loop;
end;
/
___________________________________________________________________

Friday, March 18, 2011

Unable to drop DBLink - Error: ORA-02082

ORA-02082: a loopback database link must have a connection qualifier
The reason is the database's global_name:
1. select * from global_name;
2. note down the value
3. alter database rename global_name to some_new_name;
4. drop public database link linkname.whatever;
5. change the global_name back: alter database rename global_name to <the original value>;

11g - Missing Tables after successful IMP...

Seriously, if I introduced a new feature, I would not enable it by default!!! I would let the end user decide whether to enable that feature or not.

But what did 11g do? It enables the new feature by default, forcing us to learn about it. Well, you know what, in one way it's good: at least we are forced to learn new features immediately instead of waiting for an organization to start using them :)

Ok, enough blogging...

The issue is simple:
Developers start complaining that a few tables are missing in the database after the IMPort is done. They are correct. Looking into the database, I don't see a few tables that do exist in the source database where the EXPort was done.
Time to look into the IMP log file (another reason to always provide log= at IMP or EXP time). Surprised to see that there is no reference to the missing tables in that import log.

So, scan the EXPort log now. Hmm... even the export doesn't have any reference to those missing tables.

Well, I am pretty sure I exported all tables. So, what happened? Why did EXP not export those few tables???

Looking into the source database where the EXP was done, I found that the missing tables are actually empty, meaning there are no records in those tables. But that doesn't mean the table isn't there; it is there. So why can't EXP export it?

Oh, there you go...

It's because of an 11g new feature called "deferred_segment_creation". This parameter is set to TRUE by default, and what it does is simple: it won't create a segment in the datafile at the time of table creation.

It only creates a segment when the first row goes into the table (time to wipe out the basics learned decades ago and get along with the new basics...).

All right, that's the issue: when EXP runs, it fetches the objects via dba_segments, and an empty table won't have a segment, so it won't be exported. End of story...

Well, how can I handle my export then?
Here you have solutions to choose from:
1. for already created tables:
a. alter table empty_table allocate extent;
2. for all new tables, if you want the extent created immediately, change the parameter "deferred_segment_creation" from TRUE to FALSE

No? You have a bunch of empty tables and have to do the export/import anyway? Then sorry... your only option is to use Data Pump (EXPDP and IMPDP).

ORA-00304: requested INSTANCE_NUMBER is busy

OMG...
It's a brand new database that I want to create, and Oracle yells at me saying the INSTANCE_NUMBER is busy?
Oracle Error Code Says:
ORA-00304:
requested INSTANCE_NUMBER is busy
Cause: An instance tried to start using a value of the initialization parameter INSTANCE_NUMBER that is already in use.
Action: Either a) specify another INSTANCE_NUMBER, b) shut down the running instance with this number, or c) wait for instance recovery to complete on the instance with this number.

But that is not true in this case. This is not a RAC database that would have the instance_number parameter set, nor is the database already running. So, what's going on then...

I found some wrong values for the compatible parameter in the init.ora file!!! And for that I get the INSTANCE_NUMBER busy error? Wow... Anyway, next time this shows up, don't panic; just scan through init.ora to make sure the values are correct for compatible, service_names, db_name, etc.