Monday, November 16, 2015

Case insensitive search...

Well, I know this is not a new feature (it has existed since 10g), but I just came across it and wanted to record it in my repository.

We know that function-based indexes are available to support searches on UPPER or LOWER case values.

But one can also achieve a case-insensitive search by changing NLS parameters, as demonstrated below.

Please note that this will not work for the LIKE operator.

SQL> show parameter nls_comp

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
nls_comp                             string      BINARY
SQL> show parameter nls_sort

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
nls_sort                             string      BINARY
SQL>

SQL> create table san_ci_test (user_name varchar2(30));

Table created.

SQL>
SQL> insert into san_ci_test values ('santhosh');

1 row created.

SQL> insert into san_ci_test values ('Santhosh');

1 row created.

SQL> insert into san_ci_test values ('SANTHOSH');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL> select * from san_ci_test where user_name = 'santhosh';

USER_NAME
------------------------------
santhosh

As you see there is only one row selected above.

Now change the NLS parameters and check the same query results.

SQL> alter session set nls_comp = ansi;

Session altered.

SQL> alter session set nls_sort = binary_ci; (ci - Case Insensitive)

Session altered.

SQL> select * from san_ci_test where user_name = 'santhosh';

USER_NAME
------------------------------
santhosh
Santhosh
SANTHOSH

SQL>

Yes, these parameters can be set at the database/instance level, but I definitely wouldn't recommend that.
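As a side note, if you do rely on session-level BINARY_CI settings, a linguistic function-based index can keep such comparisons indexed instead of forcing full scans. A minimal sketch (the index name is made up for illustration):

```sql
-- Hypothetical index for the san_ci_test table above.
-- With nls_comp = ansi (or linguistic) and nls_sort = binary_ci in the session,
-- the optimizer can use an index built on the matching NLSSORT expression.
create index san_ci_idx
    on san_ci_test (nlssort(user_name, 'NLS_SORT=BINARY_CI'));
```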

Friday, October 2, 2015

Adding column with default value on a large table...

We all know that we can easily add a column with a DEFAULT value to an existing table. It's not a problem when the table is small, because when adding a column with a default value, all existing rows need to be updated with the value specified in the DEFAULT clause.

The problem comes when the table has millions of records: the update can take anywhere from minutes to hours, depending on the size of the table.

Thanks to 11g and its new feature:

Alter Script:

alter table table_name add column_name varchar2(3) default 'YES' NOT NULL;

The above statement will not issue an update against the existing rows. New rows will get the value YES. So what happens to the existing rows' data for this column? Here comes the 11g new feature: when a user selects the column for an existing row, Oracle fetches the default value from the data dictionary.

Now you can happily add a column with a default value to even a billion-row table in a fraction of a second!!! Sweet...

Here are the test results with and without NOT NULL:

15:59:35 SQL> select count(1) from ln_stage_data;

  COUNT(1)
----------
  12540411

Elapsed: 00:00:00.93
15:59:40 SQL> alter table ln_stage_data add test_flag char(1) default 'Y';

Table altered.

Elapsed: 01:02:03.38
17:01:51 SQL> alter table ln_stage_data add test_flag2 char(1) default 'Y' not null;

Table altered.

Elapsed: 00:00:01.13
17:02:14 SQL>
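As a quick sanity check (a sketch, reusing the table from the test above), the dictionary-stored default is visible in USER_TAB_COLUMNS:

```sql
-- the fast-added column's default lives in the data dictionary, not in the rows
select column_name, data_default
  from user_tab_columns
 where table_name = 'LN_STAGE_DATA'
   and column_name like 'TEST_FLAG%';
```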

Friday, July 17, 2015

Index Usage and Monitoring...

I thought of blogging about this long ago but never took the step; now is the time to put it up here instead of repeatedly digging through my repository for the queries.

Creating an index is the first solution every developer reaches for to overcome an issue :)
But once it's created, it's way too hard to decide whether to keep it or drop it as the days go by...

How do we know whether a particular index is being used or not? Big question...

Well, one can turn on monitoring on an index:

alter index index_name monitoring usage;

Once done, you can check v$object_usage to see whether the index has been used.
Great. But how often? Sorry, the view won't have an answer for that.
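For the plain yes/no answer, the monitoring check could look like this (a sketch; note that V$OBJECT_USAGE only reports indexes owned by the current user):

```sql
-- after 'alter index index_name monitoring usage;' has been on for a while
select index_name, monitoring, used, start_monitoring, end_monitoring
  from v$object_usage
 where index_name = 'INDEX_NAME';

-- turn monitoring back off when done
alter index index_name nomonitoring usage;
```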

The query below helps find how many times an index has been used (and one can dig further to find the actual query as well, using sql_id):

select di.table_name "Table Name", di.index_name "Index Name",
       round(sum((ds.bytes)/1024/1024),2) "Index Size -MB",
       dhsp.options "Index Operation", count(dhsp.options) "No# of Executions"
  from dba_hist_sql_plan dhsp, dba_indexes di, dba_segments ds
 where di.owner = user
   and dhsp.object_name(+) = di.index_name
   and dhsp.object_owner(+) = di.owner
   and di.index_name = ds.segment_name
   and di.owner = ds.owner
   and not exists (select 1 from dba_constraints dc where dc.constraint_name = di.index_name and dc.constraint_type = 'P')
   --and di.table_name in ('TABLE1', 'TABLE2')
 group by di.table_name, di.index_name, dhsp.options
 having count(dhsp.options) = 0
 order by 1,2,3;

As you can see, the above query filters for count = 0, meaning I am only interested in showing indexes that were never used.

Note: recently executed queries won't show up in DBA_HIST_SQL_PLAN yet, so replace it with V$SQL_PLAN to cover those. Also check min(timestamp) from dba_hist_sql_plan to see how far back the history is retained.

Tuesday, March 24, 2015

ORA-02050: transaction 11.9.23629 rolled back, some remote DBs may be in-doubt is local tran 11.9.23629 (hex=0b.09.5c4d)

Full stack of error message:
ORA-02050: transaction 11.9.23629 rolled back, some remote DBs may be in-doubt
  is local tran 11.9.23629 (hex=0b.09.5c4d)
ORA-02051: another session or branch in same transaction failed or finalized
  insert pending collecting tran, scn=7859252260641 (hex=725.dff06f21)

A developer reported this error while trying to perform an update.

At first I thought it might be due to exceeding the connection limit over db links.

Later I found it was due to a materialized view log created on the base table on which the update was being performed. This log was not being refreshed for some reason, and that caused this failure.

Check the alert log for any additional errors accompanying these ORA-02050 messages.
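Since ORA-02050 is a distributed-transaction error, the in-doubt transaction views are also worth a look; a sketch:

```sql
-- any in-doubt distributed transactions show up here
select local_tran_id, state, fail_time, retry_time
  from dba_2pc_pending;
```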

Wednesday, March 18, 2015

Resetting HWM (High Water Mark) on a Table...

The high-water mark is the boundary in a segment above which blocks have never been formatted to receive data.

Prior to 10g, the only options to reset the HWM (High Water Mark) were to truncate and re-insert the data, or to export and import.
With 10g, there is an option to reset the HWM on a table using "shrink space".

eg.,
alter table table_name shrink space; --CASCADE can be used along with it (read the manual)

Here is an example showing how HWM takes place and how this option helps to reset it back:

create table hm_test as select * from all_tables where 1=2;

select blocks, extents
  from dba_segments
 where segment_name='HM_TEST';

--no records, because with deferred segment creation the segment is not allocated until the first row is inserted

insert into hm_test select * from all_tables where rownum < 2;

commit;

select blocks, extents
  from dba_segments
 where segment_name='HM_TEST';

--blocks  extents
--8       1

analyze table hm_test compute statistics;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--Querying dba_tables for 'Blocks' and 'Empty_blocks' should give the high water mark.
--
--Blocks --> number of blocks that have been formatted to receive data
--Empty_blocks --> among the allocated blocks, the blocks that were never used

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--5       3             1

insert into hm_test select * from all_tables;

commit;

select blocks, extents
  from dba_segments
 where segment_name='HM_TEST';

--blocks  extents
--88        11

analyze table hm_test compute statistics;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--80      8             2285

--now delete the records to see if the empty_blocks number changes

delete hm_test;

commit;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--80      8             2285

analyze table hm_test compute statistics;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';
--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--80      8             0

--as you see empty_blocks is still 8

--To determine the exact number of blocks that contain data, i.e. the space used by the table below the high water mark, extract the block number from each rowid and count the distinct blocks.
select count (distinct dbms_rowid.rowid_block_number(rowid)) "used blocks" from hm_test;

--0

--This works fine if only one file is used for the segment.
--If more files, we need to include the file number in some way, for instance:

select count (distinct dbms_rowid.rowid_block_number(rowid)||'-'||dbms_rowid.rowid_relative_fno (rowid)) "used blocks" from hm_test;

--0

--resetting HWM

alter table hm_test shrink space compact cascade;

--ORA-10636: ROW MOVEMENT is not enabled

alter table hm_test enable row movement;

alter table hm_test shrink space compact cascade;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--80      8             0

analyze table hm_test compute statistics;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--80      8             0

--still no change

--now let's do shrink space cascade

alter table hm_test shrink space cascade;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--80      8             0

analyze table hm_test compute statistics;

select blocks,empty_blocks,num_rows from dba_tables where table_name='HM_TEST';

--BLOCKS  EMPTY_BLOCKS  NUM_ROWS
--1       7             0

--the HWM is finally reset: only one formatted block remains