Monday, September 12, 2011

11g ASM Features

New Disk Group Compatibility Attributes

Oracle 11g ASM includes two new compatibility attributes that determine the version of the ASM and database software that can use specific disk groups:

  • COMPATIBLE.ASM - The minimum version of the ASM software that can access the disk group. In 11g, the default setting is 10.1. It defines the format of the data on the ASM disks.
  • COMPATIBLE.RDBMS - The minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group. In 11g, the default setting is 10.1. COMPATIBLE.ASM must always be greater than or equal to COMPATIBLE.RDBMS.

The compatibility settings of a disk group can only be increased, never decreased; once set, they cannot be rolled back. If you have increased a version by mistake, you will need to create a new disk group. Each disk group has its own compatibility attributes, so databases of different versions can connect to the same ASM instance.
The disk group compatibility attributes can be set during disk group creation by adding the ATTRIBUTE clause to the CREATE DISKGROUP command.

CREATE DISKGROUP data DISK '/dev/raw/*'
  ATTRIBUTE 'compatible.asm' = '11.1';
 
CREATE DISKGROUP data DISK '/dev/raw/*' 
  ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

The disk group compatibility attributes for existing disk groups can be altered using the SET ATTRIBUTE clause to the ALTER DISKGROUP command.

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';

The new remap command of asmcmd can be used to recover a range of unreadable blocks. ASM reads a bad block from a good mirror copy (in another failure group) and restores it to a new location on the disk.

ASM drops disks if they remain offline for more than 3.6 hours. The disk group's default time limit is altered by changing the DISK_REPAIR_TIME attribute, with a unit of minutes (M or m) or hours (H or h). The new value applies only to disks that are currently online, not to disks that are already offline.

-- Set using the hours unit of time.
ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'disk_repair_time' = '4.5h';
 
-- Set using the minutes unit of time.
ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'disk_repair_time' = '300m';

Oracle introduces the SYSASM privilege and the OSASM operating system group, which provide two mechanisms to enable the separation of storage and database administration duties. Granting a user the SYSASM privilege allows them to connect to the ASM instance and perform administration tasks. First, connect to the ASM instance.

$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm
 
Allocation Unit (AU)
In Oracle 10g each extent mapped directly to one allocation unit (AU), but in Oracle 11g an extent can be made up of one or more allocation units. As files get larger, the extent size can grow to reduce the size of the extent map describing the file, thus saving memory.

The first 20,000 extents match the allocation unit size (1*AU). The next 20,000 extents are made up of 8 allocation units (8*AU). Beyond that point, the extent size becomes 64 allocation units (64*AU).

This improves performance for large files.
Oracle 11g also allows control over the allocation unit size using the ATTRIBUTE clause in the CREATE DISKGROUP statement, with values ranging from 1M to 64M.

CREATE DISKGROUP disk_group_2
  EXTERNAL REDUNDANCY
  DISK '/dev/sde1'
  ATTRIBUTE 'au_size' = '32M';

By default the AU size is 1MB. The size must be a power of 2 (1, 2, 4, 8, 16, 32 or 64 MB).

DISK_REPAIR_TIME - Causes the resync process to keep track of changes to extents belonging to an offline disk, and defines how long ASM waits before dropping that disk (3.6 hours by default). If we change the attribute value, it is applicable only to online disks, not to disks that are already offline.



ASM Fast Disk Resync

It allows ASM to quickly resynchronize disks within a disk group after a transient failure, instead of dropping and re-adding them.
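For example, once a transient failure is repaired, the disk can be brought back online and only the extents changed while it was offline are resynchronized (the disk group and disk names below are illustrative):

```sql
-- Bring a single repaired disk back online; ASM resyncs only the
-- extents modified while it was offline (names are illustrative).
ALTER DISKGROUP data ONLINE DISK 'DATA_0001';

-- Or bring every offline disk in the group back online at once.
ALTER DISKGROUP data ONLINE ALL;
```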
 
ASMCMD Commands

asmcmd> md_backup

It will back up the entire ASM metadata repository. A file will be created in the current working directory; on my Linux system this file was called ambr_backup_intermediate_file.



lsdsk - Lists information about ASM disks from the V$ASM_DISK_STAT and V$ASM_DISK views.



lsdsk -d data -k
Total_MB  Free_MB  OS_MB  Name       Failgroup  Library  Label  UDID  Product  Redund   Path
    8189     6961   8189  DATA_0000  DATA_0000  System                         UNKNOWN  /dev/sdc1
    8189     6961   8189  DATA_0001  DATA_0001  System                         UNKNOWN  /dev/sdd1
    8189     6950   8189  DATA_0002  DATA_0002  System                         UNKNOWN  /dev/sde1

The cp command allows files to be copied between ASM and local or remote destinations. The summary usage is shown below.

cp [-ifr] <[@connect_identifier:]src> <[@connect_identifier:]tgt>
ASMCMD [+] > cp +DATA/db11g/datafile/users.273.661514191 /tmp/users.dbf

The md_backup command makes a copy of the metadata for one or more disk groups. The summary usage is shown below.

md_backup [-b location_of_backup] [-g dgname [-g dgname …]]

An example of the command is shown below. The resulting file contains all the metadata needed to recreate the ASM setup.

ASMCMD [+] > md_backup -b /tmp/backup.txt -g data
Disk group to be backed up: DATA

The md_restore command allows you to restore a disk group from the metadata created by the md_backup command. It also allows a certain amount of manipulation of the final disk groups during the restore. The summary usage is shown below.

md_restore -b <backup_file> [-li]
           [-t (full)|nodg|newdg] [-f <sql_script_file>]
           [-g '<diskgroup_name>,<diskgroup_name>,...']
           [-o '<old_diskgroup_name>:<new_diskgroup_name>,...']

A straight restore of the backup shown previously is shown below.

ASMCMD [+] > md_restore -b /tmp/backup.txt -t full -g data



The remap command repairs a range of physical blocks on disk. The contents of each block are not validated, so only blocks exhibiting read errors are repaired. The summary usage is shown below.

remap <disk_group_name> <disk_name> <block_range>

An example of the command is shown below.

ASMCMD [+] > remap data data_0001 5000-5999



Preferred Read Failover Group
In Oracle 10g, ASM always reads the primary copy of the mirrored extent set. This isn't a problem when both nodes and both failure groups are all located in the same site, but it can be inefficient for extended clusters, causing needless network traffic. Oracle 11g allows each node to define a preferred failure group, allowing nodes in extended clusters to access local failure groups in preference to remote ones.
Once the compatibility options are correct, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set to the preferred failure groups for each node.

ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'data.data_0000', 'data.data_0001', 'data.data_0002';

Restricted Mode Mount
The "ALTER DISKGROUP ... MOUNT" statement allows disk groups to be mounted in restricted mode.

SQL> ALTER DISKGROUP data MOUNT RESTRICTED;
 
Diskgroup altered.

Using RESTRICTED mode improves the performance of rebalance operations in a RAC environment as it eliminates the need for the lock and unlock extent map messaging that occurs between ASM instances. Once the rebalance operation is complete, the disk group should be dismounted then mounted in NORMAL mode (the default).

The CHECK clause of the ALTER DISKGROUP command has been simplified so that only two options, NOREPAIR and REPAIR, are available, with NOREPAIR as the default. Summarized errors are displayed, with full error messages written to the alert log.

ALTER DISKGROUP data CHECK; -- Same as NOREPAIR

The above command logs any errors in the alert log without repairing them.

ALTER DISKGROUP data CHECK NOREPAIR;
ALTER DISKGROUP data CHECK REPAIR;

CHECK REPAIR reports block errors in the alert log and also repairs them.

Disk groups that can't be mounted by ASM can now be dropped using the FORCE keyword of the DROP DISKGROUP command.

DROP DISKGROUP data FORCE;

ASM Rolling Upgrades
Before upgrading ASM, the Oracle Clusterware (CRS) must be upgraded.
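A sketch of the rolling upgrade flow; the target version string below is illustrative:

```sql
-- Put the ASM cluster into rolling migration mode.
ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7.0';

-- Upgrade and restart each ASM instance in turn, then end the migration.
ALTER SYSTEM STOP ROLLING MIGRATION;
```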

AMM – Automatic Memory Management

Memory_max_target

Memory_target

V$Memory_Target_Advice
V$Memory_Dynamic_Components
V$memory_resize_ops
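A minimal sketch of enabling AMM with illustrative sizes; MEMORY_MAX_TARGET requires an instance restart to take effect:

```sql
ALTER SYSTEM SET MEMORY_MAX_TARGET = 1G   SCOPE = SPFILE;
ALTER SYSTEM SET MEMORY_TARGET     = 800M SCOPE = SPFILE;

-- After a restart, the advisory view helps size the target.
SELECT memory_size, estd_db_time_factor
FROM   v$memory_target_advice
ORDER BY memory_size;
```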

SQL PLAN Management

It ensures that performance doesn't degrade due to a change of execution plan. To guarantee this, only accepted execution plans will be used; any plan evolution will be tracked and evaluated at a later point in time, and only accepted as verified if the new plan causes no regression or improves the runtime.

1. Capture

2. Selection

3. Evolution

Plan Capture
Create a SQL plan baseline that represents the accepted execution plans, stored in the SQL Management Base in the SYSAUX tablespace.



Automatic plan capture -
All repeatable SQL statements will be automatically captured if OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to TRUE.

A SQL plan history will be created for the statement, which includes the information used by the optimizer to reproduce the execution plan, such as the SQL text, outline, bind variables and compilation environment.
Only accepted plans are used in future. If a new plan is found for the same SQL text, it is added to the plan history and marked for verification; it will be accepted only if its performance is better than the plan from the current SQL plan baseline.
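Enabling automatic capture and checking what has been captured might look like this:

```sql
ALTER SYSTEM SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE;

-- Captured plans and their status are visible here.
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines;
```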
Bulk Loading
We can capture plans into a SQL Tuning Set and then load them into the SQL Management Base as SQL plan baselines using the PL/SQL function DBMS_SPM.LOAD_PLANS_FROM_SQLSET or through OEM.
To load plans from a SQL Tuning Set, use the LOAD_PLANS_FROM_SQLSET function of the DBMS_SPM package:

DECLARE
my_plans pls_integer;
BEGIN
  my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
    sqlset_name => 'tset1');
END;
/

To load plans from the cursor cache, use the LOAD_PLANS_FROM_CURSOR_CACHE function of the DBMS_SPM package:

DECLARE
my_plans pls_integer;
BEGIN
  my_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
    sql_id => '99twu5t2dn5xd');
END;
/

Selecting SQL Plan Baselines
If you set OPTIMIZER_USE_SQL_PLAN_BASELINES to TRUE, then for each statement that is compiled the optimizer first uses a cost-based search method to build a best-cost plan, then tries to find a matching plan in the SQL plan baseline. It evaluates the cost of each accepted plan in the SQL plan baseline and selects the plan with the lowest cost.
A plan generated during the execution of a query is also stored in the plan history, but it is accepted only if it does not cause any performance degradation.
Fixed SQL plan baselines indicate to the optimizer that they are preferred. If the optimizer is costing SQL plan baselines and one of the plans is fixed, the optimizer will only cost the fixed plan and use it if it is reproducible. If the fixed plan(s) are not reproducible, the optimizer will go back, cost the remaining SQL plan baselines and select the one with the lowest cost. Note that costing a plan is nowhere near as expensive as a hard parse: the optimizer is not looking at all possible access methods, but at one specific access path.
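A plan can be fixed with DBMS_SPM.ALTER_SQL_PLAN_BASELINE; the SQL handle below is a hypothetical value taken from DBA_SQL_PLAN_BASELINES:

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Mark every plan under this handle as fixed (preferred).
  l_plans := DBMS_SPM.ALTER_SQL_PLAN_BASELINE(
    sql_handle      => 'SQL_7b76323ad90440b9',  -- hypothetical handle
    attribute_name  => 'fixed',
    attribute_value => 'YES');
END;
/
```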

Evolving SQL Plan Baselines

When the optimizer finds a new plan for a SQL statement, the plan is added to the plan history as a non-accepted plan that needs to be verified before it can become an accepted plan. It is possible to evolve a SQL statement's execution plan using Oracle Enterprise Manager or by running the command-line function DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE. Using either of these methods you have three choices:

1. Accept the plan only if it performs better than the existing SQL plan baseline.

2. Accept the plan without doing performance verification.

3. Run the performance comparison and generate a report without evolving the new plan.

If you choose option 1, the new plan is evaluated to see whether it performs better than a selected plan baseline. If it does, the new plan is added to the SQL plan baseline as an accepted plan. If not, the new plan remains in the plan history as a non-accepted plan, but its LAST_VERIFIED attribute is updated with the current timestamp. A formatted text report is returned by the function, containing the actions performed as well as a side-by-side display of performance statistics for the new plan and the original plan.

If you choose option 2, the new plan is added to the SQL plan baseline as an accepted plan without verifying its performance. The report is still generated.

If you choose option 3, the new plan is evaluated to see whether it performs better than a selected plan baseline, but it is not accepted automatically even if it does. After the evaluation, only the report is generated.
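A minimal sketch of option 1 using the command-line function; the SQL handle is a hypothetical value taken from DBA_SQL_PLAN_BASELINES:

```sql
DECLARE
  l_report CLOB;
BEGIN
  l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
    sql_handle => 'SQL_7b76323ad90440b9',  -- hypothetical handle
    verify     => 'YES',   -- accept only if it performs better
    commit     => 'YES');  -- actually evolve, not just report
END;
/
```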

11g Database Replay

It allows you to capture a workload on a production system and replay it exactly as it happened on a test system.

This provides an accurate method to test the impact of a variety of system changes.



1.    Steps to Capture
Create a directory at the OS level to hold the capture files, then create a directory object pointing to it:

CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture/';

Restarting the database is optional, but it is a good way to make sure any outstanding processes are complete before starting the capture process:

SHUTDOWN IMMEDIATE
STARTUP

Then start the capture:
BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (name     => 'test_capture_1', 
                                       dir      => 'DB_REPLAY_CAPTURE_DIR',
                                       duration => NULL);
END;
/

Once the workload is complete, finish the capture with the command below.



BEGIN
  DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
 
SELECT id, name FROM dba_workload_captures;
    ID NAME
---------- ------------------------------
        21 test_capture_1
 
We can generate a report of the capture using the procedure below.
DECLARE
  l_report  CLOB;
BEGIN
  l_report := DBMS_WORKLOAD_CAPTURE.report(capture_id => 21,
                                           format     => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
  -- l_report now holds the HTML report; spool or write it out as required.
END;
/
 

The capture ID can be used to export the AWR snapshots associated with the specific capture run.

BEGIN
  DBMS_WORKLOAD_CAPTURE.export_awr (capture_id => 21);
END;
/
 
 
We can transfer all the files from the capture process to the test machine, create a similar directory object there, then process and initialize the replay:
BEGIN
  DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_CAPTURE_DIR');
 
  DBMS_WORKLOAD_REPLAY.initialize_replay (replay_name => 'testreplay',
                                          replay_dir  => 'DB_REPLAY_CAPTURE_DIR');
 
  DBMS_WORKLOAD_REPLAY.prepare_replay (synchronization => TRUE);
END;
/
 

Before we can start the replay, we need to calibrate and start a replay client using the "wrc" utility. The calibration step tells us the number of replay clients and hosts necessary to faithfully replay the workload.

$ wrc mode=calibrate replaydir=/u01/app/oracle/db_replay_capture
 
Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:33:42 2007
 
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
 
 
Report for Workload in: /u01/app/oracle/db_replay_capture
-----------------------
 
Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).
 
Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 3
 
Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100

- synchronization = TRUE

The calibration step suggests a single replay client on a single CPU is sufficient, so we only need to start a single replay client, as shown below.

$ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture
 
Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007
 
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Wait for the replay to start (09:34:14)
 
 

The replay client pauses waiting for replay to start. We initiate replay with the following command.

BEGIN
  DBMS_WORKLOAD_REPLAY.start_replay;
END;
/
Once the replay is complete, we can see that the captured workload's changes have been applied on the test system (here, the DB_REPLAY_TEST_TAB table created and populated in the DB_REPLAY_TEST schema) and confirm the replay:
SELECT id, name FROM dba_workload_replays;
 
        ID NAME
---------- ------------------------------
        11 testreplay
 




Oracle 11g New Features

SQL Performance Analyzer

It is used to compare the performance of the statements in a SQL tuning set before and after a database change.

EXEC DBMS_SQLTUNE.create_sqlset(sqlset_name => 'SQLSET1');

Next, the SELECT_CURSOR_CACHE table function is used to retrieve a cursor containing all SQL statements that were parsed by the SPA_TEST_USER schema and contain the word "my_objects". The resulting cursor is loaded into the tuning set using the LOAD_SQLSET procedure.

DECLARE
  l_cursor  DBMS_SQLTUNE.sqlset_cursor;
BEGIN
  OPEN l_cursor FOR
     SELECT VALUE(a)
     FROM   TABLE(
              DBMS_SQLTUNE.select_cursor_cache(
                basic_filter   => 'sql_text LIKE ''%my_objects%'' and parsing_schema_name = ''SPA_TEST_USER''',
                attribute_list => 'ALL')
            ) a;
                                               
 
  DBMS_SQLTUNE.load_sqlset(sqlset_name     => 'SQLSET1',
                           populate_cursor => l_cursor);
END;
/
 
VARIABLE v_task VARCHAR2(64);
EXEC :v_task := DBMS_SQLPA.create_analysis_task(sqlset_name => 'SQLSET1');
BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name       => :v_task,
    execution_type  => 'test execute',
    execution_name  => 'before_change');
END;
/
 
Now we make certain changes to the my_objects table, e.g. adding an index to it.

BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name       => :v_task,
    execution_type  => 'test execute',
    execution_name  => 'after_change');
END;
/
BEGIN
  DBMS_SQLPA.execute_analysis_task(
    task_name        => :v_task,
    execution_type   => 'compare performance', 
    execution_params => dbms_advisor.arglist(
                          'execution_name1', 
                          'before_change', 
                          'execution_name2', 
                          'after_change'));
END;
/
 
SPOOL /tmp/execute_comparison_report.htm
 
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'HTML', 'ALL')
FROM   dual;
 
SPOOL OFF

Other functions include CANCEL_ANALYSIS_TASK, which cancels the currently running analysis task:

EXEC DBMS_SQLPA.cancel_analysis_task(task_name => :v_task);