
Automated AWR reports in Oracle 10g/11g

Many Oracle DBAs are aware of the power of Oracle's AWR (Automatic Workload Repository) feature. If you are licensed for it, its statistics reports can be very useful for finding current hot spots, as well as historical ones, across the whole database. Snapshot generation is controlled by two settings: the retention period and the interval between two snapshots.

Normally the DBA controls these settings through the predefined Oracle package dbms_workload_repository, by calling its procedure dbms_workload_repository.modify_snapshot_settings.

In the next example, the retention period is set to 30 days (43200 minutes) and the interval between snapshots to 15 minutes (one snapshot is taken every 15 minutes). These settings are satisfactory for most of today's database configurations:
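The call itself is not reproduced in the source, so here is a minimal sketch of it, using the documented retention and interval parameters of dbms_workload_repository.modify_snapshot_settings (both expressed in minutes):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,   -- 30 days, expressed in minutes
    interval  => 15);     -- one snapshot every 15 minutes
END;
/
```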

However long the retention policy is, most DBAs still run AWR reports manually from time to time. This is true in two cases:

  1. When something "strange" has happened in their database (the fire-fighter DBA).
  2. Some of us run them from time to time to see whether anything "is strange", whatever that means (the proactive DBA).
On the other hand, when nothing "strange" has happened for a longer time, much of the data (reports) is lost, because new snapshots push the old ones out of the repository, losing a lot of important information about how a "healthy" database looks and behaves. That is sometimes a very interesting part indeed!

To overcome all of the above, the following solution gives you the chance to automate collecting the statistics and to save them in plain HTML files, which can be stored and analyzed later, with no risk of losing any moment of your database's life.

To run this script, two minor requirements/constraints must be met first:

  1. From my point of view, the most important time to monitor the database is 07:00-18:00, but you may change it as you wish.
  2. The directory "xx_some_temp_dir" is created dynamically from the v_dir value, so the create directory privilege must be granted to the user who runs this script. Keep in mind that Windows uses a different path notation than Linux (i.e. c_dir CONSTANT VARCHAR2(256) := 'c:\'; ). Change any of these values to match your situation and configuration.
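The original script is not preserved here, so the following is only a hedged sketch of how such a script might look, assuming the common approach of spooling the output of the pipelined function dbms_workload_repository.awr_report_html to a file via utl_file. The path in c_dir and the file-name pattern are assumptions; only the directory name xx_some_temp_dir comes from the text above:

```sql
DECLARE
  -- assumption: adapt the path for your platform, e.g. 'c:\awr' on Windows
  c_dir    CONSTANT VARCHAR2(256) := '/oracle/awr_reports';
  v_dbid   NUMBER;
  v_dbname VARCHAR2(9);
  v_bid    NUMBER;   -- begin snapshot id
  v_eid    NUMBER;   -- end snapshot id
  v_file   UTL_FILE.file_type;
BEGIN
  -- create the directory object dynamically (requires the CREATE DIRECTORY privilege)
  EXECUTE IMMEDIATE 'CREATE OR REPLACE DIRECTORY xx_some_temp_dir AS ''' || c_dir || '''';

  SELECT dbid, name INTO v_dbid, v_dbname FROM v$database;

  -- one report per instance (covers single instance as well as RAC)
  FOR inst IN (SELECT inst_id FROM gv$instance ORDER BY inst_id) LOOP
    -- first and last snapshot taken today between 07:00 and 18:00
    SELECT MIN(snap_id), MAX(snap_id)
      INTO v_bid, v_eid
      FROM dba_hist_snapshot
     WHERE dbid = v_dbid
       AND instance_number = inst.inst_id
       AND end_interval_time BETWEEN TRUNC(SYSDATE) + 7/24
                                 AND TRUNC(SYSDATE) + 18/24;

    v_file := UTL_FILE.fopen('XX_SOME_TEMP_DIR',
                             'awr_' || v_dbname || '_' || inst.inst_id || '_' ||
                             TO_CHAR(SYSDATE, 'YYYYMMDD') || '.html',
                             'w', 32767);
    -- awr_report_html is pipelined; each row holds one line of the HTML report
    FOR rec IN (SELECT output
                  FROM TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_html(
                               v_dbid, inst.inst_id, v_bid, v_eid))) LOOP
      UTL_FILE.put_line(v_file, rec.output);
    END LOOP;
    UTL_FILE.fclose(v_file);
  END LOOP;
END;
/
```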

The last step of the automation is to place this script in crontab (or the Windows Task Scheduler) and run it daily at 18:16 (or later).
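Assuming the PL/SQL block is saved in a script file (the paths and the file name awr_daily.sql below are hypothetical), the crontab entry could look like:

```shell
# run daily at 18:16; source the Oracle environment first, since cron starts with a bare environment
16 18 * * * . /home/oracle/.profile && sqlplus -s / as sysdba @/home/oracle/scripts/awr_daily.sql > /dev/null 2>&1
```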

The result will be placed in the v_dir directory, one file per day and per instance, giving you the opportunity to analyze them whenever you like and need. Here is an example for a RAC database MY_DB with 4 instances:
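The original listing was not preserved; as a purely hypothetical illustration, assuming a file-name pattern of awr_<db>_<instance>_<date>.html, one day's output for the four instances might look like:

```
awr_MY_DB_1_20120115.html
awr_MY_DB_2_20120115.html
awr_MY_DB_3_20120115.html
awr_MY_DB_4_20120115.html
```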

The last (but not least) benefit of this approach is that your retention period may be shorter: 7 days would be perfectly fine in most cases, because the statistics have already been recorded and saved for the whole past period. As previously said, you define it like this:
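Again, the call is not shown in the source; a 7-day retention translates to 10080 minutes, so the sketch becomes:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 10080,   -- 7 days, expressed in minutes
    interval  => 15);
END;
/
```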

If you need whole-day monitoring (let us say with the night shift as well), my suggestion is to modify the script to run against a different period, say 18:00-07:00. As you can see, the result will automatically be saved under a different file name in the same directory. Splitting the monitoring into two parts is, from my point of view, really necessary, because it enables the DBA to view the database in two very different situations: OLTP daytime versus night (batch) time. These are periods of database life that really differ in their numbers, statistic values, and the logical interpretation of them.
