
May 11th, 2007 by mathias

Time to expand on the last post. As promised, this will be about how to get the ORA-00600 information out of the external table for the alert log. We will use the same user and setup as in that post.

Let's first review how an ORA-00600 is represented in the alert log.

   Tue Feb  1 02:09:44 2005
Errors in file /path/to/trace/file:
ORA-00600: internal error code, arguments: [1100],
[0x388F0D990], [0x382253888], [], [], [], [], []

The actual arguments do not matter; I took them from a forum post just to have an example. What matters for our purposes is the format: one line with a date, one line with a reference to a trace file, and one line specifying the error code (ORA-00600) and the arguments Oracle support will need.

Thus, we'll need to find the row with the ORA-00600 and match it up with the two previous rows. To do this we'll return to my favorite SQL keyword, the with clause. It lets us read all rows once, generate a unique id per row, and know that the rows we look at are identical wherever we refer to them, since they come from the same source. If we instead joined the actual table with itself, we could in theory end up with the rows in a different order. Oracle does not guarantee the order you retrieve data in unless you order it, and we have nothing to order on here.

    with alert as (select rownum rn
,text_line
from alert_log)
select a.text_line date_msg
,b.text_line file_msg
,c.text_line err_msg
from alert a inner join alert b
on a.rn = b.rn - 1
inner join alert c
on b.rn = c.rn - 1
where c.text_line like '%ORA-00600%';

This gives us the alert log message in three columns. We can now process it any way we want. It could be used to create a report or generate an email with your ORA-00600 messages.
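
One way to make that easier is to wrap the query in a view that a report or mail job can simply select from. This is a sketch of my own, not from the post, and the view name ora600_errors is arbitrary:

create or replace view ora600_errors as
with alert as (select rownum rn
,text_line
from alert_log)
select a.text_line date_msg
,b.text_line file_msg
,c.text_line err_msg
from alert a inner join alert b
on a.rn = b.rn - 1
inner join alert c
on b.rn = c.rn - 1
where c.text_line like '%ORA-00600%';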

However, the actual ORA-00600 output shows that we made a mistake when we created the table: it cuts off text_line at the first comma. That is not what we want; we want all data up to the end of the line. So we need to change the definition of the table to make sure we get all of it.

   conn alertlog/alertlog
drop table alert_log;
create table alert_log
(text_line varchar2(255))
organization external
(type oracle_loader
default directory bdump
access parameters(
fields terminated by x'0A')
location ('alert_ORCL.log'));

That's better: the default delimiter of comma has been replaced with linefeed (ASCII 10) for UNIX platforms. Replace it with what you need. As long as it is a character that will never occur in a text line in your alert log, this will work fine.

Now that we have a select that returns all data for an ORA-00600 in each row, it is easy to operate on that data. This model can of course be used to list other pieces of data from the alert log. One such use could be to report how much time passes between log switches. That is left as an exercise for the reader.

My next post will show how to use this to create an RSS feed for your ORA-00600 errors. That lets you know within minutes that an ORA-00600 has occurred, delivered as a separate feed to your news/blog reader.

Posted in Alert Log, Oracle, SQL

May 7th, 2007 by mathias

I'm going to discuss how to get SQL access to your alert log in this entry. I will then build on that to make it even more useful in future posts.

Browsing the alert log can be convenient if you're already on the server. However, it is often not possible, or at least not very convenient. You may find that as the developer DBA you do not have access to the production server, or that you have to log in to many different servers to check all your alert logs. Since SQL is the main tool for DBAs and one of the most critical for developers of database-driven applications, it makes sense to access the key log for database operations and issues via the same interface and be able to use your SQL tool of choice.

The way to access the alert log with SQL is to define an external table for it. So I'll use this need to write a little about how to create an external table for a text file. It's pretty easy and can often be very useful.

create directory bdump as '/path/to/bdump';
create user alertlog identified by alertlog;
alter user alertlog quota unlimited on users;
grant create session, resource to alertlog;
grant read,write on directory bdump to alertlog;

The directory path needs to be whatever you have "background_dump_dest" set to. You can look the current value up in v$parameter. To be able to do this with one script across databases, you'll want to make this more dynamic with PL/SQL, such that the directory is retrieved from the database and used to build a dynamic DDL statement.
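
A minimal sketch of that idea, assuming it is run by a DBA user that can read v$parameter and create directories (the variable name and the lack of error handling are just for illustration):

declare
  l_bdump varchar2(4000);
begin
  -- read the current background_dump_dest setting
  select value
  into l_bdump
  from v$parameter
  where name = 'background_dump_dest';
  -- build and run the DDL for the directory object
  execute immediate
    'create or replace directory bdump as ''' || l_bdump || '''';
end;
/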

Now that we have a user with access to our bdump directory, we'll create our alert_log table.

create table alert_log
(text_line varchar2(255))
organization external
(type oracle_loader
default directory bdump
location ('alert_ORCL.log'));

This is a minimal create external table statement. There are many more options to use, but we'll leave those alone in this blog post.

The create table statement is pretty straightforward; here's what it does:

  • Creates a table whose data is located outside the database
  • Loads all data in each line into the column text_line
  • Uses the normal oracle_loader (SQL*Loader) driver to access the data
  • Uses the bdump directory object to specify where the alert log file is located
  • Reads the data from the file alert_ORCL.log

That is all it takes. We can now access this table with SQL and use our normal techniques for filtering and aggregating information.

To look at the snapshot too old messages we generated in a previous blog post, we'd use this statement:

select text_line
from alert_log
where text_line like '%ORA-01555%';
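
The same table also lends itself to aggregation. As a quick example of my own, not in the original post, we could count how many ORA-01555 lines the log contains:

select count(*) ora_01555_count
from alert_log
where text_line like '%ORA-01555%';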


Posted in Alert Log, DBA, Oracle, SQL

April 29th, 2007 by mathias

Multitable insert is a feature I've read about many times, but I've never taken the time to read up on it or play with how this SQL statement works. I figure making a post about it will give me the reason I need to learn more about it.

It is not so much a new statement as an extension of the existing INSERT statement. It allows you to insert into different tables, with different conditions, from one select statement. That is, it is a variant of the common "insert into select from" form of the insert statement.

There are of course many uses for this. Splitting a table into multiple tables seems like an obvious one, but more common may be when data is added to more than one table at the same time. One such case is where a blob is placed in its own table, or where a parent table needs to be maintained while loading the dependent table with new data.
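
As a quick illustration of the simpler, unconditional form, here is a sketch where every selected row goes into both target tables. The tables orders, orders_arch and order_keys are hypothetical names of my own, not part of this test:

insert all
into orders_arch (order_id, order_date, amount)
values (order_id, order_date, amount)
into order_keys (order_id)
values (order_id)
select order_id
,order_date
,amount
from orders;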

In our test we'll read all rows from all_ind_columns and insert into the tables test_tb and test_col. The two tables will hold table names and indexed column names respectively.

Let's first set up the user and the tables.

conn system
create user multiins identified by multiins;
alter user multiins quota unlimited on users;
grant create session, resource to multiins;
conn multiins/multiins
create table test_tb as
select table_name
from user_ind_columns
where 1 = 0;
create table test_col as
select table_name, column_name
from user_ind_columns
where 1 = 0;

We now have a table test_tb that holds just a table_name and a table test_col that holds table_name and column_name. This is a traditional parent-child table relationship. The task is now to load both tables in one statement with data from all_ind_columns, so we get one row per table in test_tb and one row per indexed column in test_col.

insert all
when cn = (select min(column_name)
from all_ind_columns a
where a.table_name = tn) then
into test_tb values (tn)
when 1=1 then
into test_col values (tn, cn)
select table_name  tn
,column_name cn
from all_ind_columns;

Pretty nice. This can be used to denormalize tables or just to load several tables at the same time when the data is in one single data source. It can even be used to have one join pull the data and then put it into different tables. The number of conditions and tables is virtually unlimited; I believe the limit is 127, and I'm sure I'd hit the 64,000-byte limit for a single statement before reaching it.

The subselect used in the first condition makes sure that we only insert the table name once: column names are unique within a table, so in each table only one column has the lowest value.

You'd of course want to add primary keys and a foreign key if these tables were to be used in a real application.
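
A sketch of what that could look like, with constraint names of my own choosing and assuming the load really did produce just one row per table in test_tb:

alter table test_tb
add constraint test_tb_pk primary key (table_name);
alter table test_col
add constraint test_col_fk foreign key (table_name)
references test_tb (table_name);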

Posted in Oracle, SQL

April 9th, 2007 by mathias

To mix the last post with something that may be easier to digest, I'll make this one about one of my favorite keywords in SQL: the WITH clause. I can only think of one bad thing about it: its name. It's virtually impossible to search for information on something whose name results in over 4 billion hits on a Google search.

One reason I like it is how I learned of its existence. It was a few years ago, and my manager at the time found it during leisure reading about Oracle and asked me what I thought about it. I probably answered something stupid, like that I wasn't sure the feature even existed. We broke out the book and read about it, and I knew this was going to be an important feature for some problems. It was one of those things where you know you have no immediate need for it, but you have a new tool to solve problems with, and it will come in handy soon enough. Every DBA should be so lucky as to have a group manager who cares enough about technology to read about it in their spare time.

Let's leave memory lane and return to reality so we can look a little at how this feature works. The technical name for it is "subquery factoring", but I've never heard anyone refer to it as anything other than "the with clause".

It allows you to create one or more named result sets that you can refer to in the main select. Inline views can sometimes be used to achieve the same result, but the really cool thing with this feature is that you can refer to the named result set as if it were a table, and you can refer to it in more than one place in your query.

Let's consider a situation where you have one table with products and one with services. Both of these tables have a category id column that links to a category table. Let's assume that you want to retrieve only the products and services that are in priority 1 categories. This is easy to do with normal SQL, but WITH can probably make it more efficient and easier to maintain.

Let's first create the user and the tables we'll use in the test cases. Log on with a DBA user and execute the following commands.

drop user withtest cascade;
create user withtest identified by withtest;
alter user withtest quota unlimited on users;
grant create session, resource to withtest;
conn withtest/withtest
create table products
(prod_id     number(2)    not null
,prod_name   varchar2(20) not null
,category_id number(2)    not null);
insert into products
(prod_id, prod_name, category_id)
values(10,'Chair',20);
insert into products
(prod_id, prod_name, category_id)
values(11,'Sofa',21);
insert into products
(prod_id, prod_name, category_id)
values(12,'Bench',21);
insert into products
(prod_id, prod_name, category_id)
values(13,'Stool',22);
insert into products
(prod_id, prod_name, category_id)
values(14,'Armchair',21);
insert into products
(prod_id, prod_name, category_id)
values(15,'Ottoman',20);
create table services
(serv_id     number(2)    not null
,serv_name   varchar2(20) not null
,category_id number(2)    not null);
insert into services
(serv_id, serv_name, category_id)
values(30,'Watering',20);
insert into services
(serv_id, serv_name, category_id)
values(31,'Cooking',21);
insert into services
(serv_id, serv_name, category_id)
values(32,'Cleaning',21);
insert into services
(serv_id, serv_name, category_id)
values(33,'Shopping',22);
insert into services
(serv_id, serv_name, category_id)
values(34,'Washing',21);
insert into services
(serv_id, serv_name, category_id)
values(35,'Decorating',20);
create table categories
(category_id   number(2)    not null
,category_name varchar2(20) not null
,priority      number(1)    not null);
insert into categories
(category_id, category_name, priority)
values(20,'Unimportant',3);
insert into categories
(category_id, category_name, priority)
values(21,'Very Important',1);
insert into categories
(category_id, category_name, priority)
values(22,'A little important',2);
commit;

We now have the tables and data we need, so let's first write the SQL we would normally have used.

select prod_name
from products a inner join categories b
on a.category_id = b.category_id
and b.priority = 1
union all
select serv_name
from services a inner join categories b
on a.category_id = b.category_id
and b.priority = 1;
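
For comparison, here is a sketch of my own, not necessarily how the original would have written it, of the same query using the WITH clause. The priority 1 categories are factored out once and referred to in both branches:

with prio_categories as
(select category_id
from categories
where priority = 1)
select prod_name
from products a inner join prio_categories b
on a.category_id = b.category_id
union all
select serv_name
from services a inner join prio_categories b
on a.category_id = b.category_id;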

Posted in Oracle, SQL