Category: Performance

June 12th, 2013 by mathias

Wrap-up

This is the last post in this series and I won’t introduce anything new here, but rather summarise the changes explained in the earlier posts and talk a bit about the value the solution delivers to the organisation.

Let’s first review the situation we faced before implementing the changes.

The cost of writing the log records to the database was severe: the parallel writes from many different sources introduced such contention that the logging feature had to be turned off for days at a time. This was not acceptable, but short of shutting down the whole system, which would put lives in immediate danger, it was the only option available. Even if the writes had been fast enough, moving the data was taking over twice the time available, and it was fast approaching the point where the data written in 24 hours would take more than 24 hours to move to the historical store for log data. That would of course have resulted in an ever-growing backlog, even if the data move ran 24×7. On top of that, the data took up 1.5 TB of disk space, costing a lot of money and raising concerns about our ability to move it to EXADATA.

To resolve the contention that crippled the overall system during business hours, we changed the table setup to have no primary keys, no foreign keys and no indexes. We also partitioned the tables so that we get one partition per day.
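
As a rough illustration, a daily-partitioned log table without any of those constraints could look something like the sketch below. The table and column names are hypothetical and the real tables have more columns; interval partitioning is one way to get a new partition per day automatically.

    CREATE TABLE log_entries (
      log_time   DATE           NOT NULL,  -- partition key: when the record was written
      source_id  NUMBER         NOT NULL,  -- hypothetical identifier of the writing system
      message    VARCHAR2(4000)
    )
    PARTITION BY RANGE (log_time)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))   -- one partition per day, created automatically
    (
      PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01')
    );
    -- Note: no primary key, no foreign keys and no indexes are created.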

To make the move from the operational tables to the historical tables faster, we opted to keep both in the same instance on EXADATA. This allowed us to use partition exchange to swap a partition out of the operational table and into the historical table. That takes just a second, as all it does is update metadata describing which table the partition belongs to. Note that this ultra-fast operation replaced a process that used to take around 16 hours, for which we had a 6.5-hour window, and that was only getting longer as the business grew.

Finally, to reduce the space consumed on disk we used HCC – Hybrid Columnar Compression. This is an EXADATA-only feature that compresses data such that columns with repeating values get a very good compression ratio. We went from 1.5 TB to just over 100 GB. This means that even with no purging of data, it would take us over five years to get back to the amount of storage this used to require.
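
Numbers like these come straight from the data dictionary; a query along these lines (the LOG_ naming pattern is hypothetical) is one way to check the space used per table.

    -- Space used per log table, in GB (hypothetical naming pattern).
    SELECT segment_name,
           ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS gb
      FROM dba_segments
     WHERE segment_name LIKE 'LOG\_%' ESCAPE '\'
     GROUP BY segment_name
     ORDER BY gb DESC;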

So, in summary:

  • During business hours we now use 20% of the computing power, and even less of the wall-clock time, that this used to take.
  • The time to move data to the historical store was reduced from around 16 hours to less than one second.
  • Disk space requirement was reduced from 1.5 TB to just over 100 GB.

And all of this was done without changing a single line of code. In fact, there was no rebuild, no configuration change, nothing at all required in the different systems that write to these log tables for this drastic improvement to work.

One more thing to point out is that all these changes were made without using traditional SQL. The fact that it is an RDBMS does not mean that we have to use SQL to resolve every problem; in fact, SQL is often not the best tool for the job. It is also worth noting that these kinds of optimisations cannot be done by an ORM, as that is not what they do. This is what your performance or database architect needs to do for you.

For easy lookup, here are links to the posts in this series.

  1. Introduction
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up (this post)

Posted in DBA, EXA, Oracle, Partitioning, Performance

June 5th, 2013 by mathias

Reducing storage requirements

In the last post in this series I talked about how we sped up the move of data from the operational to the historical tables, from around 16 hours down to just seconds. You can find that post here.

The last area of concern was the amount of storage this data consumed and would consume in the future. At 1.5 TB it was already a fairly large chunk of the available storage, and that raised concerns for capacity planning and for the space left on the EXADATA for other systems we planned to move there.

We set out both to estimate the maximum disk utilisation this data would reach and to see what we could do to minimise the disk space needed. There were two considerations: minimise disk utilisation while not making query times any worse. Both were of course to be achieved without adding a large load to the system, especially not during business hours.

The first attempt was to just compress one of the tables with traditional table compression. After running the test across the set of tables we worked with, we saw a compression ratio of 57%. Not bad, not bad at all. However, this was now running on an EXADATA. One of the technologies that is EXADATA-only (or, to be more technically correct, only available with Oracle-branded storage) is HCC, which stands for Hybrid Columnar Compression. I will not explain in this post how it differs from normal compression, but as the name indicates, the compression is organised around columns rather than rows as traditional compression is. This can achieve even better results; at least that is the theory, and the marketing says this is part of the magic sauce of EXADATA. Time to take it out for a spin.

After setting it up for our tables, with exactly the same content as we had used with normal compression, we got a compression ratio of 90%. That is, 90% of the storage previously needed was removed by using HCC. I tested the different options available for the compression (query high and low as well as archive high and low) and ended up choosing query high. My reasoning was that the improvement in compression ratio of query high over query low was well worth the extra processing power. I got identical results for query high and archive low: they took the same time to compress, resulted in the same size dataset and queried in the same time. I could not tell them apart in any way. Archive high, however, is a different beast. It took about four times the processing power to compress, and querying took longer and used more resources too. As this is a dataset I expect the users to want to query more and more once they see it can be done in a matter of seconds, my choice was easy: query high was clearly the best for us.
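
For anyone who wants to run a similar comparison, a sketch of the approach is below; the table and partition names are hypothetical, and Oracle's compression advisor (DBMS_COMPRESSION) can also give estimates without building full copies.

    -- Build one copy of a day's data per compression setting via direct-path
    -- CTAS, then compare the resulting segment sizes (hypothetical names).
    CREATE TABLE log_cmp_basic COMPRESS
      AS SELECT * FROM log_history PARTITION (p_20130528);
    CREATE TABLE log_cmp_qhigh COMPRESS FOR QUERY HIGH
      AS SELECT * FROM log_history PARTITION (p_20130528);
    CREATE TABLE log_cmp_ahigh COMPRESS FOR ARCHIVE HIGH
      AS SELECT * FROM log_history PARTITION (p_20130528);

    SELECT segment_name, ROUND(bytes / 1024 / 1024) AS mb
      FROM user_segments
     WHERE segment_name LIKE 'LOG_CMP%'
     ORDER BY mb;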

How do we implement it then? Setting a table to compress for query high and then running normal inserts against it does not achieve much. There are some savings, but they are marginal compared to what can be achieved. For HCC to kick in, we need direct-path writes to occur. As this data is written once and never updated, we can compress everything once the processing day is over. Thus, we set up a job that runs thirty minutes past midnight and compresses the previous day's partition. It is just one more line in the job that moves the partitions, described in the last post in this series.
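
A hedged sketch of what such a job could look like is shown below. The table name, partition naming convention and job name are all hypothetical, and the real job also performs the partition move described in that post.

    -- Compress yesterday's partition with a direct-path move, scheduled to
    -- run at 00:30 every night (hypothetical names and naming convention).
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'compress_yesterdays_logs',
        job_type        => 'PLSQL_BLOCK',
        job_action      => q'[
          BEGIN
            EXECUTE IMMEDIATE
              'ALTER TABLE log_history MOVE PARTITION p_'
              || TO_CHAR(TRUNC(SYSDATE) - 1, 'YYYYMMDD')
              || ' COMPRESS FOR QUERY HIGH';
          END;]',
        repeat_interval => 'FREQ=DAILY;BYHOUR=0;BYMINUTE=30',
        enabled         => TRUE);
    END;
    /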

The compression of one very active day takes less than two minutes. In fact, the whole job to move and compress has run in less than 15 seconds for each day's data since we took this solution live a while back. That is time well worth the 90% saving in disk consumption we achieve.

It is worth noting that while HCC is an EXADATA feature not available in most Oracle databases, traditional compression is. Some forms of it require licensing, but it is there, so while you may not get the same ratio as described in this post, you can still get a big reduction in disk space consumption with the compression methods available to you.

With this, the last piece of the puzzle fell into place and there were no concerns left with the plan for fixing the issues the organisation had with managing this log data. The next post in this series will summarise and wrap up what was achieved with the changes described here.

Posted in DBA, Oracle, Partitioning, Performance

May 29th, 2013 by mathias

Moving to history tables

In the last post I talked about how we made the actual writing of all those log records much faster. To date it has been so fast that not a single problem report has been filed. You can find that post here.

Once the data was written to the log tables, it had to be moved to the history tables. This was a process that took around 16 hours. It was never allowed to run for that long, though, as it had to be stopped before the business day started.

This move was done from an instance on EXADATA to a database on an old server. Yes, I can hear you all think “AHA! It must be the slow database link.”. That was the leading thought when I started looking at it. And yes, it sure was slow, but no, it did not have a big impact. The other area that had been tuned and tweaked over and over and over and … was the insert over the database link to the history tables. Sure enough, it was taking a long time. However, measuring it showed that it only accounted for 20% of the total time. Reducing it would save us over three hours. While that would be good, where did the rest of the time go?

It went to a place no one had suspected, nor tweaked, nor even measured. What else is new?

It was the part of the process that ran only on the EXADATA: the delete of the rows that had just been moved. Huh? How could this be? Well, it turns out that deleting data based on an IN-clause is not as fast as one would think (or at least want). The process selected a set of primary key values into a temporary table; that table was then used both to identify the rows to insert into the history table and to delete them from the operational table.

Yes, there are quite a few things in this process one could attempt to optimise. However, no matter what, it would probably still not be fast enough, and even if it ended up being, would it handle the projected growth of the business? And is there really no better way than essentially row-by-row processing?

Sure there is. Sometimes SQL is not the best or only tool at our disposal.

Everything doesn’t have to be done with SQL. ™

We had already removed the indexes, so the delete should now be faster. It was, but just barely fast enough. With that change alone we had squeezed into the seemingly unattainable window for this process. But business was growing, and within weeks we would be back to tuning and tweaking.

Back to the idea of not using SQL for everything. But first, let’s revisit the approach that led to success with the write speed: what assumptions are being made that we can question? Well… why are we writing this data over a database link to the slowest database we have in our production environment? It has always been that way, and yes, we were worried about the impact of doing this on the EXADATA: both the impact of allowing ad-hoc searches and the impact of storing all this data there. The storage concern is well founded, as the log data takes up close to 1.5 TB and the volume of logs written is increasing.

However, when we questioned this, we all agreed that these were assumed problems and assumed solutions to those problems. Based on that, a PoC was produced to show what would happen if we kept the historic log data in the same database instance on the EXADATA.

With the historic tables in the same database, we get a whole new set of tools to use. I built a PoC showing how a whole day’s volume can be moved from the operational tables (the ones the logs are written to) to the historic ones in under a second. To achieve this, I range-partitioned the tables with the time the log record was inserted as the partition key. The next part is to use a feature in the database called partition exchange.

When exchanging a partition, no data is actually moved. The partition with today’s data is, via exchange partition, moved from table A to table B, but the move only updates metadata in the database. That is to say, the only change is which table the partition belongs to. The rows in the partition remain in the exact same physical location on disk as they were from the beginning; they are not changed, not read, and not touched in any way.

This is what makes such a move so fast. Even better, it is transactionally safe: if a query started while the partition belonged to table A, it will still read that data even though the partition was moved to another table in the middle of the query. Queries on table A that start after the move will of course not see the moved data at all.
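
As a minimal sketch (hypothetical table and partition names): EXCHANGE PARTITION swaps a partition of a partitioned table with a non-partitioned table of identical shape, so a day is typically moved via an empty staging table, first out of the operational table and then into the matching empty partition of the history table.

    -- Swap yesterday's partition out of the operational table ...
    ALTER TABLE log_operational
      EXCHANGE PARTITION p_20130528 WITH TABLE log_stage
      WITHOUT VALIDATION;

    -- ... and then swap the staging table into the history table.
    -- Only data dictionary metadata changes; no rows are read or written.
    ALTER TABLE log_history
      EXCHANGE PARTITION p_20130528 WITH TABLE log_stage
      WITHOUT VALIDATION;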

Moving millions or billions of rows in under a second is something that cannot be done with SQL, no matter how much one tunes it. So again, SQL is not the only tool at your disposal.

With this we proved that the process can be fast enough. I have not discussed it here, but along the way we also showed that the ad-hoc searches were of no concern either. EXADATA smart scan handles the ad-hoc queries very well, and most of them get sub-second response times even with no indexes, on tables with over a billion rows. Smart scan is one part of it and storage indexes are another. I will not discuss those in these posts, but take my word for it: when the response times were presented, the concern was immediately forgotten.

In the next post in this series, I will discuss how we dealt with the concern over the amount of disk space we would use now and in the future if we let the historic data stay on the EXADATA.

Posted in DBA, Partitioning, Performance

May 22nd, 2013 by mathias

Writing log records

The previous post in this series introduced the problem briefly. You can find that post here.

In this post I’ll talk about the changes made to make the writing of log records fast enough. There were 50 million records written per day, each of them pretty much in its own transaction. Of course the commit activity caused problems, as did log buffer issues. Some of this could be somewhat remedied with configuration.

The big issue, though, was that the writes themselves took too much time, and too often many sessions ended up in long contention chains. Yes, it would have been great to have the luxury of redesigning the whole logging setup from the ground up. But, as is often the case, the connecting systems were implemented in such a way that redesigning was not an option. Fixing the performance had to be done without requiring code changes in the systems performing the logging. Oh, joy.

So what caused the problem then? For the inserts it was pretty straightforward: too many transactions each making one insert and one commit. This turned the indexes into hotspots where all processes wanted to write in the same spot. Hash partitioning had been introduced, which led to less contention but slower performance: as the partitions sat on different parts of the disks, the write heads had to move constantly, and that made service times worse.

What could we do to make a big improvement without touching the code? We’re not talking about just 10-20% improvement in some area here, and even more important was making the performance stable. That is, the most important thing was to ensure there were no spikes where an insert suddenly took 20 times longer than usual. The contention chains that occurred made performance spike to the point where the whole system became unusable.

The solution here turned out to be something as far from advanced technology as you can get: questioning assumptions. The first time I asked “why do we have these indexes?”, most people in the room thought I was just joking around. Eventually they realised that I was serious. After an amusing period of silence where I could see them thinking “Do we need to inform him that indexes are needed to enforce uniqueness and to support referential integrity?”, someone went ahead and did just that. OK, now we were on to a productive discussion, as that of course wasn’t what I meant. The follow-up discussion about why we needed referential integrity and uniqueness for this set of data was very enlightening for everyone. To make a long story short, they were not needed at all. They were there because they had always been there and nobody had questioned the need before.

How come we didn’t need the data to be unique? Well, this is log data; it tells us what actions have been performed by the system. If some activity were reported twice, it really wouldn’t be the end of the world. And the opposite problem, that some activity isn’t logged at all, cannot be handled by defining unique constraints; that is pure system design and nothing I could improve or worsen by removing some indexes.

Thus, the indexes were removed together with the foreign keys (referential integrity).
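
In terms of DDL the change amounts to something like the sketch below, with hypothetical table, constraint and index names:

    -- Drop the referential integrity, the primary key (and its index) and
    -- any remaining indexes on the log table (hypothetical names).
    ALTER TABLE log_entries DROP CONSTRAINT fk_log_entries_source;
    ALTER TABLE log_entries DROP PRIMARY KEY DROP INDEX;
    DROP INDEX log_entries_ix1;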

Sounds simple enough, but did it help? Did it ever! In the month since making the change, there has not been a single report of a transaction coming anywhere close to taking too long. This simple solution made the logging so fast that it is no longer a concern.

The next post in this series will discuss the solution for moving data to the history tables. This process took around 16 hours and had to become at least three times as fast. As you’ll see, moving all these rows can be done much faster than that.

Posted in DBA, Partitioning, Performance

May 15th, 2013 by mathias

Introduction

In my work I, among other things, tune and tweak solutions on EXADATA. Today I’ll write about a big improvement we achieved in a process that moves data from the operational tables to the ones where the history is stored.

This will not be a technical post. While I talk about using advanced technologies, I will not discuss code or deep details of them in this post.

And yes, when I say post, I mean a series of posts. This will be too long to be a single post. I’ll break it up into an introduction and then a post on each area of improvement.

Let’s first discuss the situation before the changes. This set of tables is logged to during the day. The log records are needed both to investigate how transactions were executed and to satisfy legal requirements. It is a highly regulated industry, and for good reason, as mistakes could put someone’s life in danger.

In this situation the systems were writing around 50 million log records per day to five tables. These tables all had a primary key based on a sequence, and referential integrity was also set up. This means that for the indexes, all processes were writing to the same place on disk, and the referential-integrity lookups were reading that same place. An attempt to remedy some of this had been made by hash partitioning the tables. The write activity during the day was intense enough that most of the logging had to be turned off, as the solution was otherwise too slow. This of course has legal as well as diagnostic implications.

What’s worse, once all that data was written it had to be moved to another database where the history is kept. This process was even slower: the estimate for moving one day’s worth of data was 16 hours. It never actually ran for that long, as it was not allowed to run during the day; it had to start after midnight and finish before 7 am. As a result the backlog built up every night, until logging was turned off for a while and the move could then catch up a little each night.

This series will have the following parts:

  1. Introduction (this post)
  2. Writing log records
  3. Moving to history tables
  4. Reducing storage requirements
  5. Wrap-up and summary

The plan is to publish one part each week. Hopefully I’ll have time to publish some more technical posts between the posts in this series.

Posted in DBA, Partitioning, Performance

October 3rd, 2010 by mathias

Yes, this post is a little out of order as it clearly didn't take place after the Thursday afternoon sessions. I missed it during the conference so I had to catch up on it later on the on demand site. I thought it was interesting enough to write up a few notes.

It was held by Tom Kyte and the subject was "What's new in database development". It would be more correct to say Oracle development than database development, as he covered just about every area that Oracle focuses on.

He started with APEX and spent a lot of time on it. The coverage APEX gets in general sessions and the number of really good APEX sessions speak volumes about how strong APEX is becoming within the Oracle development technologies.

Tom shared a little background on how APEX was built by Mike Hichwa, when Mike sat next to Tom and wrote the initial code for APEX. Anyone who has been impressed with the way bind variables are handled in APEX got their explanation for why: Tom has relentlessly been preaching about how to do it and the issues caused when it is not done well. So it all makes sense when you hear that it was Tom who wrote the initial code for bind-variable handling in APEX.

Tom went on to say that they are seeing a lot of momentum behind APEX right now. The catch phrase he used to explain why it grows fast in popularity was "Easy architecture, easy to deploy, and easy to run".

Tom then spent some time talking about the performance of APEX, as a lot of people still think it is not able to handle enterprise-level system demands.

The site apex.oracle.com, which runs all the applications people create on Oracle's hosted APEX "test and demo" site, is served by a surprisingly small computer. Its specifications are:

  • PowerEdge 1950
  • 2 dual-core Xeon 2.33 GHz
  • 32 GB RAM
  • Cost: $4,300 (in 2007)
  • Runs an 11g database

Not a very big computer to serve a site that provides support to anyone who wants to try APEX.

Tom shared some of the numbers for what apex.oracle.com is used for.

  • They had 3.5 million views (pages loaded and processed) last week.
  • Distinct users accessing during the week: 3,028
  • Total workspaces: 9,062
  • Applications: 32,776

I'd think most "enterprise" systems would have less demand than that.

Some of the biggest things on apex.oracle.com are:

  • SQL Developer updates 800K hits / week
  • Pro MED (Production system) 770K hits / week
  • Asktom 250K-1000K hits / week
  • APEX Appbuilder 238K hits / week
  • JDeveloper updates 175K hits / week

Impressive to serve all of that from a small(ish) computer.

Tom then shared some key features from the 4.0 release.

Websheets

Designed for the end user and for web-based content sharing. A solution that allows end users to stop mailing Excel documents around would be a good thing…

Dynamic Actions

Declarative client-side behavior. It is integrated in the framework and uses jQuery.

This allows using common JavaScript functions for advanced user-interface improvements, from validations to drag and drop and more.

Team Development

Allows you to manage the application development process within APEX. This makes APEX usable for large development projects. It has built-in end-user feedback: when a user has a problem they can tell you directly in the application, and it feeds into the Team Development features.

Oracle APEX Listener

Built in Java.

It is a good alternative to mod_plsql and it is certified against WLS and Glassfish.

After that long piece on APEX, Tom went through some improvements for other technologies.

SQL Developer

The data modeller is now free. It will now come with the product and will be supported with the database (just like SQL Developer itself).

It has real-time SQL monitoring. It allows you to see what part of the execution plan Oracle is currently spending time in for a SQL statement. This exists in Enterprise Manager, but this gives that insight to many more people, as EM is often used only by the DBAs.

It can now find the SQL trace file and display the contents graphically in SQL Developer. This ought to make trace files much more accessible to developers.

It has support for AWR and ASH reports and has added developer-specific things that are not available in the other AWR/ASH-aware tools.

A hierarchical profiler is now also included to allow you to see which function calls which in a run of PL/SQL code.

SQL Developer now uses PL/Scope data to show information about where variables are defined and used. This is something that has been added to the database and can always be accessed with SQL, but it is more convenient to have it hooked into the development tool.

They have also added a DBA Navigator that allows DBAs to access most things they deal with in the database. Most objects (such as extents and tablespaces) can be seen and modified.

An interface for DBMS_SCHEDULER has been added. It is supposedly a much nicer way to configure jobs than doing it by hand, which can be both tedious and complicated.

Another neat feature is that autotrace has been improved: not only can you trace a statement, you can now trace two statements and get a comparison report of the tracing so you can compare them. Since this has usually been done with Tom's "Runstats" package, I'd assume this feature has Tom's fingers all over it.

ODP.Net

It is free and provides native access to the database. It has statement caching (on the client side) and will support TimesTen in 11.2.0.2.

I'm not a .Net person at all, but it sounds as if this provides a lot of benefits both in use and in performance. If you work with .Net, you will want to take a look at all the things Oracle provides in this area.

Java

There is now a Universal Connection Pool that all technologies using connection pools can use. This is supposed to reduce the overhead caused by having several connection pools.

For SecureFiles there has apparently been a performance issue with very large files; my understanding from Tom's presentation is that it resulted in double buffering. Now there is Zero Copy, which sounded as if it would remove all buffering and thereby improve speed significantly.

Database Improvements

Tom focused on the optimizer and on developer related topics as this was a Develop keynote.

Tom showed how the optimizer could detect a three-way join that was not needed. It was a fairly complicated situation, but the optimizer can apparently completely skip accessing tables now if they are not needed for the end result.

PL/SQL

Edition-based redefinition is available in all editions of 11g. Tom thinks this is the killer feature of 11g, and says that it may be the first time the killer feature is available in all editions and with no additional licensing requirements.

It allows live changing of code (PL/SQL and views). Live changing of data can already be done with online redefinition and online actions on indexes.

The process now is as follows (a minimal sketch follows the list):

  • Create new edition
  • Test new edition of the code while users still use the old.
  • Two options for rolling out
    • Let new sessions use the new edition and allow existing sessions to keep using the old one.
    • Switch everyone over to the new when the system is "idle". Takes just seconds.
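
A hedged sketch of that flow, with a hypothetical edition name, could look like this:

    -- Create the new edition and compile the changed code in it while
    -- existing sessions keep running the old edition.
    CREATE EDITION release_2 AS CHILD OF ora$base;

    ALTER SESSION SET EDITION = release_2;
    -- recreate the editioned PL/SQL and views here, then test them

    -- Roll out: make new sessions pick up the new edition by default.
    ALTER DATABASE DEFAULT EDITION = release_2;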

That was the end of Tom's presentation. He covered a lot of technologies, even some that are not mentioned above (like Ruby). What I think is interesting is what wasn't mentioned at all.

The first one that comes to mind is JDeveloper. It has not become very popular with Java purists and it is not mentioned at all. There are some rumors of it being de-emphasized by Oracle, as how you work with it seems to be very far from how most Java projects want to organize their builds and deployments.

The next interesting omission is the technology most tightly integrated with JDeveloper. There was no mention at Oracle Develop of their key Java development framework: ADF did not get any love at all, and that is not true just for this presentation; there were very few mentions of it in any of the presentations I attended. True, I did not seek out ADF sessions, but virtually every presentation on non-APEX technologies found a way to talk about APEX, while the opposite was true for ADF. Even when you felt that someone was building up to a plug for ADF, they never went there.

The lack of love for ADF and JDeveloper may mean that they are not considered critical to Oracle's success anymore. I assume ADF was used to build the Fusion applications, which would mean the tool and product are probably not going away, but could it mean that they are primarily focused on large product development and not something that is good for custom development projects?

Personally it feels as if ADF is now essentially becoming a tool used when integrating with Oracle (fusion) Apps or Oracle SOA Suite. I'm not sure a custom development project would end up using ADF. Of course there will be exceptions to that, but they will probably be fewer and fewer.

It could also be that JDeveloper and ADF start becoming less and less of something for the customers and more and more of an internal tool at Oracle. I'm sure they have very specialized versions and techniques used in development to help the development of the Fusion applications.

On the other hand, it could just be that this wasn't the time for releasing improvements to JDeveloper or ADF. I have trouble convincing myself of that, though. The reason is that Oracle now owns Java, and if there ever was a time to show JDeveloper and ADF in their full glory, this would have been it. By not releasing improvements to what are considered problems with JDeveloper and ADF, they have probably given more momentum to the competing tools. They of course have another tool in this space now; maybe NetBeans will be the future migration path for JDeveloper and where ADF finds a new home.

Posted in APEX, DBA, OOW, Oracle, Performance, SQL

October 1st, 2010 by mathias

I attended two sessions in the afternoon. The first was "Quantifying Oracle Performance" and the second was "The X-files – Managing exadata and highly available databases". I anticipated both to be great and possibly among my favorites for the week.

Unfortunately neither met my expectations, so this is going to be a fairly short post.

The first one was "Quantifying Oracle Performance" with Craig Shallahamer. I have seen many presentations by Craig; he has always been good and his material very interesting. Even if you do not use queueing simulations at work, having seen his presentations on how queueing affects performance helps when trying to understand system performance. This presentation was intended to help you build a model showing where your solution sits today on a response-time curve, i.e. where in the famous elbow you are now and where different changes would put you.

Craig showed some parts of how to model your current performance and how to use AWR to figure out some numbers to use. However, there was virtually no data on how to model a planned change. I guess you have to implement and test that change with production-like load to get the same data, and then be able to plot it on the same graph as you created for your current solution.

I think the idea of showing non-technical users, in a graphical manner, the impact and how close to instability you currently are is very good, as is the idea of comparing possible alternative solutions with it. However, I think the participants got only the idea, and they have to do a lot of work on their own to turn this into a model they can use with their clients.

So (sorry, Craig), this did not feel like a complete presentation at this point. I do believe it could be a great one once it reaches its goal, but I don't think it is there yet.

The next one was "The X-files – Managing Exadata Database Machine and highly available databases", and this one did not merely fall short of its goal; it did not even attempt to meet it. Pure bait and switch.

To explain this better, here is the abstract:

The Oracle Exadata is a complete package of software, servers, storage, and networking designed for all data processing and data management challenges. In this session, experts will walk you through Oracle maximum availability architecture best practices for managing this critical piece of your operating infrastructure through Oracle Enterprise Manager. They will also provide additional guidance on troubleshooting the Oracle Exadata and Oracle high-availability database solutions.

I read that as a session that would talk about how I should set things up and how to achieve an HA solution that is stable and can be proactively managed. What I got was a long session about how I should fork over money to ACS (Advanced Customer Support). Yes, ACS is probably great, and big, important installations should probably have them as their dedicated Oracle support. However, knowing how to take care of your own solution is what I expected from the abstract, and there was none of that. This was instead a one-hour-long plug for ACS.

I took away two things. The first is that a lot of work has been done to make EM monitor all aspects of an Exadata box. It looks like they have a lot of interesting information. Unfortunately, access to a lot of it seems to require using ACS and the appliance they put in place to aggregate the information. The other thing is that setting up Data Guard via EM now looks really easy. Maybe it is time to start using EM for setting up DG.

There was a brief mention of a tool that can estimate the benefit of using exadata based on a SQL tuning set. It was called something like "SPA exa database machine simulator".

Posted in OOW, Oracle, Performance

September 29th, 2010 by mathias

The day starts with a presentation by Tom Kyte about "What else can you do with system and session data".

Tom starts with reviewing the history of tuning an Oracle database.

The prehistoric era (v5) required writing debug code, as that was the only way to get any information about what the code did.

The dark ages followed (v6), when Oracle introduced:

  • Counters/ratios
  • bstat/estat
  • SQL Trace
  • The first few (7?) v$-views are introduced

The renaissance era (v7) introduced a few more things:

  • The wait event instrumentation was introduced
  • Move from counters to timers
  • Statspack

The modern era (v10) introduced the tools we prefer today:

  • DB-Time tuning
  • Multiple scoping levels (AWR)
  • Always on, non-intrusive
  • Built into the infrastructure – instrumentation, ASH, AWR, ADDM, EM

The DBA_HIST_* views are where the historic data is located.

Tom continued by discussing the metrics and the time model and how to join them to get data about what the database spends its time on. However, all his presentations are available to everyone, and you're better off reading them on his site than getting a summary here.
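
As a small, hedged example of the kind of data that lives there (not taken from Tom's slides), DB time per AWR snapshot can be pulled from the time-model history like this:

    -- DB time per AWR snapshot; VALUE is in microseconds and is cumulative
    -- since instance startup, so the per-interval figure is the delta
    -- between consecutive snapshots.
    SELECT s.snap_id,
           s.end_interval_time,
           ROUND(t.value / 1e6) AS db_time_seconds
      FROM dba_hist_snapshot       s
      JOIN dba_hist_sys_time_model t
        ON  t.snap_id         = s.snap_id
        AND t.dbid            = s.dbid
        AND t.instance_number = s.instance_number
     WHERE t.stat_name = 'DB time'
     ORDER BY s.snap_id;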

As usual, Tom places his presentations and other things under files on asktom.

The next presentation was APEX debugging and tracing, with Doug Gault from Sumneva.

Debug mode is turned off at the application level by default. It needs to be enabled there before it can be used.

Debug data is now written to a table rather than dumped on the web page as before. It is viewed by clicking View Debug in the developer toolbar in the APEX interface.

Use conditional display to output things only in debug mode (using v('DEBUG')).
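
As a hedged sketch of that idea: while a page runs in debug mode the built-in DEBUG item evaluates to 'YES', so a debug-only region or report can be given a condition along these lines.

    -- Condition of type "PL/SQL Expression" on a debug-only region or report
    -- (in debug mode the built-in DEBUG item has the value 'YES'):
    v('DEBUG') = 'YES'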

Things that happen on a page after it has been rendered will not be captured; that is, things like JavaScript running on the client.

APEX_DEBUG_MESSAGE is currently undocumented, but the code shows how to use it.

Debugging can be turned on/off on the page programmatically.

Logging from your own code allows you to see which parts of a package call take the time.

Doug then showed a few different ways to collect and report on the debug data for a report.

I recommend looking at the presentation for ideas on how to use it and what can be done. Unfortunately this presentation does not seem to be available on the Sumneva website at this point. If you are interested in it, send them an email asking if they are willing to share it on their presentation page. You'd of course want to check their site first to make sure it isn't uploaded between the time I write this and the time you read it.

 

Posted in APEX, OOW, Oracle, Performance, SQL

September 20th, 2010 by mathias

I arrived in San Francisco yesterday and OOW kicked off today. The efficiency at the registration alone was impressive. The lines were very long, but it probably took less than 15 minutes to get through them.

This post will essentially be a dump of notes from the sessions I attended.

The first session was the start of the APEX symposium. It was supposed to feature Scott Spendolini and an overview of APEX 4.0. He had been delayed, so the presentation was held by Tim St. Hilaire and Doug Gault. It was a nice review focusing on user interfaces. Technology is nice, but in the end it is all about how productive the end user is.

They looked closely at the interface Netflix has for managing the movies you want to see and used their own version of it for a trouble-ticket system. There was quite a bit of discussion about the problem of getting the business involved in prioritizing, and how they built the solution to make it easy for the business side to get the right things prioritized.

The technical solution uses jQuery, which is now part of APEX out of the box. They have placed their work in a plugin that can be downloaded from apexplugin.com.

The next session was SQL Developer best practices. It went through the presenter's favorite things in SQL Developer.

It covered a lot of things, the following are the ones I feel I need to use more.

Creating code templates and adding a hotkey for activating them would eliminate rewriting similar code over and over.

There is a way to generate documentation for a schema in HTML and other formats.

It includes a function for monitoring SQL and sessions that reduces the work of querying v$session and related views to see what is going on in a system.

Unit testing is included in version 2.1. You create a repository and define tests that can be executed both from SQL Developer and from the command line (ututil).

The third session of the day was "Messed up Applications" with Cary Millsap. It was an interesting session that wasn't very technical for a Millsap presentation. He reviewed three systems that had been designed to perform poorly. Of course none of them was designed that way on purpose, but they were all built in a way where tuning would not help.

I missed the first one, which was about something he had built himself. The next was a ticket reservation system that spent over a minute bringing back all possible trips a user could go on, when the user almost always chooses the first one. The interface presented him with 2136 trips to choose from. This of course cannot be fixed without adjusting how the system interface works.

The last was a ramp anti-pattern at a company whose system ended up printing every invoice since day one. So on day one it printed all invoices for day one, on day two it printed all invoices for days one and two, and so forth. This of course soon wasted lots of paper and lots of resources.

The last presentation was about trending performance and capacity using AWR data. The presenter had reverse-engineered the SQL behind AWR and then used that to bring out data for different periods for the same execution, or for the same SQL over time. It takes some work to do, but it increases the value of AWR significantly.

After that was the keynote, but that will be in its own post.

Posted in APEX, OOW, Performance

January 9th, 2010 by mathias

The Interested Transaction List (ITL) is the structure through which Oracle keeps track of which transactions have updated which rows in a block. I wrote an article about it a while back where I show how the ITL works and how it impacts performance in certain situations. The basis of the article is a page-locking scenario I encountered. Sure, page locking is not possible in Oracle, but an ITL wait is very similar to a page lock.
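
When ITL waits do show up on hot blocks, a common mitigation (a hedged sketch, with a hypothetical table name) is to reserve more ITL slots per block:

    -- Reserve more ITL slots in each new block so concurrent transactions
    -- don't have to queue for one; existing blocks keep their old setting
    -- until the segment is rebuilt.
    ALTER TABLE orders INITRANS 10;
    ALTER TABLE orders MOVE;   -- rebuild existing blocks (indexes need a rebuild afterwards)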

You can find the article here.

Please post comments and questions here.

Posted in ITL, Oracle, Performance