September 23rd, 2010 by mathias

The afternoon had one of the sessions I knew I would enjoy the most before I even got here. It was Cary Millsap's "Thinking Clearly About Performance". The reason I anticipated a great presentation is not just Cary's knowledge in the area, but also the fact that he is fast becoming the best presenter in the Oracle area. This presentation was no different; he covered a lot of topics and still made it seem like there was no rush with anything.

Everyone ought to read the paper the presentation is based on. You can find it here.

The presentation essentially walks through 21 items that matter for performance. It also shows why DBAs need to care about the developers' area and vice versa.

One key area he brought up was "knowing matters – proving matters more". The reason this is important is that unless you can prove your theory, it is unlikely to get implemented in an organization that juggles many high-priority changes.

Response time does not equal throughput, even though it sounds as if it does to most people. A system that completes 1,000 requests per second can still take ten seconds to answer each individual request if it works on many of them in parallel.

Customers feel the variance, not the mean. It is usually not the average time something takes that is the problem; it is the fact that it sometimes takes ten times longer that makes the user upset.
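This is easy to see in data you may already have. A query along these lines, assuming a hypothetical response_log table with one row per user interaction, shows how far the tail is from the average:

select avg(elapsed_seconds) as mean_s,
       percentile_cont(0.99) within group (order by elapsed_seconds) as p99_s
  from response_log;

If p99_s is an order of magnitude above mean_s, the mean is hiding exactly the experiences users complain about.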

Problem analysis has to start with knowing both the current state and the goal state. If you do not know both, you cannot get started. Even though it can feel uncomfortable to ask about the goal state, you HAVE to.

Cary used sequence diagrams to show how to find out what it is that needs to be addressed. Without knowing how much time each part (not just the DB) takes, it is not possible to know what part to fix first.

Profiling is used to know if the end goal is possible. If you remove everything that should not be needed (such as waiting for a latch) and the goal is still not met, then it is not just a matter of removing unnecessary things; you probably need to re-architect too.
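In Oracle, the raw data for such a profile typically comes from extended SQL trace. A minimal sketch, assuming you are allowed to trace your own session and can read the trace directory:

-- turn on extended SQL trace for the current session, including wait events and bind values
exec dbms_monitor.session_trace_enable(waits => true, binds => true);

-- run the slow business task here

-- turn tracing off again
exec dbms_monitor.session_trace_disable;

Aggregating the resulting trace file with tkprof or a profiler turns it into a response-time profile: how much of the total each component accounts for, which tells you the best case you could reach by fixing any one of them.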

One of the closing remarks was the classic quote on performance that "The fastest way to do something is to not do it at all". As obvious as it sounds, it is easy to forget when analyzing a problem.

The next presentation was "Going realtime/convergent". It was about Oracle Apps Billing and Revenue management. The presentation was held by a person from Tusmobil in Slovakia. His name is something like Ognjen Antonic. The reason it is "like" is that I have no idea how to make my keyboard produce the special symbols he has above letters in his name.

I was hoping for a little bit of insight into the work of implementing a realtime billing solution on Oracle's applications. This being a customer case, it provided more of their business and market background. We were shown lots of impressive numbers on what it had done and what they had achieved, but for a techie it unfortunately did not provide much insight into the work they performed to achieve it.

My next stop for the day was a presentation on GoldenGate. The name was "Golden Gate: What is all the fuss about" and it was held by Borkur Steingrimsson from Rittman Mead Consulting.

It started with a review of what you get with the license. One thing that is included is Active Data Guard, which explains part of the license cost. GoldenGate is a nice technology, but it is unfortunate that what was included with the database in previous incarnations is replaced with something that requires a separate license. Still, GoldenGate looks like it has a lot of neat features.

It has recently been certified for use with Exadata, both for extracting and loading.

GG can be set up to do deferred apply of changes.

The presenter thinks that it has little GUI support, but what it lacks there is made up for in scripting support. This shows that it is primarily geared towards DBAs, who traditionally prefer to do work on the command line over just clicking around in a GUI.

GG handles both the initial load to a target system and later incremental loads to keep source and target synchronized.

It has support for DDL, and it can change both schema and table prefixes on DDL commands. It will, however, not change schema prefixes that you have hardcoded into procedures or functions.
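As an illustration, a replicat parameter file along these lines would map both DML and captured DDL from one schema to another (the process, user, and schema names are made up for the example):

REPLICAT rtarget
USERID ogg_admin, PASSWORD oggpw
-- replicate DDL for the objects that are mapped below
DDL INCLUDE MAPPED
MAP sales.*, TARGET sales_copy.*;

A create table for sales.orders arriving from the source would be applied as sales_copy.orders, but a procedure body that references sales.orders by name is replayed verbatim, so the hardcoded prefix survives.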

That presentation was the last one for the day. The evening was rounded off with a five mile roundtrip hike over to Mamacita in the Marina. If you like Mexican food, you will be in heaven at this restaurant. This was by far the best Mexican food I have ever had. They have a Michelin star and they proved it with every dish we tried; we sampled five dishes, and each one tried hard to top the previous one.

Take a look at their site if you are in SF or are planning to go. If you go there, you need to try their Crudo de Atun. It is an ahi tuna tartare and it is to die for. This place has what gourmet dreams are made of; I know I'll dream about their food until I get a chance to return.


September 22nd, 2010 by mathias

The morning was kicked off with a presentation about new features for interactive reports in APEX 4.0.

A demo showed how to create an icon-enabled report for a basic product list report. Every product is represented by a picture in a grid that shows a few images per "row".

A more detailed version of the icon view was also created, where each "row" had a picture and a number of details about the product.

These are all integrated into the interactive report, so the only thing that differs is that the user clicks a button in the search bar of the interactive report. It is essentially just a way to allow the user to select different modes for viewing the report.

Via group-by controls, end users can set up their own way to aggregate data in the report. End users can save their settings for a report, which lets them get the same report again later by just selecting the saved report.

Developers can also save settings for the report to give all users access to a variant of the report that many users need.

It is also possible for end users to share reports. This allows an end user to save their version of a report as a "public" report. This is turned on by the developer, and who has the right to save such reports is controlled the same way as access to other objects in APEX.

It is also possible to enable subscriptions to reports. The result is that the user receives an email with the report in HTML format at the interval they have chosen.

All of these help the end user be more productive. One area that gives even more freedom to the end user is websheet applications, which were presented next by David Peake.

David likes to call it "Wiki on steroids", and after watching the presentation I would have to agree. I had actually thought websheets were just a way to give users a way to edit data in a table in a grid on a webpage. It is clearly time to stop ignoring this part of APEX.

It allows end users to create pages and add content just as any wiki does. It is also geared towards business users with little IT know-how. They can build applications that show and update data without knowing anything about programming or even databases.

David showed a demo where he first created a websheet application. Then, while running it, the application runs in a mode where editing allows the use of some things that are usually in the builder interface for database applications. The user can edit the page with rich text controls and can enter SQL to execute to create content on the page. It is also possible for the user to create a table or add data by just cutting and pasting from Excel.

Websheets essentially turn APEX into a user-created-content application. It is possible to work collaboratively on a websheet application and to have dynamic content in it.

Could this be used as a documentation platform, enhancing a wiki so it can also present data from the system? Or maybe to build simple dashboards that give an easy overview of the status of things in a system?

One area where this could be really useful is prototyping together with end users, letting them modify things on their own to show what kind of solution it is they would want.

The next presentation was "Exadata Management and Optimization". I was a little late to the session, but all I got in the forty minutes I was there was a long plug for using Oracle ACS to install, monitor and manage the solution. They had a hundred or so slides of information on how they have the people, the skills, and  the tool to do it right.

They seem to have a very impressive setup, with an appliance that collects data and an SLA of 15 minutes to present the customer with an action plan after something fails (the machine is very fault tolerant, so a failure is usually not the same thing as an outage).

Still, I came to learn about managing and optimizing Exadata, not to get a sales pitch for Oracle ACS.

The only amusing thing in the presentation was when, during the Q&A, the presenter was asked to summarize Oracle's best practices. He asked "All of them?" and the person asking responded very seriously that yes, he would indeed want a summary in a few words of all the best practices. The presenter snickered and said something like "let's chat after the session to make sure you get what you need".


September 20th, 2010 by mathias

The keynote started with some general hints on things to do at the conference and things that can help make the experience even better. After that Safra Catz gave out the excellence awards.
Next up was Ann Livermore of HP to talk about how important Oracle is for them. She threw out lots of numbers; one that stuck with me is that 40% of all Oracle licenses are for HP hardware. They have 300,000 employees. My guess is that they may stand to lose the most on the Sun purchase.
They want to help Oracle's customers flip the ratio of spending between operations and innovation.
LoadRunner is now available as a cloud service.
Next up was Dave Donatelli, who gave a boring presentation on HP's hardware offerings.
Larry was up next after a very long intro.
He spent a lot of time defining what their view of cloud computing is. In summary, it is EC2 from Amazon (who he claims popularized the term cloud), and it is not software that just runs on the net that you integrate with (that could be cloud, but not just because it is web-enabled).
The following are required for cloud in Oracle's view:
Standard platforms
HW and SW
Virtualized
Elastic
Runs variety of apps
Public and private
Exalogic was announced. There will be many more detailed accounts of it by now on the net, so here is just a short overview.
It is meant for any and all apps.
It has 30 compute servers and 360 cores. It runs Linux and Solaris virtualized.
It is by far the fastest machine for Java. No data for this claim was presented.
WebLogic Server and JRockit have been optimized for it.
2 Exalogic servers can service all HTTP requests for Facebook's global presence. It can service 1 million HTTP requests per second.
It is claimed to improve HTTP performance 12x and messaging 5x. Again no data was presented, but it will surely be published soon.
The following specs were presented:
3 TB DRAM
1 TB SSD
30 servers
360 cores
40 Gb/s InfiniBand with extremely low latency
It is ideal for OLTP.
It was built with the intention of being able to drive Exadata; the InfiniBand is used to connect to an Exadata server.
It can be deployed from a 1/4 machine up to 8 Exalogic servers.
They make a big point of the fact that the hardware and all software is tested together, so everyone runs the exact same config. This should let them deliver one file that patches all parts of the server.
They stated that a full machine will be $1M while a similar config from IBM will be $4.4M, and that machine does not get close to the performance of the Exalogic. If true, the success will be immediate…
It is based on Oracle VM.
Then followed a discussion on Java where it sounded as if they will give up on Red Hat support due to RH being so far behind. I'm sure this will be discussed at length in the Oracle blogosphere.


July 11th, 2007 by mathias

Oracle's launch of database 11g will be webcast. It starts at 10 AM ET.

Unfortunately I'll probably miss the live webcast as we're wrapping up visits and packing today to fly out early tomorrow to return back to Colorado.

I'm sure the coming week will be filled with blogs and articles about all the exciting things in 11g, and about the things we're disappointed not to get in this version.


May 21st, 2007 by mathias

Are you using 9i/10g and still implementing optimistic locking with your own column rather than through Oracle's pseudocolumn? So am I, but I couldn't really explain why. My main (defensive) argument would be that the system was built long before 9i. Still, it would make sense for us to change it. Let's look at a, hopefully, quick example.
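For reference, the homegrown variant usually looks something like the following, with a version column that is bumped on every change (the table and column names are made up for the example):

-- read the row and remember the version
select total, version_no from orders where id = :id;

-- write back only if nobody changed the row in the meantime
update orders
   set total = :new_total,
       version_no = version_no + 1
 where id = :id
   and version_no = :old_version_no;

If the update reports zero rows, another session got there first and the application has to re-read and retry. ORA_ROWSCN can play the role of version_no without an extra column and without any code to maintain it.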

Let's first create a user with two tables and add the same data to both tables.

conn system
create user rowscn identified by rowscn;
alter user rowscn default tablespace users;
alter user rowscn quota unlimited on users;
grant resource, create session to rowscn;
conn rowscn/rowscn
-- t1 uses the default, so the SCN is tracked per block
create table t1 (id number, total number);
insert into t1 (id, total) values (1, 100);
insert into t1 (id, total) values (2, 200);
insert into t1 (id, total) values (3, 300);
-- t2 tracks the SCN per row, at the cost of six extra bytes per row
create table t2 (id number, total number) rowdependencies;
insert into t2 (id, total) values (1, 100);
insert into t2 (id, total) values (2, 200);
insert into t2 (id, total) values (3, 300);
commit;

Two tables with just one "small" difference. We'll soon see the difference it makes. To see how this works, we'll use a simple update of these three rows.

update t1 set total = total + 1 where id = 1;
commit;
update t1 set total = total + 1 where id = 2;
commit;
update t1 set total = total + 1 where id = 3;
commit;

Each row was updated in a different transaction as we committed between each update. If we now look at the pseudocolumn, ORA_ROWSCN, we will see something like:

select id, total, ora_rowscn from t1;

        ID      TOTAL ORA_ROWSCN
---------- ---------- ----------
         1        101    1013459
         2        201    1013459
         3        301    1013459
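All three rows show the same ORA_ROWSCN (the actual number will of course vary) even though they were updated in three separate transactions. That is because t1 was created without rowdependencies, so the SCN is tracked per block rather than per row. Running the same updates against t2 shows the difference (again with illustrative values):

update t2 set total = total + 1 where id = 1;
commit;
update t2 set total = total + 1 where id = 2;
commit;
update t2 set total = total + 1 where id = 3;
commit;

select id, total, ora_rowscn from t2;

        ID      TOTAL ORA_ROWSCN
---------- ---------- ----------
         1        101    1013471
         2        201    1013473
         3        301    1013475

Each row now carries the SCN of the transaction that last touched that particular row. This per-row granularity is what makes ORA_ROWSCN usable for optimistic locking: remember the value when you read the row and compare it in the where clause of your update, just like the homegrown version column but with nothing extra to maintain.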


April 1st, 2007 by mathias

It's time to start a blog and write down some ideas, findings, and general commentary about development with the Oracle database. This blog will be a place where I write about things related to Oracle development and performance that interest me.

I have been a DBA for longer than I really want to admit, as that leads to the conclusion that I'm a middle-aged man today. When I started in this industry, I used to feel that the middle-aged men didn't know or understand anything that had been introduced in the last ten years. I'm sure that is the case with me in some areas, but I hope my interest in databases will make the content here usable for current versions of Oracle.

My interest and passion is using Oracle database technologies to build database-driven applications that perform well. That is, I'm not interested in Oracle as a way to build applications for the database, but rather in using the database to make better and faster applications.

Initially, much of what you find here will be based on Tom Kyte's latest book, Expert Oracle Database Architecture. If you want to read about Oracle, this is a great book; it is one of the most well-written books I've come across, and Tom makes even really complex material relatively easy to understand without reducing the complexity of the areas you need to fully understand to build better applications with Oracle. I'm not planning to just take Tom's ideas and turn his book into my blog; instead I will write about things I've thought of as a result of reading his book.

Writing about interesting side effects, or proving just why some concepts are really important, is what I want to do on this blog. Just rewriting the concepts guide or the performance manual serves little purpose unless one is interested in writing a book. Sure, Oracle's documentation is often in need of a guide explaining why something is important or how a technique should be used. To me, that is still not a blog I'd read; for that I buy a book or search for a site that just expands a little on the text in Oracle's official documentation.
