At the start of the Christmas break I decided to get started with Oracle VirtualBox 4.0. To get up and running faster with an 11g database I opted to download the pre-built image (which Oracle calls an appliance) that they use during Developer Days. It has a lot of things preinstalled and ready to go.
- Oracle Enterprise Linux 5
- Oracle Database 11g Release 2 Enterprise Edition
- Oracle TimesTen In-Memory Database Cache
- Oracle XML DB
- Oracle SQL Developer
- Oracle SQL Developer Data Modeler
- Oracle Application Express 4.0
- Oracle JDeveloper
- Hands-On-Labs (accessed via the Toolbar Menu in Firefox)
This is great and a very fast way to get up and running, assuming you're not looking to practice installing the products. It worked well, except that I had to make two tweaks.
The first is optional, but it allows the Linux OS to be updated with the most critical patches. The reason it is needed is that the appliance is configured to be updated from Oracle's internal servers, which only those working inside Oracle's firewall can reach. That will of course not work for those of us not on Oracle's payroll.
To do this, change the directory to /etc/yum.repos.d and download Oracle's public yum repository file there.
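These are the commands as I recall them (a sketch; the URL is Oracle's public yum server for Enterprise Linux 5, and the steps need root access and network connectivity):

```
# download Oracle's public repository definition (run as root)
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-el5.repo
```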
Now rename the file already in the directory so it ends with something other than ".repo", to disable its use for updates, and edit the contents of the downloaded file to enable the update servers you want to use. I think the following are the ones you want to enable:
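In the downloaded file, enabling a channel means setting enabled=1 in its section. From memory the relevant sections look something like this, but verify the section names against the actual file:

```
# excerpt of /etc/yum.repos.d/public-yum-el5.repo
[el5_u5_base]
name=Enterprise Linux $releasever U5 - $basearch - base
enabled=1

[el5_latest]
name=Enterprise Linux $releasever - $basearch - latest
enabled=1
```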
Now the most critical updates *should* be available to install, both via command-line yum and through the GUI for the operating system.
The next problem I encountered was that the database didn't start up. The reason seemed to be the default local_listener used when starting: when it is not defined, the hostname is used as part of the specification, and since the hostname was not in the hosts file the database could not start up as the computer did not know the way back to itself. I'm guessing this could be solved in the VirtualBox config, but a quicker workaround for me was to update the hosts file.
Edit the /etc/hosts file and change this line:

127.0.0.1 localhost.localdomain localhost

to:

127.0.0.1 localhost.localdomain localhost dhcppc7
Where dhcppc7 is the name of the computer (i.e. the virtual machine). You can see its name by just running the command "hostname". It may always be dhcppc7 for this appliance.
With those changes the appliance gets critical updates from the public YUM-server and the database starts when the appliance is started.
Posted in Uncategorized
The afternoon started off with a presentation on the result cache in 11G. It was held by Rob van Wijk.
The DB result cache is a cache for the result set of a query or a PL/SQL block. If the same query is executed again with the same parameters, the code is not executed; the same result set is sent back.
There are two parameters that control the result cache: result_cache_max_size and result_cache_max_result. The first sets the size of the cache in bytes, and the second limits how large a share (in percent) of the cache a single result set may use. Presumably the lower of the two limits takes precedence.
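You can check the current values with a quick query; a minimal sketch (the parameter names come from the 11g documentation):

```sql
-- cache size in bytes, and the max share (in percent) one result may use
select name, value
from v$parameter
where name in ('result_cache_max_size', 'result_cache_max_result');
```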
There is a procedure (dbms_result_cache.memory_report) that reports how the memory in the cache is used.
The function dbms_result_cache.status shows if the cache is enabled or disabled.
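Both are part of the dbms_result_cache package; a minimal sketch of using them from SQL*Plus:

```sql
-- prints a usage report of the result cache memory via dbms_output
set serveroutput on
exec dbms_result_cache.memory_report

-- returns ENABLED or DISABLED (among other possible states)
select dbms_result_cache.status from dual;
```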
With the init parameter result_cache_mode you can force the database to use the cache for all statements. This *may* be useful on the session level; it is probably not useful on the system level.
It is possible to specify on a table that it should be cached. For it to work, all tables in a query block need to have caching turned on.
There are some v$ views that provide details about the statements in the cache.
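A minimal sketch of both ways to use the cache, and of inspecting it (the table and column names are made up):

```sql
-- per statement: ask for caching with a hint
select /*+ result_cache */ dept_id, sum(salary)
from emp
group by dept_id;

-- per table: annotate the table so queries against it are cached
alter table emp result_cache (mode force);

-- inspect what is currently in the cache
select type, status, name
from v$result_cache_objects;
```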
Changes to NLS parameters are tracked so a query is treated as a different query if an NLS parameter is different than it was for the cached version of the query.
The relies_on clause for PL/SQL is not needed after 11gR2; such dependencies are figured out by the database if you skip specifying them.
If you have uncommitted data in your session in a table you query, the cache will not be used as it would return invalid data.
It is a common misunderstanding that the result cache and a deterministic function are essentially the same thing. They provide similar functionality, but in some cases a deterministic function may run in a third of the time a result-cached one does. The reason is a latch used for the result cache, but this ought to only matter in practice if you call something a million times and it runs extremely fast.
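A sketch of the two approaches being compared (the function bodies are made up; the point is the keyword):

```sql
-- deterministic lets Oracle reuse results within a statement, no latch involved
create or replace function f_det(p in number) return number
deterministic
is
begin
  return p * 2;
end;
/

-- result_cache stores results in the shared result cache, protected by a latch
create or replace function f_rc(p in number) return number
result_cache
is
begin
  return p * 2;
end;
/
```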
The last presentation for the day was supposed to be the keynote for database. Somehow that was not the presentation I ended up in, as it barely mentioned the database at all. Instead I found myself at Larry's second keynote.
He began by talking about how building hardware and software pre-integrated, like the Exadata box, is the future. He said that his best friend Steve Jobs has talked about this for a long time and considers it one of the keys to Apple's success with every device they have released.
Most industries work that way: a car manufacturer controls every piece that is used to build a car; it does not sell a car for which you buy an engine from someone else.
Larry then went on to talk about all the things that have been released during OOW.
Here is the list:
- Exalogic Elastic Cloud – Cloud in a box
- Exadata X2-8 – 8 sockets per node
- Fusion apps – 5 years of design and programming
- Oracle Linux unbreakable kernel
- Solaris 11 – next gen #1 UNIX
- SPARC – next gen high performance chip
- Java roadmap – esp vector graphics 2D and 3D
- MySQL 5.5 – huge performance increases
Oracle's R&D budget is now over $4B per year and Larry states that he has no plans to stop there.
Larry made fun of Salesforce again. In his first keynote he said that Salesforce is not cloud computing since it is not elastic and it is not virtual. If too many resources are used, Salesforce starts killing the reports that users are running. Later the Salesforce CEO said something like "Larry just doesn't get it, clouds don't run in a box". Larry stated that you typically do need boxes to run software, and that when the CEO gets back to the office his technical staff will explain to him that software does indeed run on boxes. His monologue on this drew a lot of laughs and a lot of applause. The audience clearly sided with Larry Ellison and Jeff Bezos on what is cloud computing and what is not.
Larry then reviewed Exalogic and Exadata again. One interesting stat on the value was that Softbank has replaced a 60-rack Teradata system with a 3-rack Exadata. On top of being a simpler and cheaper solution, it also made their applications 2-8 times faster (depending on the application).
The new Exadata has very impressive specs. Among other things, it can fit 50 TB of data in flash with the 10x compression it achieves. Supposedly competing platforms don't get anywhere close to 10x compression. The cost of one Exadata is about 1/6 of a comparable IBM server.
The Oracle Linux Unbreakable Kernel was introduced to improve the performance and stability of Linux. Oracle will still provide and support the Red Hat compatible kernel, but since Red Hat is now on a four year old kernel, it is not working as well as it could. The Exadata machines use the new Oracle Unbreakable Linux kernel. A client can choose which kernel to boot Oracle Enterprise Linux with. The measured speed improvements for the new kernel show that it is 2-5 times faster on the operations they reported on.
The introduction of the new Fusion apps took up most of the presentation. It seems like all applications are being replaced. For example, it would seem that there is a new CRM that will replace Siebel, PeopleSoft, JD Edwards, and E-Business Suite.
It has been a massive project, possibly one of the largest ever undertaken. They have created over 5000 tables, and when these products are available to buy, the catalog of products from Oracle will have 100 new products.
Once Larry left the stage, people left in droves even though a demo of the new applications was what came next. The applications looked great to me, especially compared with what I have seen of the previous version. Even more impressive was the speed. The live demo showed very smooth operation even of things that you'd expect to be complicated.
That completed the presentations for Wednesday. Afterwards we ventured into Chinatown for dinner at Hunan Home's Restaurant. They have been selected best Chinese restaurant several times (2010 included) and everything we got there was superb. On top of that the owner had a great sense of humor and made a few jokes when passing by our tables during dinner.
Posted in Uncategorized
The afternoon had one of the sessions I knew I would enjoy the most before I even got here. It was Cary Millsap's "Thinking Clearly About Performance". The reason I anticipated a great presentation is not just Cary's knowledge in the area, but also the fact that he is fast becoming the best presenter in the Oracle area. This presentation was no different; he covered a lot of topics and still made it seem like there was no rush with anything.
Everyone ought to read the paper the presentation is based on. You can find it here.
The presentation essentially walks through 21 items that matter for performance. It also shows why DBAs need to care about the developers' area and vice versa.
One key area he brought up was "knowing matters – proving matters more". The reason this is important is that unless you can prove your theory, it is unlikely to get implemented in an organization that juggles many high priority changes.
Response time does not equal throughput even though it sounds as if it does to most people.
Customers feel the variance, not the mean. This means that it is usually not the average time something takes that is the problem, it is the fact that it sometimes takes 10 times longer that makes the user upset.
Problem analysis has to start with knowing both the current state and the goal state. If you do not know both, you cannot get started. Even though it can feel uncomfortable to ask about the goal state, you HAVE to.
Cary used sequence diagrams to show how to find out what it is that needs to be addressed. Without knowing what times each part (not just the DB) takes, it is not possible to know what part to fix first.
Profiling is used to know if the end goal is possible. If you remove everything that should not be needed (such as waiting for a latch) and the goal is still not met, then it is not just a matter of removing unnecessary things; you probably need to re-architect too.
One of the closing remarks was the classic quote on performance that "The fastest way to do something is to not do it at all". As obvious as it sounds, it is easy to forget when analyzing a problem.
The next presentation was "Going Realtime/Convergent". It was about Oracle Apps Billing and Revenue Management. The presentation was held by a person from Tusmobil. His name is something like Ognjen Antonic. The reason it is "like" is that I have no idea how to make my keyboard produce the special symbols he has above the letters in his name.
I was hoping for a little insight into the work of implementing a realtime billing solution on Oracle's applications. This being a customer case, it provided more of their business and market background. We were shown lots of impressive numbers on what they had achieved, but for a techie it unfortunately did not provide much insight into the work they performed to achieve it.
My next presentation for the day was one on Golden Gate. The name was "Golden Gate: What Is All the Fuss About" and it was held by Borkur Steingrimsson from Rittman Mead Consulting.
It was a review of what you get with the license to start with. One thing that is included is Active Data Guard, which explains part of the license cost. Golden Gate is a nice technology, but it is unfortunate that what was included with the database in previous incarnations is replaced with something that requires a separate license. Still, Golden Gate looks like it has a lot of neat features.
It has recently been certified to be used with exadata for extracting and loading.
GG can be set up to do deferred apply of changes.
The presenter thinks that it has little GUI support, but what it lacks there is made up for in scripting support. This shows that it is primarily geared towards DBAs, who traditionally prefer to do work on the command line over clicking around in a GUI.
GG handles both the initial load to a target system and later incremental loads to keep source and target synchronized.
It has support for DDL, and it can change both schema and table prefixes in DDL commands. It will however not change schema prefixes that are hardcoded into procedures or functions.
That presentation was the last one for the day. The evening was rounded off with a five mile roundtrip hike over to Mamacita in the Marina. If you like Mexican food you will be in heaven at this restaurant. This was by far the best Mexican food I have ever had. They have a Michelin star and they proved it with every dish we tried; we sampled five things where each tried hard to top the previous one.
Take a look at their site if you are in or are planning to go to SF. If you go there, you need to try their Crudo de Atun. It is an ahi tuna tartare and it is to die for. This place is what gourmet dreams are made of; I know I'll dream about their food until I get a chance to return.
Posted in Uncategorized
The morning was kicked off with a presentation about new features with interactive reports in APEX 4.0.
A demo showed how to create an icon-enabled report for a basic product list report. Every product is represented by a picture in a grid that shows a few images per "row".
A more detailed version of the icon view was also created, where each "row" had a picture and a number of details about it.
These are all integrated into the interactive report, so the only thing that differs is that the user clicks on a button in the searchbar for the interactive report. So it is essentially just a way to allow the user to select different modes for viewing the report.
Via group-by controls, the end user can set up their own way to aggregate data in the report. End users can save their settings for a report, allowing them to get the same report again later by just selecting the saved report.
Developers can also save settings for the report to give all users access to a variant of the report that many users have a need for.
It is also possible for end users to share reports. This allows an end user to save their version of a report as a "public" report. This is turned on by the developer and who has the right to save such reports is controlled the same way as access to other objects in APEX.
It is also possible to enable subscriptions to reports. The result is that the user receives an email with the report in HTML format at the interval they have chosen.
All of these help the end user be more productive, and one area that also gives more freedom to the end user is the websheet application, which was presented next by David Peake.
David likes to call it "a wiki on steroids" and after watching the presentation I would have to agree. I had actually thought websheets were just a way to give users a way to edit data in a table in a grid on a webpage. It is clearly time to stop ignoring this part of APEX.
It allows end users to create pages and add content just as any wiki does. It is also geared towards business users with little IT know-how. They can build applications that show and update data without knowing anything about programming or even databases.
David showed a demo of it where he first created a websheet application. Then, while running it, the application runs in a mode where editing allows use of some things that are usually in the builder interface for database applications. The user can edit the page with rich text controls and can enter SQL to execute to create content on the page. It is also possible for the user to create a table or add data by just cutting and pasting from Excel.
Websheets essentially turns APEX into a user created content application. It is possible to collaboratively work on a websheet application and have dynamic content in it.
Could this be used as a documentation platform, enhancing a wiki to also be able to present data from the system? Or maybe to build simple dashboards to give an easy overview of the status of things in a system?
One area where this could be really useful would be to use it for prototyping together with end users and let them modify things on their own to show what kind of solution it is they would want.
The next presentation was "Exadata Management and Optimization". I was a little late to the session, but all I got in the forty minutes I was there was a long plug for using Oracle ACS to install, monitor and manage the solution. They had a hundred or so slides of information on how they have the people, the skills, and the tool to do it right.
They seem to have a very impressive setup with an appliance that collects data and an SLA of 15 minutes to present the customer with an action plan after something fails (the machine is very fault tolerant, so a failure is usually not the same thing as an outage).
Still I came to learn about managing and optimizing exadata, not to get a sales pitch for Oracle ACS.
The only amusing thing in the presentation was during the Q&A, when the presenter was asked to summarize Oracle's best practices. He asked "All of them?" and the person asking responded very seriously that yes, he would indeed want a summary in a few words of all of the best practices. The presenter snickered and said something like "let's chat after the session to make sure you get what you need".
Posted in Uncategorized
The keynote started with some general hints on things to do at the conference and things that can help make the experience even better. After that Safra Catz gave out the excellence awards.
Next up was Ann Livermore from HP to talk about how important Oracle is for them. She threw out lots of numbers; one that stuck with me is that 40% of all Oracle licenses are for HP hardware. They have 300,000 employees. My guess is that they may stand to lose the most on the Sun purchase.
They want to help Oracle's customers flip the ratio of spending between operations and innovation.
LoadRunner is now available as a cloud service.
Next up was Dave Donatelli, who gave a boring presentation on HP's hardware offerings.
Larry was up next after a very long intro.
He spent a lot of time defining what their view of cloud computing is. In summary, it is EC2 from Amazon (who he claims popularized the term cloud) and it is not software that just runs on the net that you integrate with (that could be cloud, but not just because it is web-enabled).
The following are required for cloud in Oracle's view:
- HW and SW
- Runs variety of apps
- Public and private
Exalogic was announced. There will be many more detailed accounts of it by now on the net, so here is just a short overview.
It is meant for any and all apps.
It has 30 compute servers and 360 cores. It runs Linux and Solaris virtualized.
It is by far the fastest machine for Java. No data for this claim was presented.
WebLogic Server and JRockit have been optimized for it.
Two Exalogic servers could service all HTTP requests for Facebook's global presence. It can service 1 million HTTP requests per second.
It is claimed to improve HTTP performance 12x and messaging 5x. Again no data was presented, but it will surely be published soon.
The following specs were presented:
- 3 TB DRAM
- 1 TB SSD
- 40 Gb InfiniBand with extremely low latency
It is ideal for OLTP.
It was built with the intention of being able to drive Exadata; the InfiniBand is used to connect to an Exadata server.
It can be deployed from a 1/4 machine up to 8 Exalogic servers.
They make a big point of the fact that hardware and all software are tested together, so everyone runs the exact same config. This should let them deliver one file that patches all parts of the server.
They stated that a full machine will be $1M while a similar config from IBM will be $4.4M, and that machine does not get close to the performance of the Exalogic. If true, the success will be immediate…
It is based on Oracle VM.
Then followed a discussion on Java where it sounded as if they will give up on Red Hat support due to RH being so far behind. I'm sure this will be discussed at length in the Oracle blogosphere.
Posted in Uncategorized
Oracle's launch of database 11g will be webcasted. It starts at 10 AM ET.
Unfortunately I'll probably miss the live webcast as we're wrapping up visits and packing today to fly out early tomorrow to return back to Colorado.
I'm sure the coming week will be filled with blogs and articles about all the exciting things in 11g, and with the things we're disappointed not to get in this version.
Posted in Uncategorized
Are you using 9i/10G and still implement optimistic locking with your own column rather than through Oracle's pseudo column? So am I, but I couldn't really explain why. My main (defensive) argument would be that the system was built long before 9i. Still, it would make sense for us to change it. Let's look at a, hopefully, quick example.
Let's first create a user with two tables and add the same data to both tables.
create user rowscn identified by rowscn;
alter user rowscn default tablespace users;
alter user rowscn quota unlimited on users;
grant resource, create session to rowscn;
create table t1 (id number, total number);
insert into t1 (id, total) values(1,100);
insert into t1 (id, total) values(2,200);
insert into t1 (id, total) values(3,300);
create table t2 (id number, total number) rowdependencies;
insert into t2 (id, total) values(1,100);
insert into t2 (id, total) values(2,200);
insert into t2 (id, total) values(3,300);
Two tables with just one "small" difference. We'll soon see the difference it makes. To see how this works, we'll use a simple update of these three rows.
update t1 set total = total + 1 where id = 1;
commit;
update t1 set total = total + 1 where id = 2;
commit;
update t1 set total = total + 1 where id = 3;
commit;
Each row was updated in a different transaction as we committed between each update. If we now look at the pseudo column, ORA_ROWSCN, we will see something like:
select id, total, ora_rowscn from t1;
ID TOTAL ORA_ROWSCN
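The point of the rowdependencies clause shows up when we repeat the exercise on t2 (a sketch; the actual SCN values will differ on your system):

```sql
update t2 set total = total + 1 where id = 1;
commit;
update t2 set total = total + 1 where id = 2;
commit;
update t2 set total = total + 1 where id = 3;
commit;

-- With rowdependencies each row tracks the SCN of its own last change,
-- so the three rows show three different ORA_ROWSCN values. Without it
-- (as in t1), all rows in a block share the block-level SCN.
select id, total, ora_rowscn from t2;
```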
Posted in Uncategorized
It's time to start a blog and write down some ideas, findings, and general commentary about development with the Oracle database. This blog will be a place where I write about things related to Oracle development and performance that interests me.
I have been a DBA for longer than I really want to admit, as that leads to the conclusion that I'm a middle-aged man today. When I started in this industry, I used to feel that the middle-aged men didn't know or understand anything that had been introduced in the last ten years. I'm sure that is the case with me in some areas, but I hope my interest in databases will make the content here usable for current versions of Oracle.
My interest and passion is using Oracle database technologies to build database driven applications that perform well. That is, I'm not interested in Oracle to build applications for the database, but rather to use the database to make better and faster applications.
Initially much of what you find here will be based on Tom Kyte's latest book, Expert Oracle Database Architecture. If you want to read about Oracle, this is a great book; one of the most well-written books I've come across. Tom makes even really complex material relatively easy to understand without reducing the complexity in the areas you need to fully understand to build better applications with Oracle. I'm not planning to just take Tom's ideas and turn his book into my blog; instead I will write about things I've thought of as a result of reading his book.
Writing about interesting side effects or proving just why some concepts are really important is what I want to do on this blog. Just rewriting the concepts guide or the performance manual serves little purpose unless one is interested in writing a book. Sure, Oracle's documentation is often in need of a guide to explain why it is important or how a technique should be used. To me, that is still not a blog I'd read. For that I buy a book or search for a site that just expands a little on the text in Oracle's official documentation.
Posted in Uncategorized