Category: Uncategorized

December 5th, 2013 by Mathias

Time to get back into blogging. I stopped a while ago, and the reason was two-fold. As I was leaving my job at Kentor AB I wanted to avoid any concerns about what I wrote while my three-month-long "two week" notice played out. The other reason was that once I had left, life got really busy, both with work for my client and with the new adventure I had embarked on.

The new adventure is to bring the Miracle brand to Sweden. I will try to build up an Oracle database focused team here in Stockholm, based on the success Miracle A/S has had in Denmark. Yes, it is equally exciting and scary for someone who has lived their life in medium to large scale consulting practices.

However, I have always had great admiration for the guys who started Miracle and what they have achieved. Getting a chance to bring the same style and quality to my home market of Stockholm and Sweden was simply an offer too good to pass up. That is the kind of opportunity most of us will only get once or twice in a career. This one came at a time when it fit well enough with everything else that was going on. The one thing that may not be optimal is having a new house built at the same time. Actually, that makes it completely not optimal. But the phrase I keep repeating to others who wonder when the best time to get started is remains “There is no time like the present”, so I took my own advice for once.

So now the work is on getting the Oracle practice going. Or rather, it is going; it just needs a few more legs. And by legs I mean consultants.

Next up, however, is partying with our colleagues down in Copenhagen, close to the headquarters. Tomorrow evening the Christmas party kicks off. Who knows when it will end. 🙂

This blog will soon return to a more technical program, though probably with some posts on the interesting parts of starting up a company. I’m sure the next year will teach us a lot about how to do that.

About that hiring business… When is the best time to add more people to a small company, and how do we manage the risk? Well… ehhh… Yeah… That’s right, there is no best time, but there is no time like the present.

Posted in Uncategorized

June 25th, 2013 by Mathias

So calling Google Reader dead may be a bit premature, but Google’s announcement of their intention to kill their baby all but killed it. On Monday it is RIP for the Reader. I’ve gone from disbelief, to mourning the loss, to a search for a replacement that best emulates what Google Reader does.

In the end I decided not to just find a new tool providing the exact same features under a different name.

If I had to find a different tool, I might as well try to find something better. The fact is that although Google Reader was one of my favorite Google tools, I did not use it. Yes, it is a bit odd that I’m writing a post about replacing a tool I didn’t use, and I use the term “did not” very recklessly: the fact is that I could not read all the blogs I follow without it. Google Reader had, however, turned into an infrastructure piece for me. I used it to collect the posts and keep track of what I had read, but my tool of choice for reading was Reeder. Unless you are familiar with it, it may seem like I just referred to Google Reader again, but it is spelled differently. Reeder is a wonderful tool for reading blogs; it uses Google Reader to manage the blogs and lets Google Reader track which posts are read and which are not.

If Reeder is such a wonderful app, why not keep using it? At first I thought of abandoning it as Google Reader was going away, but it turns out Reeder will continue to work (or so they claim). It supports Feedbin and possibly even Feedly in the future. There has not been much talk about it on their own website or even on their Twitter account; there are lots of fans asking what is going on, but precious little from @reederapp themselves.

I started thinking about how I consume text these days. I do still surf on a computer, but mostly when looking for something specific. I prefer reading on my iPad. My workflow has actually changed such that when I find a great article or blog post, I flip it so I can read it later in Flipboard on the iPad. If the article is worth saving after I have read it, I add it to Evernote. As that is my workflow for things I do not have to read right now, why should my workflow for reading blogs be any different? After all, I read them when time allows.

After that epiphany I took a look at Flipboard, wondering if I could get a similar tool for reading blogs on the iPad. It turns out I can: they have seized the opportunity Google created, and you can now read your blogs in the tool we all love. Getting my blogs into Flipboard has made them seem so much more enjoyable. It integrates with Google Reader, so it imports all the blogs you have there and lets you follow the same blogs.

The one thing that can be confusing is that after opting to read your blogs in Flipboard, you will want to go into the settings for Google Reader in Flipboard and turn on the options to show the number of unread items, to mark posts you read in Flipboard as read, and to show only unread posts. I have them all set to ON and it works very well with the way I want to read my blogs.

For me, reading blogs on the iPad is so much more enjoyable than trying to catch up in the browser on the computer. The latter never happens, and I tend to fall far behind and have to set aside a few hours to catch up. Since moving my blog reading to Flipboard I have stayed current on all the blogs I read.

If you haven’t found a way to read your blogs yet that you really love, give Flipboard a try. I think you’re gonna love it.

Posted in Uncategorized

February 19th, 2013 by Mathias

I have hidden my previous post at the explicit request of the Oracle Security Team. If you read it or have it in your RSS, please do not forward it or talk about it online until further notice. I’m not sure it is the right thing to do, but I have promised Oracle not to publish it for now. Please help me keep that promise.

Posted in Uncategorized

October 11th, 2011 by Mathias

Sometimes you may find a need to suppress repeating values in SQL. One case is when your reporting tool does not have such a feature, or you just cannot find it fast enough. That happened to me with a report that was being converted to APEX from Oracle Reports the other week.

I could not find an option in APEX to suppress repeating values, and I did not want to make them all control breaks as that would chop up the report too much.

Let’s begin with a really simple SQL statement that lists employees and the departments they work in.

select a.dname
      ,b.ename
from dept a
inner join emp b
  on a.deptno = b.deptno
order by 1,2

This of course gives a list like this:


Suppose I want to remove the department name when it is the same as on the previous row. SQL does not have a keyword for this, and APEX does not allow for it as far as I can tell.

The first step is to add a row number to the result of the above SQL.

select c.*
      ,rownum rn
from (select a.dname
            ,b.ename
      from dept a
      inner join emp b
        on a.deptno = b.deptno
      order by 1,2) c

Now each row has a number, starting from 1 and ending with 14. We wrap this in a factored subquery (i.e. a WITH clause) so the select that follows can use the data.

with rpt as (select c.*
                   ,rownum rn
             from (select a.dname
                         ,b.ename
                   from dept a
                   inner join emp b
                     on a.deptno = b.deptno
                   order by 1,2) c)
select case
         when a.dname = b.dname then ' '
         else a.dname
       end dname
      ,a.ename
from rpt a
left outer join rpt b
  on a.rn = b.rn + 1
order by a.rn;

Here the WITH clause holds the select that creates the list with the row number, which the select that follows can reference. We have also added a self-join, so each row is joined with its predecessor in row number order.

This allows us to show data from the factored SQL, using a to display data and b to check the data on the previous row.

In this case we check the department, and when it is the same as on the previous row we display a single space. If it is not the same, we display the name of the department. Thus, the case expression performs the suppression of repeating department names.

The result of this SQL is the following report.


That is it. Suppressing isn’t too hard to do, but it requires a little bit of setup in the SQL.
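For the record, there is also a way to do this without the self-join: the LAG analytic function reads a column from the previous row directly. Here is a sketch against the same dept and emp tables; it should produce the same kind of report, though I have not compared the performance of the two versions.

```sql
-- LAG(dname) returns dname from the previous row in the order given by
-- the OVER clause. When it matches the current row's dname we print a
-- space instead, which suppresses the repeated department name.
select case
         when a.dname = lag(a.dname) over (order by a.dname, b.ename)
         then ' '
         else a.dname
       end dname
      ,b.ename
from dept a
inner join emp b
  on a.deptno = b.deptno
order by a.dname, b.ename;
```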

Posted in Uncategorized

January 1st, 2011 by Mathias

At the start of the Christmas break I decided to get started with Oracle VirtualBox 4.0. To get up and running faster with an 11g database I opted to download the pre-made image (called an appliance by Oracle) that they use during Developer Days. It has a lot of things preinstalled and ready to go.

  • Oracle Enterprise Linux 5
  • Oracle Database 11g Release 2 Enterprise Edition
  • Oracle TimesTen In-Memory Database Cache
  • Oracle XML DB
  • Oracle SQL Developer
  • Oracle SQL Developer Data Modeler
  • Oracle Application Express 4.0
  • Oracle JDeveloper
  • Hands-On-Labs (accessed via the Toolbar Menu in Firefox)

This is great and a very fast way to get up and running, assuming you're not looking to practice installing the products. It worked well, except that I had to make two tweaks.

The first is optional, but allows the Linux OS to be updated with the most critical patches. It is needed because the appliance is configured to be updated from Oracle's internal servers, which only those working inside Oracle's firewall can reach. That will of course not work for those of us not on Oracle's payroll.

To do this, change the directory to /etc/yum.repos.d and run this 


Now rename the file already in the directory so it ends with something other than ".repo" to disable it for updates, and edit the contents of the downloaded file to enable the update servers you want to use. I think the following are the ones you want to enable:

  • el5_u5_base
  • el5_addons
  • el5_oracle_addons

Now the most critical updates *should* be available to install, both via command-line yum and through the GUI for the operating system.

The next problem I encountered was that the database didn't start up. The reason seemed to be the default local_listener used at startup: it is not defined, so the hostname is used as part of the specification, and since the hostname was not in the hosts file the database could not start, as the computer did not know the way back to itself. I'm guessing this could be solved in the VirtualBox config, but a quicker workaround for me was to update the hosts file.

Edit the /etc/hosts file and change this line:   localhost.localdomain localhost

to   localhost.localdomain localhost dhcppc7

Where dhcppc7 is the name of the computer (i.e. the virtual machine). You can see its name by just running the command "hostname". It may always be dhcppc7 for this appliance.

With those changes the appliance gets critical updates from the public YUM server and the database starts when the appliance is started.


Posted in Uncategorized

September 24th, 2010 by Mathias

The afternoon started off with a presentation on the result cache in 11g. It was held by Rob van Wijk.

The DB result cache is a cache for the result set of a query or a PL/SQL function call. If the same query or call is executed again with the same parameters, the code is not executed; the cached result set is sent back instead.

There are two parameters that control the result cache.

  • result_cache_max_size
  • result_cache_max_result

result_cache_max_size is the total size of the cache in bytes, while result_cache_max_result caps how large a percentage of the cache a single result may use.

There is a procedure (DBMS_RESULT_CACHE.MEMORY_REPORT) that reports how the memory in the cache is used.

The DBMS_RESULT_CACHE.STATUS function shows if the cache is enabled or disabled.

With the init parameter result_cache_mode you can force the database to use the cache for all statements. This *may* be useful at the session level; it is probably not useful at the system level.
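As a sketch, the two ways to opt in look roughly like this; the emp table here is just the usual demo table, and MANUAL is the default for result_cache_mode, in which only hinted queries use the cache:

```sql
-- Force the result cache for every query in this session only.
alter session set result_cache_mode = FORCE;

-- Or leave the mode at MANUAL and opt in per query with a hint.
select /*+ RESULT_CACHE */ deptno, count(*) cnt
from emp
group by deptno;
```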

It is possible to specify on a table that it should be cached. For it to work, all tables in a query block need to have caching turned on.

There are some v$ views that provide details about the statements in the cache.

  • v$result_cache_statistics
  • v$result_cache_objects
  • v$result_cache_dependency
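For example, to see what is in the cache and whether the cached results are still valid, something like this sketch should work (one row per cached result or tracked dependency):

```sql
-- STATUS shows values such as Published or Invalid; NAME contains
-- the cached SQL statement or PL/SQL function identifier.
select type, status, name
from v$result_cache_objects
order by type, name;
```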

Changes to NLS parameters are tracked so a query is treated as a different query if an NLS parameter is different than it was for the cached version of the query.

The RELIES_ON clause for PL/SQL is not needed as of 11gR2; such dependencies are figured out by the database if you skip specifying them.
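A result-cached PL/SQL function is declared as in this sketch (the function name and lookup are my own illustration); on 11gR1 you would add RELIES_ON after RESULT_CACHE, while on 11gR2 the dependency on dept is tracked automatically:

```sql
create or replace function get_dname(p_deptno in number)
  return varchar2
  result_cache  -- 11gR1: add "relies_on (dept)" here
is
  l_dname dept.dname%type;
begin
  select dname
  into l_dname
  from dept
  where deptno = p_deptno;
  return l_dname;
end;
/
```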

If you have uncommitted data in your session in a table you query, the cache will not be used as it would return invalid data.

It is a common misunderstanding that the result cache and a deterministic function are essentially the same thing. They have similar functionality, but in some cases a deterministic function may run in 1/3 of the time a result-cached one does. The reason is a latch used for the result cache, but this ought to only matter in practice if you call something a million times and it runs extremely fast.

The last presentation of the day was supposed to be the keynote for the database. Somehow that was not the presentation I ended up in, as it barely mentioned the database at all. Instead I found myself at Larry's second keynote.

He began by talking about how building hardware and software pre-integrated, like the Exadata box, is the future. He said that his best friend is Steve Jobs, who has talked about this for a long time and considers it one of the keys to Apple's success with every device they have ever released.

Most industries work that way: a car manufacturer controls every piece that is used to build a car; it does not sell a car for which you buy an engine from someone else.

Larry then went on to talk about all the things that have been released during OOW.

Here is the list:

  • Exalogic Elastic Cloud – Cloud in a box
  • Exadata X2-8 – 8 sockets per node
  • Fusion apps – 5 years of design and programming
  • Oracle Linux unbreakable kernel
  • Solaris 11 – next gen #1 UNIX
  • SPARC – next gen high performance chip
  • Java roadmap – esp vector graphics 2D and 3D
  • MySQL 5.5 – huge performance increases

Oracle's R&D budget is now over $4B per year, and Larry stated that he has no plans to stop there.

Larry made fun of Salesforce again. In his first keynote he said that Salesforce is not cloud computing since it is not elastic and it is not virtual: if too many resources are used, Salesforce starts killing the reports that users are running. Later the Salesforce CEO said something like "Larry just doesn't get it, clouds don't run in a box". Larry stated that you typically do need boxes to run software, and that when the CEO gets back to the office his technical staff will explain to him that software does indeed run on boxes. His monologue on this pulled down a lot of laughs and a lot of applause. The audience clearly sided with Larry Ellison and Jeff Bezos on what is cloud computing and what is not.

Larry then reviewed Exalogic and Exadata again. One interesting stat on the value was that Softbank has replaced a 60-rack Teradata with a 3-rack Exadata. On top of being a simpler and cheaper solution, it also made their applications 2-8 times faster (depending on the application).

The new Exadata has very impressive specs. Among other things, it can fit 50 TB of data in flash with the 10x compression it achieves. Supposedly competing platforms don't get anywhere close to 10x compression. The cost of one Exadata is about 1/6 of a comparable IBM server.

Oracle's new Linux kernel was introduced to improve the performance and stability of Linux. Oracle will still provide and support the Red Hat compatible kernel, but since Red Hat is now on a four year old kernel, it is not working as well as it could. The Exadata machines use the new Oracle Unbreakable Linux kernel, and a client can choose which kernel to boot Oracle Enterprise Linux with. The measured speed improvements show that the new kernel is 2-5 times faster on the operations they reported on.

The introduction of the new Fusion apps took up most of the presentation. It seems like all applications are being replaced. For example, it would seem that there is a new CRM that will replace Siebel, PeopleSoft, JD Edwards, and E-Business Suite.

It has been a massive project, possibly one of the largest ever undertaken. They have created over 5000 tables, and when these products become available to buy, Oracle's product catalog will have 100 new products.

Once Larry left the stage, people left in droves even though a demo of the new applications came next. The applications looked great to me, especially compared with what I have seen of the previous version. Even more impressive was the speed: the live demo showed very smooth operation, even for things you'd expect to be complicated.

That completed the presentations for Wednesday. Afterwards we ventured into Chinatown for dinner at Hunan Home's Restaurant. They have been selected best Chinese restaurant several times (2010 included) and everything we had there was superb. On top of that, the owner had a great sense of humor and made a few jokes when passing our tables during dinner.

Posted in Uncategorized

September 23rd, 2010 by Mathias

The afternoon had one of the sessions I knew I would enjoy the most before I even got here: Cary Millsap's "Thinking Clearly About Performance". The reason I anticipated a great presentation is not just Cary's knowledge in the area, but also the fact that he is fast becoming the best presenter in the Oracle area. This presentation was no different; he covered a lot of topics and still made it seem like there was no rush with anything.

Everyone ought to read the paper the presentation is based on. You can find it here.

The presentation essentially walks through 21 items that matter for performance. It also shows why DBAs need to care about the developers' area and vice versa.

One key area he brought up was "knowing matters – proving matters more". The reason this is important is that unless you can prove your theory, it is unlikely to get implemented in an organization that juggles many high priority changes.

Response time does not equal throughput even though it sounds as if it does to most people.

Customers feel the variance, not the mean. This means that it is usually not the average time something takes that is the problem, it is the fact that it sometimes takes 10 times longer that makes the user upset.

Problem analysis has to start with knowing both the current state and the goal state. If you do not know both, you cannot get started. Even though it can feel uncomfortable to ask about the goal state, you HAVE to.

Cary used sequence diagrams to show how to find out what it is that needs to be addressed. Without knowing what times each part (not just the DB) takes, it is not possible to know what part to fix first.

Profiling is used to know if the end goal is possible. If you remove everything that should not be needed (such as waiting for a latch) and the goal is still not met, then it is not just a matter of removing unnecessary things; you probably need to re-architect too.

One of the closing remarks was the classic quote on performance that "The fastest way to do something is to not do it at all". As obvious as it sounds, it is easy to forget when analyzing a problem.

The next presentation was "Going realtime/convergent". It was about Oracle's Billing and Revenue Management application, and was held by a person from Tusmobil in Slovakia. His name is something like Ognjen Antonic. The reason it is "like" is that I have no idea how to make my keyboard produce the special symbols he has above the letters in his name.

I was hoping for a little insight into the work of implementing a realtime billing solution on Oracle's applications. This being a customer case, it provided more of their business and market background. We were shown lots of impressive numbers on what they had achieved, but for a techie it unfortunately did not provide much insight into the work they performed to achieve it.

My next presentation for the day was one on GoldenGate. It was named "Golden Gate: What is all the fuss about" and was held by Borkur Steingrimsson from Rittman Mead Consulting.

It started with a review of what you get with the license. One thing that is included is Active Data Guard, which explains part of the license cost. GoldenGate is a nice technology, but it is unfortunate that what was included with the database in previous incarnations has been replaced with something that requires a separate license. Still, GoldenGate looks like it has a lot of neat features.

It has recently been certified for use with Exadata for extracting and loading.

GG can be set up to do deferred apply of changes.

The presenter thinks that it has little GUI support, but what it lacks there is made up for in scripting support. This shows that it is primarily geared towards DBAs, who traditionally prefer to do work on the command line over clicking around in a GUI.

GG handles both the initial load to a target system and later incremental loads to keep source and target synchronized.

It has support for DDL, and it can change both schema and table prefixes in DDL commands. It will however not change schema prefixes that you have hardcoded inside procedures or functions.

That presentation was the last one for the day. The evening was rounded off with a five mile roundtrip hike over to Mamacita in the Marina. If you like Mexican food you will be in heaven at this restaurant. This was by far the best Mexican food I have ever had. They have a Michelin star and they proved it with every dish we tried; we sampled five things, each one trying hard to top the previous.

Take a look at their site if you are in, or are planning to go to, SF. If you go there, you need to try their Crudo de Atun: an ahi tuna tartare that is to die for. This place is what gourmet dreams are made of; I know I'll dream about their food until I get a chance to return.

Posted in Uncategorized

September 22nd, 2010 by Mathias

The morning was kicked off with a presentation about new features with interactive reports in APEX 4.0.

A demo showed how to create an icon-enabled report for a basic product list. Every product is represented by a picture in a grid that shows a few images per "row".

A more detailed version of the icon view was also created, where each "row" had a picture and a number of details about it.

These are all integrated into the interactive report, so the only thing that differs is that the user clicks a button in the search bar of the interactive report. It is essentially just a way to let the user select different modes for viewing the report.

Via group-by controls, end users can set up their own way to aggregate data in the report. End users can save their settings for a report, allowing them to get the same report again later by just selecting the saved report.

Developers can also save settings for the report to give all users access to a variant of the report that many users have a need for.

It is also possible for end users to share reports. This allows an end user to save their version of a report as a "public" report. This is turned on by the developer, and who has the right to save such reports is controlled the same way as access to other objects in APEX.

It is also possible to enable subscriptions to reports. The result is that the user receives an email with the report in HTML format at the interval they have chosen.

All of these help the end user be more productive, and another area that gives more freedom to the end user is the Websheet application, which was presented next by David Peake.

David likes to call it "a wiki on steroids", and after watching the presentation I would have to agree. I had actually thought websheets were just a way to give users a grid on a web page for editing the data in a table. It is clearly time to stop ignoring this part of APEX.

It allows end users to create pages and add content just as any wiki does. It is also geared towards business users with little IT know-how: they can build applications that show and update data without knowing anything about programming or even databases.

David showed a demo where he first created a websheet application. Then, while it was running, the application could be put into an editing mode that allows use of some things that are usually in the builder interface for database applications. The user can edit the page with rich text controls and can enter SQL to execute to create content on the page. It is also possible for the user to create a table or add data by just cutting and pasting from Excel.

Websheets essentially turn APEX into a user-created-content application. It is possible to work collaboratively on a websheet application and have dynamic content in it.

Could this be used as a documentation platform, enhancing a wiki to also be able to present data from the system? Or maybe to build simple dashboards that give an easy overview of the status of things in a system?

One area where this could be really useful would be to use it for prototyping together with end users and let them modify things on their own to show what kind of solution it is they would want.

The next presentation was "Exadata Management and Optimization". I was a little late to the session, but all I got in the forty minutes I was there was a long plug for using Oracle ACS to install, monitor, and manage the solution. They had a hundred or so slides on how they have the people, the skills, and the tools to do it right.

They seem to have a very impressive setup, with an appliance that collects data and an SLA of 15 minutes to present the customer with an action plan after something fails (the machine is very fault tolerant, so a failure is usually not the same thing as an outage).

Still, I came to learn about managing and optimizing Exadata, not to get a sales pitch for Oracle ACS.

The only amusing thing in the presentation was when, during the Q&A, the presenter was asked to summarize Oracle's best practices. He asked "All of them?" and the person asking responded very seriously that yes, he would indeed want a summary in a few words of all the best practices. The presenter snickered and said something like "let's chat after the session to make sure you get what you need".

Posted in Uncategorized

September 20th, 2010 by Mathias

The keynote started with some general hints on things to do at the conference and things that can help make the experience even better. After that Safra Catz gave out the excellence awards.
Next up was Ann Livermore of HP to talk about how important Oracle is for them. She threw out lots of numbers; one that stuck with me is that 40% of all Oracle licenses are for HP hardware. They have 300,000 employees. My guess is that they may stand to lose the most from the Sun purchase.
They want to help Oracle's customers flip the ratio of spending between operations and innovation.
LoadRunner is now available as a cloud service.
Next up was Dave Donatelli, who gave a boring presentation on HP's hardware offerings.
Larry was up next after a very long intro.
He spent a lot of time defining what their view of cloud computing is. In summary, it is EC2 from Amazon (who he claims popularized the term cloud), and it is not just software that runs on the net that you integrate with (that could be cloud, but not merely because it is web-enabled).
The following are required for cloud in Oracle's view:

  • Standard platforms
  • HW and SW
  • Runs a variety of apps
  • Public and private

Exalogic was announced. There will be many more detailed accounts of it on the net by now, so here is just a short overview.
It is meant for any and all apps.
It has 30 compute servers and 360 cores. It runs Linux and Solaris virtualized.
It is claimed to be by far the fastest machine for Java. No data for this claim was presented.
WebLogic Server and JRockit have been optimized for it.
2 Exalogic servers could service all HTTP requests for Facebook's global presence. It can service 1 million HTTP requests per second.
It is claimed to improve HTTP performance 12x and messaging 5x. Again there was no data presented, but it will surely be published soon.
The following specs were presented:

  • 3 TB DRAM
  • 30 servers
  • 360 cores
  • 40 Gb InfiniBand with extremely low latency

It is ideal for OLTP.
It was built with the intention of being able to drive Exadata; the InfiniBand is used to connect to an Exadata server.
It can be deployed from a 1/4 machine up to 8 Exalogic servers.
They make a big point of the fact that the hardware and all software are tested together, so everyone runs the same exact config. This should let them deliver one file that patches all parts of the server.
They stated that a full machine will be $1M, while a similar config from IBM will be $4.4M, and that machine does not get close to the performance of the Exalogic. If true, the success will be immediate…
It is based on Oracle VM.
Then followed a discussion on Java, where it sounded as if they will give up on Red Hat support due to RH being so far behind. I'm sure this will be discussed at length in the Oracle blogosphere.

Posted in Uncategorized

July 11th, 2007 by Mathias

Oracle's launch of Database 11g will be webcast. It starts at 10 AM ET.

Unfortunately I'll probably miss the live webcast as we're wrapping up visits and packing today to fly out early tomorrow to return back to Colorado.

I'm sure the coming week will be filled with blogs and articles about all the exciting things in 11g, and about the things we're disappointed not to get in this version.

Posted in Uncategorized