ather it has been happening for a long time; I just did not pay attention to how useful it had become. The tool has long been the preferred TUI/CUI (text-based user interface / command-line user interface) for SQL and PL/SQL. Not only has the tool added a lot of nice features (see my last blog post for an example), it also supports scripting, which makes it even more compelling.
Today it is also how you evolve a schema incrementally. Think of it as committing changes to a file in git: every change is checked in as a delta on top of what was already there. With the Liquibase features, SQLcl is now the tool to evolve schemas across all your databases, keeping track of which scripts have been run and which still need to run to bring a schema up to the current version. It also adds a lot of other nice features that come with the Oracle implementation. That is, however, a subject for another post.
What made me a convert and a lover of this feature was a problem I could not solve with PL/SQL for purely technical reasons. I needed to test round trips to the database. It was a test case where we just wanted to run “select * from dual” over and over, but we wanted each execution to be a full round trip to the database. Doing it in PL/SQL does not work, as the loop would sit inside a single PL/SQL block and all the calls would run in one trip to the database.
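A minimal sketch of how such a script can look. The timing helper below is plain JavaScript; the SQLcl-specific calls shown in the comment use the bindings (sqlcl, ctx) that SQLcl exposes to scripts run with its script command, and the exact pattern is an assumption about your SQLcl version.

```javascript
// Run a statement body N times and time it. Each invocation of runOnce
// is a separate execution, so inside SQLcl each one is a full round trip.
function timeRoundTrips(runOnce, iterations) {
    var start = Date.now();
    for (var i = 0; i < iterations; i++) {
        runOnce();
    }
    return Date.now() - start;
}

// Inside SQLcl (started as: script roundtrips.js) the per-call body
// would use the tool's bindings, roughly:
//
//   var elapsed = timeRoundTrips(function () {
//       sqlcl.setStmt("select * from dual;");
//       sqlcl.run();
//   }, 1000);
//   ctx.write("1000 round trips took " + elapsed + " ms\n");
```

Running the same loop as a single PL/SQL block gives you one round trip total, which is exactly why the comparison has to be scripted on the client side.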
“Does it matter? If you run close to the database it would be the same speed.” Yes, that was a theory some floated, especially those who mostly write code that runs outside the database. The tests showed that running a tight loop of that SQL inside the database was at least 10x faster.
What about “I already know it’s faster”? Maybe you do, and maybe you are right. But unless you test it and show it, you are no better than the developer who just knows that “the database is slow”. We do not really know what happens unless we look at measured data. Do you know what actually happens in the database without looking at a trace file? No, not really. How do you know how many times the application has to call the database to fetch all the rows a SQL statement returns? How much time passes between each such call?
Being able to implement that gives you a wonderful simulation tool: you can show the cost of the current solution and demonstrate that the savings you predict from AWR reports or trace files hold up in a simulation.
In addition, you can build commands you could not really create otherwise, like grabbing a second connection and letting one session inspect what happens while the other runs, or starting many threads performing work in parallel.
Yet another neat feature is building commands that feel native to SQLcl. Just as you have a command that lets you say “show user” in SQLcl, you could add a “whoami” that displays all kinds of information about your session. You could of course do it with a package in the database and run “exec whoami”, but that does not feel as much like a SQLcl command. That is even more true when you want to do something in PL/SQL that you are not allowed to do within the context of a PL/SQL block.
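A rough sketch of how such a “whoami” command could hang together. The command-matching helper is plain JavaScript; the registration shown in the comment uses SQLcl’s internal command-registry classes, whose names and signatures vary between SQLcl versions, so treat everything in that comment as an assumption rather than a guaranteed API.

```javascript
// Pure helper: decide whether an input line is our custom command.
// Accepts "whoami" with or without a trailing semicolon, any case.
function isWhoami(sql) {
    return sql.trim().toLowerCase().replace(/;$/, "").trim() === "whoami";
}

// Registration inside SQLcl (via the script command) would look roughly
// like the pattern below -- class names are assumptions about the
// internal API and may differ in your SQLcl version:
//
//   var CommandRegistry = Java.type("oracle.dbtools.raptor.newscriptcommands.CommandRegistry");
//   var CommandListener = Java.type("oracle.dbtools.raptor.newscriptcommands.CommandListener");
//   var WhoAmI = Java.extend(CommandListener, {
//       handleEvent: function (conn, ctx) {
//           if (!isWhoami(ctx.getSQL())) return false; // not ours, let SQLcl handle it
//           ctx.write("user: " + util.executeReturnOneCol("select user from dual") + "\n");
//           return true;                               // we consumed the command
//       },
//       beginEvent: function (conn, ctx) {},
//       endEvent: function (conn, ctx) {}
//   });
//   CommandRegistry.addForAllStmtsListener(WhoAmI.class);
```

Once registered, typing whoami at the SQLcl prompt would be intercepted by the listener before SQLcl tries to send it to the database, which is what makes it feel like a built-in command.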