This was going to be a simple post about creating multiple Pluggable Databases (PDBs) in Oracle 18cXE.
But you know what it’s like: you get your Lego database set out and then you spot those unusual shaped bricks… and the transparent ones… oh, that one has a wheel on… and you get a bit carried away.
What follows is a guide on how to create three concurrent PDBs on an Oracle 18cXE database. However, I have taken the opportunity to investigate other areas of the Express Edition platform and so will be making use of Advanced Compression as well as Oracle Enterprise Manager Express.
In fact, I wouldn’t be surprised if SQLDeveloper put in an appearance as well.
I’ve also included a couple of the more notable members of Oracle’s small-but-perfectly-formed ecosystem in the final design. On top of all that, I’ll be performing a magic trick to ensure that I only have to install each piece of software once, even though it may end up in multiple PDBs… Continue reading →
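In case you’ve not created a PDB by hand before, the basic incantation looks something like this (the PDB name, admin user and file paths here are invented for illustration; 18cXE allows up to three user PDBs):

```sql
-- A minimal sketch : create a PDB from the seed, open it,
-- and make sure it re-opens after a restart.
create pluggable database pdb2
    admin user pdb_admin identified by some_password
    file_name_convert = ('/opt/oracle/oradata/XE/pdbseed/',
                         '/opt/oracle/oradata/XE/pdb2/');

alter pluggable database pdb2 open;
alter pluggable database pdb2 save state;
```

Repeat (with different names and paths) for each of the three PDBs.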
As I write, various parts of the United Kingdom are still under variations on the theme of “lockdown”. Not that I’m stir crazy or anything, but I’ve now decided it’s time to explore one way of escaping from the confines of my database, at least.
Specifically I’m going to :
create an OS user on the database server
create an OS group to allow both the new user and the oracle user access to a common location
create a shell script owned by the OS user which writes to that common location
create a credential to access that user from inside the database itself
set up and run a scheduler job to execute the shell script as the new OS user
read the output file generated by the shell script from inside the database
For this exercise, I’m using my trusty 18cXE database running on CentOS… Continue reading →
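In outline, the database side of those steps looks something like this (the OS user, credential name and script path are all made up for illustration):

```sql
-- 1. Store the OS user's login details in a credential
begin
    dbms_credential.create_credential(
        credential_name => 'app_user_cred',
        username        => 'app_user',
        password        => 'some-password');
end;
/

-- 2. Define and run a job that executes the shell script as that user
begin
    dbms_scheduler.create_job(
        job_name        => 'run_shell_script',
        job_type        => 'EXECUTABLE',
        job_action      => '/u01/app/common/list_files.sh',
        credential_name => 'app_user_cred',
        enabled         => true);
end;
/
```

The script’s output file can then be read back via a directory object pointing at the common location.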
I can’t remember how long we’ve been in lockdown now but between you and me, I think my Oracle Database may be cracking up.
As Exhibit A, I present the fact that the RDBMS seems to raise the NO_DATA_FOUND exception only when it feels like it… Continue reading →
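If you’ve not run into this quirk before, it boils down to the same SELECT INTO behaving differently depending on where it’s called from (the table and function names here are invented):

```sql
-- A function containing a SELECT INTO that finds no rows...
create or replace function get_sanity_level return number is
    l_level number;
begin
    select sanity_level into l_level
    from lockdown_status
    where resident = 'MIKE'; -- no such row
    return l_level;
end;
/
-- Called from PL/SQL, this raises ORA-01403 : no data found...
--     exec dbms_output.put_line(get_sanity_level);
-- ...but called from SQL, it quietly returns NULL instead :
--     select get_sanity_level from dual;
```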
Lempel-Ziv. Sounds like something you might find on a starship. I mean, you can just imagine Scotty telling Kirk that they can’t go to warp because the Lempel-Ziv compressors are offline.
In fact, the Lempel-Ziv algorithm is what GNU’s Gzip program uses to compress files.
Fortunately, the UTL_COMPRESS package employs the same algorithm, which means that you can create archives using one tool that can be manipulated by the other… Continue reading →
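At its simplest, the database side of this looks something like the following sketch (the text being compressed is, of course, just for illustration):

```sql
-- Minimal sketch : compress some RAW data with UTL_COMPRESS.
-- Written out to a file with a .gz extension, the result should
-- be readable by gunzip.
declare
    l_src raw(2000);
    l_gz  raw(2000);
begin
    l_src := utl_raw.cast_to_raw('The Lempel-Ziv compressors are offline, Captain!');
    -- quality ranges from 1 (fastest) to 9 (best compression)
    l_gz := utl_compress.lz_compress(src => l_src, quality => 9);
    dbms_output.put_line('Compressed length : '||utl_raw.length(l_gz));
end;
/
```

There are BLOB overloads as well, which are rather more useful for files of any size.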
Two weeks into lockdown and some are looking to break the monotony by varying their diet…
For my part, I’ve decided to amuse myself by making a couple of “Top five” lists – the five best things about the lockdown and five things I’ve now finally got time to watch.
These lists are in text files somewhere on my database server and therefore provide the basis for the examples that follow.
When using Oracle, if you find yourself in a situation where you don’t have access to the Database Server file system, or would just rather use SQL to find what you’re looking for in a data set, you’re in luck.
The technique that used to be a neat way of viewing the database alert log from inside the database can now be re-purposed to look at any text file that’s located in a directory referenced by a Directory Object in the database.
What we’re going to look at here is :
Setting up an external table for reading from files
How to point the external table at specific files and directories using EXTERNAL MODIFY
How to achieve the same thing in 11gR2 databases with the help of DBMS_LOCK
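To give a flavour of the first two items, here’s a sketch of an external table over a hypothetical text file, re-pointed at a different file at query time (directory object, file and table names are all invented; EXTERNAL MODIFY requires 12.2 or later):

```sql
-- An external table that reads a text file one line per row
create table file_contents_xt (
    line varchar2(4000))
    organization external (
        type oracle_loader
        default directory my_files_dir
        access parameters (
            records delimited by newline
            nobadfile nologfile
            fields (line char(4000)))
        location ('top_five_lockdown.txt'))
    reject limit unlimited;

-- From 12.2 onwards, the file can be swapped per-query :
select line
from file_contents_xt
    external modify (location ('top_five_watch.txt'));
```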
XML, YAML, JSON. When it comes to transferring data between systems, they’ve all been flavour of the month at one time or another. However, good old CSV is still the format of choice when the batch window opens.
Not only is it simple, it adds comparatively little overhead to the size of the data file being generated.
OK – strictly speaking, CSV means “Comma-Separated Values”, but these days it’s become synonymous with delimited data in general.
Whatever separator character you use, generating CSV is considerably easier these days…unless you’re using PL/SQL in a batch (as opposed to interactive) process.
That said, Connor McDonald does have a clever way of utilising the functionality available in a command line tool such as SQL*Plus by means of a DBMS_SCHEDULER job.
If for some reason that won’t work for you, it looks like you’re going to have to hack out some custom PL/SQL to get the job done…or you could have a look at a couple of the options available in what other technologies would refer to as “the ecosystem”.
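For reference, the hand-rolled PL/SQL fallback tends to look something like this minimal UTL_FILE sketch (the directory object, file and table names are made up):

```sql
-- Write delimited rows to a file via a directory object
declare
    l_fh utl_file.file_type;
begin
    l_fh := utl_file.fopen('MY_FILES_DIR', 'top_five.csv', 'w');
    for r in (select rank, title from top_five_things order by rank) loop
        utl_file.put_line(l_fh, r.rank||','||r.title);
    end loop;
    utl_file.fclose(l_fh);
end;
/
```

Simple enough here, but quoting, escaping and error handling soon pile up, which is where the ecosystem options come in.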
What I’m going to cover is :
If he’s going to make it in politics, Teddy knows that he doesn’t have time to waste waiting for that pesky long-running job he’s just started. Oh no, he needs to use his evenings to get on with building his power base.
Fortunately, Oracle facilitates detached execution of PL/SQL blocks by means of the DBMS_SCHEDULER package. Now, I know what you’re thinking, that’s going to involve a lot of fiddly setup with schedules and windows and chains and stuff. Well, you may be pleasantly surprised… Continue reading →
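For the impatient (or politically ambitious), the fire-and-forget version is a single run-once job with no schedule, window or chain in sight (the job and procedure names below are invented):

```sql
-- Run a PL/SQL block in the background, in its own session.
-- Control returns to the caller immediately; the job drops
-- itself once it has run.
begin
    dbms_scheduler.create_job(
        job_name   => 'teddy_long_runner',
        job_type   => 'PLSQL_BLOCK',
        job_action => 'begin long_running_proc; end;',
        enabled    => true,
        auto_drop  => true,
        comments   => 'Detached execution via the scheduler');
end;
/
-- Progress can then be checked in user_scheduler_job_run_details
```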
There are a number of ways to transfer data between Oracle Databases, one of which is to use the PL/SQL Datapump API – DBMS_DATAPUMP. If you wish to avail yourself of this utility but find the syntax a bit fiddly, you always have the option of getting SQLDeveloper to do (most of) it for you. What we’re talking about here is how to persuade the SQLDeveloper DB module to :
create and execute a custom Datapump export job
do most of the work creating an import of a subset of the exported data
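To give an idea of what SQLDeveloper is saving you from, a bare-bones DBMS_DATAPUMP export looks something like this (the schema, job and file names are purely illustrative):

```sql
-- Schema-mode export of HR to the DATA_PUMP_DIR directory object
declare
    l_handle number;
begin
    l_handle := dbms_datapump.open(
        operation => 'EXPORT',
        job_mode  => 'SCHEMA',
        job_name  => 'HR_EXPORT');

    dbms_datapump.add_file(
        handle    => l_handle,
        filename  => 'hr_export.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype  => dbms_datapump.ku$_file_type_dump_file);

    dbms_datapump.add_file(
        handle    => l_handle,
        filename  => 'hr_export.log',
        directory => 'DATA_PUMP_DIR',
        filetype  => dbms_datapump.ku$_file_type_log_file);

    dbms_datapump.metadata_filter(
        handle => l_handle,
        name   => 'SCHEMA_EXPR',
        value  => q'[= 'HR']');

    dbms_datapump.start_job(l_handle);
    dbms_datapump.detach(l_handle);
end;
/
```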
Debbie felt a shiver run down her spine. To be fair, that wasn’t much of a surprise since Lapland at this time of the year does tend to be a little chilly.
However, it wasn’t the weather that was the cause of her discomfort. Someone high up in the IT Department of her employer, The National Elf ( aka Santa’s Grotto) had decided that Continuous Integration was the way to go and had decreed that it should be used forthwith across all projects and technologies in the Company.
This included the application that Debbie was responsible for.
Written around 15 years ago, this Stock Control Application had already survived one major database upgrade but was now resolutely “stuck” on Oracle 11g.
The thing about so many modern software development techniques is that they were based on the premise that code was file based. Of course, this was also true ( or at least, true enough) for some database objects, but tables were a little different.
You couldn’t simply “replace” a table like you could any other program as doing so would destroy any data in that table. For this reason, any changes required to tables for a mature application such as this would be applied by means of DDL ALTER statements.
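In other words, a structural change to a live table has to be incremental rather than a drop-and-recreate (the table and column here are invented):

```sql
-- Evolve the table in place, preserving its data...
alter table stock_items add (elf_approved varchar2(1) default 'N');
-- ...rather than dropping and re-creating it, which would not.
```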
Of course, there are tools around for this sort of thing. Liquibase, FlexDeploy – these were just two of the tools that Debbie had no chance of getting approval to use in the face of a bureaucracy that made the Vogon Civil Service look like it was following Extreme Programming.
If she was going to get her changes through by her Christmas Eve deadline, she would have to get creative…
“You can’t have your cake and eat it !” This seems to be a regular refrain from the EU in the ongoing Brexit negotiations.
They also seem to be a bit intolerant of “cherry picking”.
I’ve never really understood the saying, “You can’t have your cake and eat it”.
What’s the point in having the cake unless you are going to eat it ?
Fortunately, I’m not alone in my perplexity – just ask any Brexiteer member of the British Cabinet.
For those who want to make sense of it ( the saying, not Brexit), there is a handy Wikipedia page that explains all.
When it comes to Unit Testing frameworks for PL/SQL, compromise between cake ownership and consumption is usually required.
Both utPLSQL 2.0 and ruby-plsql-spec have their good points, as well as some shortcomings.
Of course, if you want a more declarative approach to writing Unit Tests, you can always use TOAD or SQLDeveloper’s built-in tools.
Recently, a new player has arrived on the PL/SQL testing scene.
Despite its name, utPLSQL 3.0 appears to be less an evolution of utPLSQL 2.0 than a new framework all of its own.
What I’m going to do here is put utPLSQL 3.0 through its paces and see how it measures up to the other solutions I’ve looked at previously.
Be warned, there may be crumbs…
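As a taster, a utPLSQL 3.0 test is just a package whose spec carries annotations that the framework reads (the package under test and its function are invented for this sketch):

```sql
-- A minimal utPLSQL 3 test suite
create or replace package cake_ut as
    --%suite(Cake ownership and consumption)

    --%test(You can still have your cake)
    procedure have_cake;
end cake_ut;
/
create or replace package body cake_ut as
    procedure have_cake is
    begin
        ut.expect(cake_pkg.cakes_remaining).to_equal(1);
    end have_cake;
end cake_ut;
/
-- run it with : exec ut.run('cake_ut');
```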