“Proxy Users !” exclaimed Debbie.
“I say, that’s rather harsh don’t you think ?” came the rather startled reply from her boss.
Debbie sighed. They were in the midst of a discussion on the subject of how best to deploy database changes to multiple schemas.
“I meant”, she replied with iron patience, “that we could set up a proxy user to connect as each application owner in turn. That way, we wouldn’t have to grant those CREATE ANY privileges that get auditors so worried”.
“Oh, I see”, said Mike, who didn’t.
Not for the first time, Debbie wondered whether she had been lumbered with a less competent man as her boss simply in order to imbue this post with a semblance of social realism.
“I think”, she said, “that it’s time to move on to the techie bit.”
Debbie is right, as usual…
In order to make a change in Oracle ( or any database for that matter), you need at some point to connect to the database and run some SQL.
This is relatively straightforward if you are using the schema that is – or will be – the owner of the objects you are creating or changing.
However, this may not be possible if the account is identified externally or – in more recent releases – it’s a schema-only account.
So, what is the best way to set up and use an account to make such changes in other schemas ?
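Debbie’s suggestion translates into surprisingly little SQL. A hedged sketch – deploy_user and the hr schema are just illustrative names :

```sql
-- A dedicated deployment account with no powerful ANY privileges...
create user deploy_user identified by some_password;
grant create session to deploy_user;

-- ...which each application owner then authorises as a proxy :
alter user hr grant connect through deploy_user;

-- Now we can connect "as" HR without knowing HR's password :
-- sqlplus deploy_user[hr]/some_password@my_db
```

The session runs with HR’s privileges, whilst the audit trail still records that it was deploy_user who connected.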
At this point you may well ask yourself what – apart from gratuitous puppy pics and cheesy-snack-based puns – is the difference between that post and this.
Well, if you’re a long-time user of 11gXE and you’re looking to upgrade then you will find 18cXE a rather different proposition.
The introduction of Multitenant databases aside, 18cXE differs greatly from its predecessor in terms of its functional scope.
Whereas 11gXE was – broadly speaking – functionally equivalent to Oracle Standard Edition, the approach for 18cXE has been to shoe-horn in as many Enterprise Edition features as possible.
No doubt, this will leave you anxious to play with the new version. However, there are some “home comforts” that were present in the old version that you’ll need to configure yourself this time around.
What I’m going to go through is :
Installing 18cXE on a Red Hat compatible distro (CentOS7)
Connecting to the database and exploring the containers
Checking the TNS Listener
Manual and Automatic Startup and Shutdown of the database and listener
Setting and persisting the Oracle environment variables
Accessing Enterprise Manager Express
Installing the HR demo application in a Pluggable Database (PDB)
Configuring the firewall to allow remote access to Oracle
The steps documented here have been performed on a vanilla installation of CentOS7. As such, they should work pretty much unaltered on other distros based on, or similar to, Red Hat Enterprise Linux (RHEL) version 7.
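As a taster for the environment variables step, the gist is a handful of exports in the oracle user’s profile. A minimal sketch – the paths assume a default 18cXE installation, so adjust them if yours differs :

```shell
# Default 18cXE locations - check yours before persisting these
# (e.g. by appending them to /home/oracle/.bash_profile).
export ORACLE_SID=XE
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export PATH=$PATH:$ORACLE_HOME/bin
```

Once persisted, tools such as sqlplus and lsnrctl are on the path in every new session without any further ceremony.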
Debbie felt a shiver run down her spine. To be fair, that wasn’t much of a surprise since Lapland at this time of the year does tend to be a little chilly.
However, it wasn’t the weather that was the cause of her discomfort. Someone high up in the IT Department of her employer, The National Elf ( aka Santa’s Grotto) had decided that Continuous Integration was the way to go and had decreed that it should be used forthwith across all projects and technologies in the Company.
This included the application that Debbie was responsible for.
Written around 15 years ago, this Stock Control Application had already survived one major database upgrade but was now resolutely “stuck” on Oracle 11g.
The thing about so many modern software development techniques is that they are based on the premise that code is file-based. Of course, this is also true ( or at least, true enough) for some database objects, but tables are a little different.
You couldn’t simply “replace” a table like you could any other program as doing so would destroy any data in that table. For this reason, any changes required to tables for a mature application such as this would be applied by means of DDL ALTER statements.
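In practice, that means a change script for a mature table tends to look less like a CREATE and more like this – a hedged sketch, with an invented table and column :

```sql
-- Add a column only if it isn't already there, so the script can be
-- re-run safely by a CI pipeline (table and column names are invented).
declare
    l_count pls_integer;
begin
    select count(*)
    into l_count
    from user_tab_columns
    where table_name = 'STOCK_ITEMS'
    and column_name = 'REINDEER_FRIENDLY';

    if l_count = 0 then
        execute immediate
            'alter table stock_items add (reindeer_friendly varchar2(1))';
    end if;
end;
/
```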
Of course, there are tools around for this sort of thing. Liquibase, FlexDeploy – these were just two of the tools that Debbie had no chance of getting approval to use in the face of a bureaucracy that made the Vogon Civil Service look like it was following Extreme Programming.
If she was going to get her changes through by her Christmas Eve deadline, she would have to get creative…
Cute and fluffy he may be, but he’s got to earn his keep. He can start making himself useful by helping me with this post.
It begins one Friday afternoon when an urgent request lands on my desk with a large splat.
The requirement is that some csv files be uploaded into the Oracle 11g Database serving the UAT environment to facilitate some testing.
There are around 20 files, each with a slightly different set of attributes.
The files are currently sitting on the Red Hat Linux Server hosting the database.
I have sufficient OS permissions on the server to move them to a directory that has a corresponding database object in the UAT instance.
Nevertheless, the thought of having to knock out 20-odd external tables to read these files might leave me feeling a bit like this…
Fortunately, a certain Lee E. McMahon had the foresight to predict the potential risk to my weekend and wrote the Stream Editor (sed) program…
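To give a flavour of the weekend-saving to come, sed can turn a csv header row into the column list for an external table. The header here is invented, and GNU sed is assumed (for the \n in the replacement) :

```shell
# Hypothetical header row from one of the 20-odd csv files.
header='emp_id,first_name,last_name'

# Turn each comma-separated column name into an external table column
# definition - VARCHAR2(4000) is a lazy-but-safe catch-all for UAT data.
printf '%s\n' "$header" | sed 's/,/ VARCHAR2(4000),\n    /g; s/$/ VARCHAR2(4000)/'
# emp_id VARCHAR2(4000),
#     first_name VARCHAR2(4000),
#     last_name VARCHAR2(4000)
```

Run that over each file’s header, wrap the output in the external table boilerplate, and the 20-odd DDL scripts more or less write themselves.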
The World Cup is finally over and “It’s Coming Home !”
For quite a long time, we English laboured under the illusion that “it” was football.
Fortunately for Scots everywhere, “It” turned out to be the World Cup which, like so many international sporting competitions, was conceived in France.
Another area that is often subject to flawed assumptions is the set of privileges someone actually needs for read-only access when supporting an Oracle Application.
So, for any passing auditors who may be wondering why “read only” access to an Oracle application sometimes means Write, or even Execute on certain objects…
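To make the point concrete, a “read only” support role often ends up looking something like this – the schema, role and object names are invented :

```sql
create role app_support_ro;

-- Plain read access...
grant select on hr.employees to app_support_ro;

-- ...plus Write on a table support staff use to log their investigations...
grant insert on hr.support_notes to app_support_ro;

-- ...and even Execute on a package that merely reads diagnostic data.
grant execute on hr.diagnostics_pkg to app_support_ro;
```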
It’s true, Oracle are giving away free stuff. “Oracle ?”, I hear you say, “as in Larry’s Database Emporium and Cloud base ?” The very same.
It’s been going on for quite a while and includes relatively hidden gems such as SQLDeveloper Data Modeler.
There is some confusion around this particular tool for a couple of reasons.
When it was first released (sometime around 2009 as I recall), Data Modeler was an additional cost option. However, that didn’t last long.
At present (and for a number of years now), it is available either as a completely standalone tool, or as a fully integrated component of the SQLDeveloper IDE.
Either way, it costs exactly the same as the SQLDeveloper IDE – i.e. nothing.
I can tell you like the price. Want to take it for a spin ?
I’m going to focus here on using the integrated version of Data Modeler. This is because :
I want to use it for small-scale modelling of the type you might expect to find when using an Agile Methodology
I’m a developer and don’t want to leave the comfort of my IDE if I don’t need to
What I’m going to cover is :
Viewing a Table Relationship Diagram (TRD) for an existing database table
Creating a Logical Data Model and Entity Relationship Diagram (ERD)
Generating a physical model from a logical model
Generating DDL from a Physical Model (including some scripting tweaks to suit your needs)
Using a Reporting Schema and pre-canned SQLDeveloper Reports to explore your models
This post is about introducing the features of Data Modeler in the hope that you may find them useful.
It’s not intended as a paragon of data modelling virtue.
Come to that, it’s not intended as a definitive guide on how to use this tool. I’m no expert with Data Modeler (as you are about to find out). Fortunately, there are people out there who are.
If, after reading this, you want to explore further, then you could do worse than checking out words of Data Modeler wisdom from :
“You can’t have your cake and eat it !” This seems to be a regular refrain from the EU in the ongoing Brexit negotiations.
They also seem to be a bit intolerant of “cherry picking”.
I’ve never really understood the saying, “You can’t have your cake and eat it”.
What’s the point in having the cake unless you are going to eat it ?
Fortunately, I’m not alone in my perplexity – just ask any Brexiteer member of the British Cabinet.
For those who want to make sense of it ( the saying, not Brexit), there is a handy Wikipedia page that explains all.
When it comes to Unit Testing frameworks for PL/SQL, compromise between cake ownership and consumption is usually required.
Both utPLSQL 2.0 and ruby-plsql-spec have their good points, as well as some shortcomings.
Of course, if you want a more declarative approach to writing Unit Tests, you can always use TOAD or SQLDeveloper’s built-in tools.
Recently, a new player has arrived on the PL/SQL testing scene.
Despite its name, utPLSQL 3.0 appears to be less an evolution of utPLSQL 2.0 than a new framework all of its own.
What I’m going to do here is put utPLSQL 3.0 through its paces and see how it measures up to the other solutions I’ve looked at previously.
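To give a flavour of what’s coming, a utPLSQL 3.0 test is simply a package with some annotations in the spec. The cake_mgr package under test here is invented, but the --%suite/--%test annotations, the ut.expect matcher and ut.run are the real utPLSQL 3.0 API :

```sql
create or replace package cake_test as
    --%suite(Cake ownership and consumption)

    --%test(Eating a slice reduces the slices remaining)
    procedure eat_a_slice;
end cake_test;
/

create or replace package body cake_test as
    procedure eat_a_slice is
    begin
        cake_mgr.eat_slice;
        ut.expect(cake_mgr.slices_remaining).to_equal(7);
    end eat_a_slice;
end cake_test;
/

-- ...and running the suite is a one-liner :
set serveroutput on
exec ut.run('cake_test')
```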
Be warned, there may be crumbs…