Tuesday, 17 February 2009

Oracle SQL Parsing Essentials

Any developer working with Oracle must understand how SQL statements are parsed if they are to build high performance, scalable applications. It is not an exaggeration to say that a few badly written queries can have a devastating effect on the overall performance of the database. This is not just limited to queries that retrieve data inefficiently but also applies to SQL statements that are repeatedly parsed unnecessarily.

When an SQL statement is submitted to Oracle, one of three things will happen: a hard parse, a soft parse or no parse at all.

Hard Parse

The first time an SQL statement is executed, Oracle must do a hard parse. It must perform all of the following steps.

   1. Parse the statement.
   2. Determine the optimal execution plan.
   3. Execute the statement.

The main distinction between a hard and a soft parse is that a hard parse must perform step two: query optimization.

During optimization, the cost based optimizer (CBO) attempts to determine the best way to access the data. The CBO uses complex algorithms based on system and table statistics to generate many (possibly thousands) of execution plans, each of which is assigned a numeric cost. The plan with the lowest cost is selected for use.

It is important to understand that optimization is a very expensive operation. It is a resource hungry, CPU intensive process that can take longer to run than it does to retrieve the data.

We can see an example of this with a TKPROF report on a simple SQL command.
select * 
from
all_objects where object_id = 115


call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  --------
Parse        1      0.14       0.13          0          0          0         0
Execute      1      0.00       0.00          0          0          0         0
Fetch        1      0.00       0.00          0         12          0         1
------- ------  --------  ---------  ---------  ---------  ---------  --------
total        3      0.14       0.13          0         12          0         1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
The parsing operation took over a tenth of a second, whereas the time to fetch the data didn't even register because it was so quick.

But there is another issue. During optimization, Oracle will lock areas of memory using a device called a latch. Latches protect shared memory from being updated by multiple sessions at the same time. The longer the latches are held, the longer other sessions are queued, waiting for their turn to access the memory segment. This will ultimately inhibit scalability.

Soft Parse

Hard parses should be avoided whenever possible if we are to provide a scalable solution. Fortunately, once the difficult task of producing the execution plan has been done, Oracle will automatically cache it in a memory area called the shared pool. Next time an identical query is executed, Oracle will retrieve the execution plan produced by the original query and re-use it. This is known as a soft parse.

Now Oracle only needs to do the following.

   1. Parse the statement.
   2. Execute the statement.

If we execute the same SQL statement that we ran previously, the TKPROF output looks like this.
select * 
from
all_objects where object_id = 115


call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  --------
Parse        1      0.00       0.00          0          0          0         0
Execute      1      0.00       0.00          0          0          0         0
Fetch        1      0.00       0.00          0         12          0         1
------- ------  --------  ---------  ---------  ---------  ---------  --------
total        3      0.00       0.00          0         12          0         1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
This time we did a soft parse. We re-used the execution plan which was produced the first time we ran the query and no measurable time was spent parsing.

So Oracle takes care of caching the execution plan automatically for us. However, care must be taken to ensure that the appropriate SQL statement can be found in the cache. Consider the following Java method which returns an account name from an account id.
public String getAccountName(Connection con,
                             int accId) throws SQLException {
    String accName = null;

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery(
        "SELECT acc_name FROM acc WHERE acc_id = " + accId);

    if (rs.next()) {
        accName = rs.getString("acc_name");
    }
    stmt.close();

    return accName;
}
Earlier I stated that Oracle will do a soft parse if an identical query has been executed before. The problem here is that the SQL produced by this method is unlikely to be identical from one invocation to the next. For example, if we enquire on accounts 100, 200 and 500 we get the following SQL.
SELECT acc_name FROM acc WHERE acc_id = 100;
SELECT acc_name FROM acc WHERE acc_id = 200;
SELECT acc_name FROM acc WHERE acc_id = 500;
Only the literal values are different in each case but, as far as Oracle is concerned, these are three separate statements which means three hard parses. We need Oracle to perform a single hard parse for the first statement and soft parses for subsequent executions.
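To see why identical text matters, here is a toy sketch in plain Java. A HashSet of SQL strings stands in for the shared pool, which matches statements on their exact text; this is only an analogy, not how Oracle is implemented.

```java
import java.util.HashSet;
import java.util.Set;

public class SharedPoolAnalogy {
    public static void main(String[] args) {
        // The shared pool matches statements on their exact text, so a set
        // of SQL strings is a very rough stand-in for it here.
        Set<String> sharedPool = new HashSet<>();

        // Literal SQL: every account id produces a different statement text,
        // so each execution would miss the cache (a hard parse).
        for (int accId : new int[] {100, 200, 500}) {
            sharedPool.add("SELECT acc_name FROM acc WHERE acc_id = " + accId);
        }
        System.out.println("Distinct literal statements: " + sharedPool.size());

        // Bind-variable SQL: the text is identical on every call, so only
        // the first execution would miss the cache.
        sharedPool.clear();
        for (int accId : new int[] {100, 200, 500}) {
            sharedPool.add("SELECT acc_name FROM acc WHERE acc_id = ?");
        }
        System.out.println("Distinct bind statements: " + sharedPool.size());
    }
}
```

Three literal queries mean three distinct cache entries (three hard parses), while the parameterized text stays at one.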

Bind Variables

The solution is to use bind variables with a PreparedStatement. In standard JDBC, the bind placeholder is a question mark, which the driver turns into a positional bind variable.
public String getAccountName(Connection con,
                             int accId) throws SQLException {
    String accName = null;

    PreparedStatement pstmt = con.prepareStatement(
        "SELECT acc_name FROM acc WHERE acc_id = ?");
    pstmt.setInt(1, accId);
    ResultSet rs = pstmt.executeQuery();

    if (rs.next()) {
        accName = rs.getString("acc_name");
    }
    pstmt.close();

    return accName;
}
Now every invocation of this method will produce the same SQL statement.
SELECT acc_name FROM acc WHERE acc_id = ?
Oracle will do a hard parse the first time the query is submitted. This is unavoidable. However, subsequent executions will read the cached execution plan from the shared pool, bind the new values and execute the query. The SQL is soft parsed and the query optimization stage is avoided.

If you wish to produce a well-behaved, high performance, scalable application then the importance of bind variables cannot be overstated. It is simply one of the most important concepts a Java programmer must understand when working with Oracle.

Later we will see the dramatic difference in performance between hard and soft parsing.

A word about CURSOR_SHARING

Oracle provides a feature which enables auto-binding on SQL statements. Setting the CURSOR_SHARING initialization parameter to FORCE instructs Oracle to dynamically replace literals in an SQL statement with bind variables. The original literal values are then bound at execution time, just as if bind variables had been used in the client program. All this happens transparently in the database itself.

For example
SELECT acc_name FROM acc WHERE acc_id = 100;
becomes
SELECT acc_name FROM acc WHERE acc_id = :SYS_B_0;
This may seem like a silver bullet solution but unfortunately results can be unpredictable and side effects may even degrade the performance of certain queries.

This feature should only ever be considered in an emergency, when changing the application to use bind variables is not an option. It is no substitute for carefully written and well tested SQL that uses PreparedStatements and bind variables from the outset.
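For illustration only, the literal replacement performed by CURSOR_SHARING=FORCE can be approximated with a regular expression. This sketch handles only numeric literals and is a deliberate simplification; Oracle's real rewrite also covers string literals and many edge cases.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CursorSharingSketch {
    // Replace each numeric literal with a generated bind name, roughly what
    // CURSOR_SHARING=FORCE does inside the database.
    static String forceBinds(String sql) {
        Matcher m = Pattern.compile("\\b\\d+\\b").matcher(sql);
        StringBuffer out = new StringBuffer();
        int n = 0;
        while (m.find()) {
            m.appendReplacement(out, ":SYS_B_" + n++);
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(forceBinds("SELECT acc_name FROM acc WHERE acc_id = 100"));
        // SELECT acc_name FROM acc WHERE acc_id = :SYS_B_0
    }
}
```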

Statement Parsing

So soft parsing improves performance and scalability. But what does a soft parse actually involve? Broadly speaking, the parsing stage consists of syntax checking, semantic analysis and a lookup in the shared pool. Let's look at each of these in turn.

Syntax Checking

Is it valid SQL as defined by the Oracle SQL reference documentation?
SELECT acc_name RFOM acc;

SQL Error: ORA-00923: FROM keyword not found where expected
The FROM keyword has been misspelt.

Semantic analysis

Is the syntactically correct SQL statement valid in terms of the objects being accessed, permissions, column ambiguities and so on?
SELECT acc_name FROM cust.acc;

SQL Error: ORA-00942: table or view does not exist
Either ACC does not exist in the CUST schema or the user does not have the necessary privileges to access it.

Shared Pool Lookup

At this point we have a valid SQL statement and Oracle will attempt to find an existing execution plan in the shared pool.

However, it is not sufficient to just locate a statement in the pool with identical text. An existing statement may refer to different objects, in different schemas, or the environment in which it was executed may be incompatible, making the execution plan inappropriate for our query. Oracle must resolve these issues which takes time.

The point I am trying to make here is that, while a soft parse is far more efficient than a hard parse, it is not without cost. So what is even more efficient than doing a soft parse? Doing no parse at all.

No Parse

The problem with the getAccountName method shown earlier is that, each time this method is called, the SQL statement is compiled (when prepareStatement is executed) and each time the statement is compiled, a soft parse is performed. If, however, the PreparedStatement was created outside of the method and then passed in as a parameter, the statement could be parsed once and executed many times.
public void testGetAccountName(Connection con) throws SQLException {

    PreparedStatement pstmt = con.prepareStatement(
        "SELECT acc_name FROM acc WHERE acc_id = ?");

    for (int i = 1; i <= 10; i++) {
        String accName = getAccountName(pstmt, i);
        System.out.println("Account Name: " + accName);
    }

    pstmt.close();
}

public String getAccountName(PreparedStatement pstmt,
                             int accId) throws SQLException {
    String accName = null;

    pstmt.setInt(1, accId);
    ResultSet rs = pstmt.executeQuery();

    if (rs.next()) {
        accName = rs.getString("acc_name");
    }

    return accName;
}
The PreparedStatement effectively maintains a direct reference to the memory structure which represents our statement. The statement parsing and the shared pool lookup performed by a soft parse are no longer necessary. All Oracle needs to do is bind the new values to the existing structure and execute the query.

Statement Caching

Parsing an SQL statement once and executing it many times should be our goal, but passing around a PreparedStatement is not always convenient or even possible. Code can become cluttered if every method in the call stack is forced to include the PreparedStatement object.

One solution is to use implicit statement caching. Implicit statement caching improves performance by caching frequently used executable statements in the JDBC driver. This allows us to keep our original getAccountName implementation which creates and closes the PreparedStatement within the method.

By setting the following properties on the data source, we allow up to 200 statements to be cached for each connection to the database.
public Connection getConnection() throws SQLException {

    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@localhost:1521:test01");

    ods.setUser("SCOTT");
    ods.setPassword("TIGER");

    Properties prop = new Properties();
    prop.setProperty("MaxStatementsLimit", "200");

    ods.setConnectionCachingEnabled(true);
    ods.setConnectionCacheProperties(prop);

    Connection con = ods.getConnection();

    con.setAutoCommit(false);

    return con;
}
The JDBC driver will automatically store prepared and callable statements in the cache each time the statement is closed. If the same SQL statement is executed again, it will be retrieved from the cache and the statement can be executed without any parsing at all.
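As a rough analogy (not the Oracle JDBC driver's actual implementation), a statement cache behaves like a map keyed on SQL text: preparing the same text a second time hands back the already-parsed statement object.

```java
import java.util.HashMap;
import java.util.Map;

public class StatementCacheSketch {
    // A cached "statement": in the real driver this would be a
    // PreparedStatement that has already been parsed by the database.
    static class CachedStatement {
        final String sql;
        CachedStatement(String sql) { this.sql = sql; }
    }

    private final Map<String, CachedStatement> cache = new HashMap<>();

    // Preparing the same text a second time returns the cached object,
    // so no further parsing is needed.
    CachedStatement prepare(String sql) {
        return cache.computeIfAbsent(sql, CachedStatement::new);
    }

    public static void main(String[] args) {
        StatementCacheSketch conn = new StatementCacheSketch();
        CachedStatement first  = conn.prepare("SELECT acc_name FROM acc WHERE acc_id = ?");
        CachedStatement second = conn.prepare("SELECT acc_name FROM acc WHERE acc_id = ?");
        System.out.println(first == second);  // true: the statement was parsed once
    }
}
```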

A Word About PL/SQL

PL/SQL makes use of bind variables and statement caching automatically.

Let's rewrite the getAccountName method we saw earlier as a PL/SQL function.
CREATE OR REPLACE FUNCTION get_account_name(p_acc_id acc.acc_id%TYPE)
RETURN VARCHAR2 AS
    l_acc_name acc.acc_name%TYPE;
BEGIN
    SELECT acc_name
    INTO   l_acc_name
    FROM   acc
    WHERE  acc_id = p_acc_id;

    RETURN l_acc_name;

EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RETURN null;
END;
/
In this example, p_acc_id is substituted with a bind variable and the select statement is transparently cached by the PL/SQL engine. We can see this by running the function one thousand times then looking at the statement in the V$SQL view.
DECLARE
    l_name acc.acc_name%TYPE;
BEGIN
    FOR i IN 1 .. 1000
    LOOP
        l_name := get_account_name(i);
    END LOOP;
END;
/

anonymous block completed


SELECT sql_text, parse_calls, executions
FROM v$sql
WHERE lower(sql_text) LIKE 'select acc_name from acc%';

SQL_TEXT                                     PARSE_CALLS  EXECUTIONS
-------------------------------------------  -----------  ----------
SELECT ACC_NAME FROM ACC WHERE ACC_ID = :B1            1        1000

1 rows selected
Here we see that bind variable B1 has been substituted into the query which means that, even though the statement was executed one thousand times, it was parsed only once.

In fact it is more difficult to make PL/SQL not use bind variables. To do so we would need to re-code the function to use the EXECUTE IMMEDIATE statement as follows.
CREATE OR REPLACE FUNCTION get_account_name(p_acc_id acc.acc_id%TYPE)
RETURN VARCHAR2 AS
    l_acc_name acc.acc_name%TYPE;
BEGIN
    EXECUTE IMMEDIATE
        'SELECT acc_name FROM acc WHERE acc_id = ' || p_acc_id
        INTO l_acc_name;

    RETURN l_acc_name;

EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RETURN null;
END;
/


DECLARE
    l_name acc.acc_name%TYPE;
BEGIN
    FOR i IN 1 .. 1000
    LOOP
        l_name := get_account_name(i);
    END LOOP;
END;
/

anonymous block completed
We can see the effect of this change in the V$SQL view.
SELECT sql_text, parse_calls, executions
FROM v$sql
WHERE lower(sql_text) LIKE 'select acc_name from acc%';

SQL_TEXT                                     PARSE_CALLS  EXECUTIONS
-------------------------------------------  -----------  ----------
SELECT acc_name FROM acc WHERE acc_id = 241            1           1
SELECT acc_name FROM acc WHERE acc_id = 8              1           1
SELECT acc_name FROM acc WHERE acc_id = 491            1           1
SELECT acc_name FROM acc WHERE acc_id = 994            1           1
SELECT acc_name FROM acc WHERE acc_id = 55             1           1
SELECT acc_name FROM acc WHERE acc_id = 224            1           1
SELECT acc_name FROM acc WHERE acc_id = 94             1           1
SELECT acc_name FROM acc WHERE acc_id = 376            1           1
SELECT acc_name FROM acc WHERE acc_id = 977            1           1
SELECT acc_name FROM acc WHERE acc_id = 38             1           1
SELECT acc_name FROM acc WHERE acc_id = 814            1           1
SELECT acc_name FROM acc WHERE acc_id = 17             1           1
...

1000 rows selected.

Example

To illustrate the performance implications of SQL parsing, we will execute a simple query many times from Java, using three different techniques. The slight differences in the Java code will demonstrate the effects of hard parsing, soft parsing and no parsing in the database. We will monitor the results using Oracle Enterprise Manager.

Hard Parse

First, we execute the method we saw earlier, which uses a JDBC Statement rather than a PreparedStatement. This will trigger a hard parse each time the SQL is submitted.
public String getAccountName(Connection con,
                             int accId) throws SQLException {
    String accName = null;

    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery(
        "SELECT acc_name FROM acc WHERE acc_id = " + accId);

    if (rs.next()) {
        accName = rs.getString("acc_name");
    }
    stmt.close();

    return accName;
}
The method is executed a total of one million times in ten concurrent threads and takes about twelve minutes to complete.

ADDM Performance Analysis

The first indication that all is not well can be seen on the Database Instance home page under the ADDM Performance Analysis region.


Clicking on 'Hard Parse Due to Literal Usage' takes us to the following page.


Oracle has recognized that we have many SQL statements which vary only by the literal value we have specified in our WHERE clause. Oracle recommends that we set the CURSOR_SHARING parameter to FORCE or that we change our application logic to use bind variables instead of literals. Later, we will follow the second recommendation and change our application logic to use a PreparedStatement rather than a Statement.

Also, if we navigate to the Duplicate SQL page, a feature designed to help us recognise hard parse issues such as this, we see the following.


Further evidence that we have a significant performance problem can be seen on the graphs in the Performance section of Enterprise Manager.


Clearly, we are having some concurrency issues. If we click on the Concurrency link we see that library latching is to blame. This is a classic symptom of excessive hard parsing.



Soft Parse

So let's rewrite the method to use bind variables and a PreparedStatement to run the same query.
public String getAccountName(Connection con,
                             int accId) throws SQLException {
    String accName = null;

    PreparedStatement pstmt = con.prepareStatement(
        "SELECT acc_name FROM acc WHERE acc_id = ?");
    pstmt.setInt(1, accId);
    ResultSet rs = pstmt.executeQuery();

    if (rs.next()) {
        accName = rs.getString("acc_name");
    }
    pstmt.close();

    return accName;
}
Once again, the method is executed one million times in ten concurrent threads but this time it takes only four minutes.

And how did the system behave during the query? Let's take a look at the Average Active Sessions on the Performance page.


Note the difference in scale of the Active Sessions on the left compared to the hard parse version. We are using less CPU, waiting less time for resources and we have reduced the library latch contention considerably. The benefits of soft parsing are clear.

However, if we take a look at the SQL Details page for our query we see evidence of another potential performance bottleneck.


The value that catches the eye is Total Parses in the Shared Cursor Statistics region. It is about the same as the number of executions. The fact that we did only two hard parses is good but the fact we did a million soft parses suggests we could do better.

No Parse

Let's enable implicit statement caching when we retrieve our database connection and re-run the previous test.
public Connection getConnection() throws SQLException {

    OracleDataSource ods = new OracleDataSource();
    ods.setURL("jdbc:oracle:thin:@localhost:1521:test01");

    ods.setUser("SCOTT");
    ods.setPassword("TIGER");

    Properties prop = new Properties();
    prop.setProperty("MaxStatementsLimit", "200");

    ods.setConnectionCachingEnabled(true);
    ods.setConnectionCacheProperties(prop);

    Connection con = ods.getConnection();

    con.setAutoCommit(false);

    return con;
}
This time it took just two minutes to run the test and the overhead on the database server was reduced considerably.


If we take a look at the SQL Details page we see that, although our query was executed over a million times, it did just one hard parse and ten soft parses (one for each thread in our test).


If we compare the activity statistics from the previous two tests, the benefits of reduced parsing become apparent.

Soft parse

No Parse


The execution time is halved and the wait time reduced considerably when we keep the number of soft parses to a minimum.

Summary

Parse once, execute many times.

This should be the goal of any developer who intends to build high performance, scalable applications using the Oracle database. We must endeavour to keep not only hard parsing but also soft parsing to a minimum.

In Java, the use of PreparedStatements and bind variables is essential to avoid unnecessary hard parses. This is one of the most valuable lessons a Java developer can learn about Oracle database performance.

However, it is also important to avoid unnecessary soft parses. Effective re-use of a PreparedStatement and implicit statement caching are both useful techniques to avoid excessive soft parses.

Fortunately, PL/SQL avoids these problems by using bind variables and statement caching automatically.

Saturday, 14 February 2009

Time Zones in Oracle

The TIMESTAMP datatype was introduced in Oracle 9i. TIMESTAMPs differ from DATEs in that they allow fractions of a second to 9 decimal places and can be time zone aware.

Let's see how TIMESTAMPs and DATEs behave when they are simultaneously accessed by a user in the United States and a user in the United Kingdom. We can simulate this situation in a couple of sessions by setting the time zone of one session to US/Pacific and the other to GB. It is 8am in the US and 4pm in the UK.

Set Up

First we create this table in a database which is located in the UK.
CREATE TABLE t1 (
d DATE,
t TIMESTAMP,
tz TIMESTAMP WITH TIME ZONE,
ltz TIMESTAMP WITH LOCAL TIME ZONE
);
As you can see, there are four datetime datatypes: DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE.

Next we set up a couple of sessions, one with a US/Pacific time zone and the other with a GB time zone. We also set the default date and time formats for the session.

US Session
ALTER SESSION SET time_zone = 'US/Pacific';

ALTER SESSION SET nls_date_format='DD-Mon-RR hh24:MI';
ALTER SESSION SET nls_timestamp_format='DD-Mon-RR HH24:MI';
ALTER SESSION SET nls_timestamp_tz_format='DD-Mon-RR HH24:MI TZR';
UK Session
ALTER SESSION SET time_zone = 'GB';

ALTER SESSION SET nls_date_format='DD-Mon-RR hh24:MI';
ALTER SESSION SET nls_timestamp_format='DD-Mon-RR HH24:MI';
ALTER SESSION SET nls_timestamp_tz_format='DD-Mon-RR HH24:MI TZR';

Database and Session Time Zones

There is a time zone associated with the database and a time zone associated with a session. We can display these values using the DBTIMEZONE and SESSIONTIMEZONE functions.

US session
SELECT dbtimezone, sessiontimezone FROM dual;

DBTIMEZONE SESSIONTIMEZONE
---------- -----------------
+00:00 US/Pacific

1 rows selected
UK session
SELECT dbtimezone, sessiontimezone FROM dual;

DBTIMEZONE SESSIONTIMEZONE
---------- -----------------
+00:00 GB

1 rows selected

Current Date and Time

We can also retrieve the current date or timestamp for the database and the session. SYSDATE and SYSTIMESTAMP return the database date and time and CURRENT_DATE, CURRENT_TIMESTAMP and LOCALTIMESTAMP return the session time. The difference between LOCALTIMESTAMP and CURRENT_TIMESTAMP is that LOCALTIMESTAMP returns a TIMESTAMP value while CURRENT_TIMESTAMP returns a TIMESTAMP WITH TIME ZONE value.

US session
SELECT sysdate, systimestamp FROM dual;

SYSDATE SYSTIMESTAMP
---------------------- --------------------------
05-Nov-2008 16:02 05-Nov-2008 16:02 +00:00

1 rows selected


SELECT current_date, current_timestamp, localtimestamp FROM dual;

CURRENT_DATE CURRENT_TIMESTAMP LOCALTIMESTAMP
--------------------- ------------------------------- ------------------
05-Nov-2008 08:02 05-Nov-2008 08:02 US/PACIFIC 05-Nov-2008 08:02

1 rows selected
UK session
SELECT sysdate, systimestamp FROM dual;

SYSDATE SYSTIMESTAMP
---------------------- --------------------------
05-Nov-2008 16:02 05-Nov-2008 16:02 +00:00

1 rows selected


SELECT current_date, current_timestamp, localtimestamp FROM dual;

CURRENT_DATE CURRENT_TIMESTAMP LOCALTIMESTAMP
--------------------- ----------------------- -------------------
05-Nov-2008 16:02 05-Nov-2008 16:02 GB 05-Nov-2008 16:02

1 rows selected
The SYSDATE and SYSTIMESTAMP values are the same for both sessions as you would expect but the CURRENT_DATE, CURRENT_TIMESTAMP and LOCALTIMESTAMP are different in the US and the UK because they reflect the time zone of the session.

Date and Timestamp

Now, let's insert some data into our table from the US session.

US session
INSERT INTO t1 (d, t, tz, ltz) VALUES (
TIMESTAMP '2008-11-05 11:22:00',
TIMESTAMP '2008-11-05 11:22:00',
TIMESTAMP '2008-11-05 11:22:00',
TIMESTAMP '2008-11-05 11:22:00'
);

COMMIT;


SELECT d, t FROM t1;

D T
----------------- -----------------
05-Nov-08 11:22 05-Nov-08 11:22

1 rows selected
The DATE and TIMESTAMP datatypes have no concept of time zones. They return the exact time that was entered, regardless of the time zone of the client. Therefore, the values will be the same for the US session and the UK session.

But what about timestamps with time zones?

US session
SELECT tz, ltz FROM t1;

TZ LTZ
---------------------------- -----------------
05-Nov-08 11:22 US/PACIFIC 05-Nov-08 11:22

1 rows selected
UK session
SELECT tz, ltz FROM t1;

TZ LTZ
---------------------------- -----------------
05-Nov-08 11:22 US/PACIFIC 05-Nov-08 19:22

1 rows selected

The TIMESTAMP WITH TIME ZONE datatype stores the time zone offset in its value, so the output of TZ is the same in both sessions. The time zone offset is the difference (in hours and minutes) between local time and UTC. Whenever the timestamp value is retrieved, the time zone is retrieved with it.

The LTZ column, on the other hand, displays a different value in each session. When a timestamp is stored in a TIMESTAMP WITH LOCAL TIME ZONE datatype, it is normalized to the database time zone. The time zone offset is not stored as part of the column data. When the user retrieves the data, Oracle automatically converts the value to the local session time zone.

A TIMESTAMP WITH LOCAL TIME ZONE value is the only timestamp datatype that will adjust the display value for a session time zone.
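The same normalize-then-display behaviour can be sketched with java.time, where an Instant plays the role of the normalized value stored in a TIMESTAMP WITH LOCAL TIME ZONE column. This is an analogy in client-side Java, not what the database does internally.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class LocalTimeZoneSketch {
    public static void main(String[] args) {
        // The US session enters 05-Nov-08 11:22 in its own time zone.
        ZonedDateTime entered =
            ZonedDateTime.of(2008, 11, 5, 11, 22, 0, 0, ZoneId.of("US/Pacific"));

        // TIMESTAMP WITH LOCAL TIME ZONE keeps only the normalized instant;
        // java.time models that as an Instant (the zone is discarded).
        Instant stored = entered.toInstant();

        // On retrieval, each session sees the instant in its own time zone.
        System.out.println(stored.atZone(ZoneId.of("US/Pacific")).toLocalTime()); // 11:22
        System.out.println(stored.atZone(ZoneId.of("GB")).toLocalTime());         // 19:22
    }
}
```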

Explicit Conversion

So how do we convert a timestamp to a specific time zone in SQL? The answer is the AT TIME ZONE clause.

Suppose we wish to find out the current time in the US and the current time in Japan.

US session
SELECT current_timestamp us, 
current_timestamp AT TIME ZONE 'Japan' japan
FROM dual;

US JAPAN
---------------------------- ------------------------
05-Nov-08 03:15 US/PACIFIC 05-Nov-08 19:15 JAPAN

1 rows selected

Timestamp Comparisons

Two TIMESTAMP WITH TIME ZONE values are considered identical if they represent the same instant in UTC, regardless of the time zone offset stored in the data.

For example, using the data we inserted earlier.

US session
SELECT tz FROM t1 WHERE tz = TIMESTAMP '2008-11-05 11:22:00';

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected
The date in the TZ column and the timestamp literal both correspond to a UTC time of 05-Nov-08 19:22.

UK session
SELECT tz FROM t1 WHERE tz = TIMESTAMP '2008-11-05 11:22:00';

TZ
-------------

0 rows selected


SELECT tz FROM t1 WHERE tz = TIMESTAMP '2008-11-05 19:22:00';

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected


SELECT tz FROM t1 WHERE tz = TIMESTAMP '2008-11-05 11:22:00 US/Pacific';

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected


SELECT tz FROM t1
WHERE tz = TIMESTAMP '2008-11-05 11:22:00' + INTERVAL '8' HOUR;

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected
In the first statement, the TZ column has a UTC time of 05-Nov-08 19:22 but the timestamp literal has a UTC time of 05-Nov-08 11:22. The subsequent SQL statements adjust the timestamp literal to match the value in the table.
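The same instant-based comparison can be demonstrated with java.time, as a rough analogy for how Oracle compares TIMESTAMP WITH TIME ZONE values.

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimestampComparison {
    public static void main(String[] args) {
        // 11:22 US/Pacific and 19:22 GB on 5 Nov 2008 are the same UTC instant.
        ZonedDateTime us =
            ZonedDateTime.of(2008, 11, 5, 11, 22, 0, 0, ZoneId.of("US/Pacific"));
        ZonedDateTime uk =
            ZonedDateTime.of(2008, 11, 5, 19, 22, 0, 0, ZoneId.of("GB"));

        System.out.println(us.isEqual(uk)); // true: same instant in UTC
        System.out.println(us.equals(uk));  // false: local times and zones differ
    }
}
```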

A Word About Timestamp Literals

A timestamp literal takes the following format.
TIMESTAMP '2008-11-05 11:22:00'
or
TIMESTAMP '2008-11-05 11:22:00 GB'
or
TIMESTAMP '2008-11-05 11:22:00 -08:00'

We could just use a character string.
SELECT tz FROM t1 WHERE tz = '5-Nov-08 11:22 US/Pacific'; 

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected
However, I would recommend against this approach as we are no longer using a timestamp literal. Instead, we are using a character literal which is implicitly converted to a timestamp when it is compared with the TIMESTAMP WITH TIME ZONE column.

The problem with this technique is that the query can return different results depending on the value of the default timestamp format model for the session.
ALTER SESSION SET nls_timestamp_tz_format='DD-Mon-RR HH24:MI TZR';

SELECT tz FROM t1 WHERE tz = '5-Nov-08 11:22 US/Pacific';

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected
Now we run the same SQL but with a different timestamp format.
ALTER SESSION SET nls_timestamp_tz_format='DD-Mon-YYYY HH24:MI TZR';

SELECT tz FROM t1 WHERE tz = '5-Nov-08 11:22 US/Pacific';

TZ
-------------

0 rows selected
For this reason, be sure to use a date or timestamp literal or an explicit conversion using the TO_DATE, TO_TIMESTAMP or TO_TIMESTAMP_TZ functions.
SELECT tz FROM t1 WHERE tz = 
TO_TIMESTAMP_TZ('5-Nov-08 11:22 US/Pacific', 'DD-Mon-YY HH24:MI TZR');

TZ
----------------------------
05-Nov-08 11:22 US/PACIFIC

1 rows selected

How To Generate Random Values in Oracle

Oracle provides the DBMS_RANDOM package for generating random data. The VALUE function returns a random number and the STRING function returns a random string.

VALUE

The VALUE function returns a random number in the specified range. If no range is provided, a value greater than or equal to zero and less than one is returned.

For example, we can get a random value between zero and one.
SELECT dbms_random.value rnd FROM dual;

RND
-----------------------------------------
0.6573975346756855362691307891314972871

1 rows selected

Or, we could specify a range to return a value between, say, ten and twenty.
SELECT dbms_random.value(10, 20) rnd FROM dual;

RND
------------------------------------------
17.0948463090004580455301715474853817198

1 rows selected
Add a FLOOR, CEIL or ROUND function to return an integer.
SELECT floor(dbms_random.value(10, 20)) rnd FROM dual;

RND
-------
14

1 rows selected
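The mapping DBMS_RANDOM.VALUE(low, high) performs is essentially low + r * (high - low), where r is uniform in [0, 1). Here is a quick Java sketch of the same idea, including the FLOOR step; it is an illustration of the mapping, not a reimplementation of Oracle's generator.

```java
import java.util.Random;

public class RandomRangeSketch {
    public static void main(String[] args) {
        Random rnd = new Random();

        // A uniform value r in [0, 1) is stretched into the range [low, high),
        // the same mapping that DBMS_RANDOM.VALUE(low, high) performs.
        double low = 10, high = 20;
        double value = low + rnd.nextDouble() * (high - low);

        // FLOOR then yields an integer between 10 and 19 inclusive.
        int whole = (int) Math.floor(value);

        System.out.println(value + " -> " + whole);
    }
}
```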

STRING

The STRING function is used to get a random string.

The following options are used to specify the type of string to return.
  • 'u' or 'U' returns uppercase alpha characters
  • 'l' or 'L' returns lowercase alpha characters
  • 'a' or 'A' returns mixed case alpha characters
  • 'x' or 'X' returns uppercase alpha-numeric characters
  • 'p' or 'P' returns any printable characters
For example, to get a random string of 20 uppercase alphanumeric characters, we use the following SQL.
SELECT dbms_random.string('X', 20) rnd FROM dual;

RND
----------------------
75JM2IQUK4NL91NR4GIR

1 rows selected

To return a 30 character, mixed case string, we use this command.
SELECT dbms_random.string('A', 30) rnd FROM dual;

RND
--------------------------------
auzZoOfpahvMzaDXqcpMSuxOGyraPb

1 rows selected

Notes on DBMS_RANDOM

  • The SEED procedure can be used to reset the seed for the package. The same set of random values is returned if an identical value is used for the seed. This can be useful during testing.
  • If no seed is specified then DBMS_RANDOM is automatically initialized with the date, userid, and process id.
  • The INITIALIZE, RANDOM and TERMINATE functions are obsolete.

Oracle Quick Tip: Displaying a Text Graph

The RPAD function can be used to display a text graph in Oracle.

Suppose we wish to display the number of sessions that have been opened by each client machine connected to the database. We can run this simple SQL.
SELECT machine,
       count(*) sessions
FROM   v$session
GROUP  BY machine
ORDER  BY machine;

MACHINE                       SESSIONS
--------------------------  ----------
GHYT887                             20
MAL001024                           18
SCJ0012                             15
dev.lishman.com                      2
jupiter.lishman.com                  1
mars.lishman.com                    31
saturn.lishman.com                  15
venus.lishman.com                   23

8 rows selected
However, we can easily include a text graph to provide a visual representation of the session distribution.
SELECT machine,
       count(*) sessions,
       rpad('*', count(*), '*') session_graph
FROM   v$session
GROUP  BY machine
ORDER  BY machine;

MACHINE                       SESSIONS  SESSION_GRAPH
--------------------------  ----------  -------------------------------
GHYT887                             20  ********************
MAL001024                           18  ******************
SCJ0012                             15  ***************
dev.lishman.com                      2  **
jupiter.lishman.com                  1  *
mars.lishman.com                    31  *******************************
saturn.lishman.com                  15  ***************
venus.lishman.com                   23  ***********************

8 rows selected
RPAD right pads the first expression ('*') to the length specified by the second expression (count(*)) with the character provided by the third expression ('*').
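The same graph can be produced outside the database; here is a minimal Java sketch of the rpad('*', n, '*') idea, using hypothetical machine names and counts in place of the v$session query.

```java
public class TextGraph {
    // Minimal equivalent of rpad('*', n, '*'): a bar of n asterisks.
    static String bar(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append('*');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical machines and session counts standing in for v$session.
        String[] machines = {"GHYT887", "dev.lishman.com", "mars.lishman.com"};
        int[] sessions = {20, 2, 31};

        for (int i = 0; i < machines.length; i++) {
            System.out.printf("%-20s %4d %s%n",
                              machines[i], sessions[i], bar(sessions[i]));
        }
    }
}
```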

Auto Generated Primary Keys in Oracle

Unlike most other databases, Oracle does not provide an auto-incrementing datatype that can be used to generate a sequential primary key value. However, the same effect can be achieved with a sequence and a trigger.

This article demonstrates how to generate a sequential primary key and how to access this value using PL/SQL, JDBC, JPA and Hibernate.

The Table

We will use this simple account table for the demonstration.
CREATE TABLE acc (
  acc_id      NUMBER(16),
  acc_name    VARCHAR2(30) NOT NULL,
  acc_balance NUMBER DEFAULT 0 NOT NULL,
  CONSTRAINT acc_pk PRIMARY KEY (acc_id));
Table created.
When an account is added, the database must generate the account identifier and return it to the application.

The Sequence and the Trigger

First, we create the sequence and the trigger which will populate the primary key column. The account identifiers will start from 2000.
CREATE SEQUENCE acc_id_seq START WITH 2000;
Sequence created.

CREATE OR REPLACE TRIGGER acc_pk_trg
BEFORE INSERT ON acc FOR EACH ROW
BEGIN
  IF :NEW.acc_id IS NULL THEN
    SELECT acc_id_seq.NEXTVAL INTO :NEW.acc_id FROM DUAL;
  END IF;
END;
/
Note that the sequence is only used if no value is provided for acc_id. This gives us some flexibility when inserting rows into the table as we will see later.

Incidentally, SQL Developer includes an option to create this type of trigger for us. Right click on a table in the Connections navigator to find it.

Inserting Rows

When inserting rows into the table, we can specify the acc_id ourselves or have the value generated for us.
INSERT INTO acc (acc_id, acc_name)
VALUES (1000, 'Green Square');
1 rows inserted

INSERT INTO acc (acc_name)
VALUES ('Red Triangle');
1 rows inserted

SELECT * 
FROM acc;
ACC_ID ACC_NAME     ACC_BALANCE
------ ------------ -----------
  1000 Green Square           0
  2000 Red Triangle           0
2 rows selected
Inserting our own primary key values becomes a problem if we use a number which the sequence later generates: the duplicate insert will fail with a unique constraint violation (ORA-00001). In this case, we may need to move the sequence past the maximum primary key value.

Although there is an ALTER SEQUENCE statement which allows us to change pretty much every aspect of the sequence, it does not allow us to change the actual sequence number itself. To change this value you must drop and recreate the sequence using the START WITH clause.

PL/SQL

The following PL/SQL function adds a row to the table and returns the identifier of the newly created account to the calling application.
CREATE OR REPLACE
FUNCTION add_acc (p_acc_name IN acc.acc_name%TYPE,
                  p_balance  IN acc.acc_balance%TYPE DEFAULT 0)
RETURN NUMBER AS
  l_acc_id acc.acc_id%TYPE;
BEGIN
  INSERT INTO acc (acc_name, acc_balance)
  VALUES (p_acc_name, p_balance)
  RETURNING acc_id INTO l_acc_id;

  RETURN l_acc_id;
END;
/

EXEC DBMS_OUTPUT.PUT_LINE('Account id: ' || add_acc('Red Triangle', 10000));
COMMIT;

------------------------------------------------------------------------

Account id: 2000
The RETURNING clause on the INSERT statement is used to populate a local variable with the generated acc_id. This is then used as the return value from the function.

We can also call this function from Java.
CallableStatement cstmt = 
con.prepareCall("BEGIN ? := add_acc(?,?); END;");
cstmt.registerOutParameter(1, Types.INTEGER);
cstmt.setString(2, "Green Square");
cstmt.setInt(3, 2000);
cstmt.execute();
System.out.println("Account Id: " + cstmt.getInt(1));

------------------------------------------------------------------------

Account Id: 2001

JDBC

JDBC Auto-Generated Keys

JDBC 3.0 introduced the retrieval of auto-generated keys. When calling execute or executeUpdate on a Statement object or prepareStatement on a Connection, we can specify which values we wish to return from our INSERT statement.

In this code snippet we request both the acc_id and the acc_balance to show that we can return multiple values and not just the generated identifier. We also use named bind variables rather than question marks, but either style is acceptable.
PreparedStatement pstmt =
    con.prepareStatement("INSERT INTO acc (acc_name) " +
                         "VALUES (:acc_name)",
                         new String[] { "acc_id", "acc_balance" });

pstmt.setString(1, "Red Triangle");
pstmt.executeUpdate();

ResultSet rs = pstmt.getGeneratedKeys();
if (rs.next()) {
    System.out.println("Account Id: " + rs.getInt(1));
    System.out.println("Balance: " + rs.getInt(2));
}

------------------------------------------------------------------------

Account Id: 2002
Balance: 0
The call to getGeneratedKeys returns a ResultSet containing a single row with a column for each requested value.

Some databases allow you to specify the RETURN_GENERATED_KEYS constant rather than naming all the generated keys explicitly. However, this feature does not work with Oracle as the JDBC driver cannot identify which columns need to be retrieved. Instead, Oracle will return the ROWID.
PreparedStatement pstmt =
    con.prepareStatement("INSERT INTO acc (acc_name) " +
                         "VALUES (:acc_name)",
                         Statement.RETURN_GENERATED_KEYS);

pstmt.setString(1, "Red Triangle");
pstmt.executeUpdate();

ResultSet rs = pstmt.getGeneratedKeys();
if (rs.next()) {
    System.out.println("Generated key: " + rs.getString(1));
}

------------------------------------------------------------------------

Generated key: AAA9TIAAGAAJqNfAAV

DML Returning

We can also return data from an INSERT statement using the RETURNING clause, as we saw in the PL/SQL example earlier. Standard JDBC does not support DML returning directly with PreparedStatement, so we need to use OraclePreparedStatement instead.
OraclePreparedStatement pstmt = (OraclePreparedStatement)
    con.prepareStatement("INSERT INTO acc (acc_name, acc_balance) " +
                         "VALUES (:acc_name, :acc_balance) " +
                         "RETURNING acc_id INTO :acc_id");

pstmt.setString(1, "Green Square");
pstmt.setInt(2, 1500);
pstmt.registerReturnParameter(3, OracleTypes.NUMBER);
pstmt.executeUpdate();

ResultSet rs = pstmt.getReturnResultSet();
if (rs.next()) {
    System.out.println("Account Id: " + rs.getInt(1));
}

------------------------------------------------------------------------

Account Id: 2003
One benefit of using this technique is that it can be used with UPDATE and DELETE statements too, whereas the JDBC auto-generated key retrieval feature can only be used with an INSERT.
OraclePreparedStatement pstmt = (OraclePreparedStatement)
    con.prepareStatement("UPDATE acc " +
                         "SET acc_balance = acc_balance + :bal_adjustment " +
                         "WHERE acc_id = :acc_id " +
                         "RETURNING acc_balance INTO :balance");

pstmt.setInt(1, 1000);
pstmt.setInt(2, 2003);
pstmt.registerReturnParameter(3, OracleTypes.NUMBER);
pstmt.executeUpdate();

ResultSet rs = pstmt.getReturnResultSet();
if (rs.next()) {
    System.out.println("New Balance: " + rs.getInt(1));
}

------------------------------------------------------------------------

New Balance: 2500

Object Relational Mapping

Next we take a regular Java class and persist it to the database, first using the Java Persistence API (JPA) and then using the Hibernate proprietary API.

This simple class represents an account.
public class Account {

    private int id;
    private String name;
    private int balance;

    ...

}

Java Persistence API

To generate a sequential primary key for our JPA entity we add the following annotations to the Account class.
@Entity(name = "acc")
public class Account {

    @Id
    @Column(name = "ACC_ID", nullable = false)
    @GeneratedValue(generator = "AccountSeq",
                    strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "AccountSeq",
                       sequenceName = "ACC_ID_SEQ",
                       allocationSize = 1)
    private int id;

    @Column(name = "ACC_NAME")
    private String name;

    @Column(name = "ACC_BALANCE")
    private int balance;

    ...

}
The @GeneratedValue annotation tells the JPA vendor that we are using a sequence to generate the primary key. The @SequenceGenerator annotation specifies that we are using the ACC_ID_SEQ sequence.

Saving our object to the database with JPA is simple.
EntityManagerFactory emf = 
Persistence.createEntityManagerFactory("accounts");
EntityManager em = emf.createEntityManager();

EntityTransaction txn = em.getTransaction();
txn.begin();

Account acc = new Account();
acc.setName("Red Triangle");
em.persist(acc);
txn.commit();

System.out.println("Account Id: " + acc.getId());

------------------------------------------------------------------------

Account Id: 2004
When we persist the object to the database, the account identifier will be populated automatically in both the Java object and the database table.

Hibernate

With Hibernate, we can use annotations as we did in the previous example, or we can use a mapping file.
<hibernate-mapping>
  <class name="blog.Account" table="ACC">
    <id name="id" column="ACC_ID">
      <generator class="sequence">
        <param name="sequence">ACC_ID_SEQ</param>
      </generator>
    </id>
    <property name="name" column="ACC_NAME" />
    <property name="balance" column="ACC_BALANCE" />
  </class>
</hibernate-mapping>
The id element specifies the primary key column and the nested generator element specifies the name of the sequence we wish to use.

We then use the Hibernate API to save the Java object.
Session session =
new AnnotationConfiguration().
configure().
buildSessionFactory().
getCurrentSession();

session.beginTransaction();

Account acc = new Account();
acc.setName("Red Triangle");
acc.setBalance(1500);
int accId = (Integer)session.save(acc);

session.getTransaction().commit();

System.out.println("Account id: " + accId);

------------------------------------------------------------------------

Account id: 2005

Allocation Size

The allocationSize in the @SequenceGenerator annotation should be the same as the increment size of the sequence. For example, if the sequence is defined with the following SQL then the allocationSize should also be set to 20.
CREATE SEQUENCE acc_id_seq START WITH 2020 INCREMENT BY 20;
Each time the JPA provider retrieves a value from the sequence, it can allocate 20 sequential identifiers to new objects before it needs to go back to the database for the next sequence value.

For example, if the provider receives the value 100 from our sequence and the allocationSize is 20 it will allocate the values 81 to 100 to the next 20 Account objects that are persisted.

This feature reduces the number of round trips to the database, which improves application performance.
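The way identifiers are handed out between round trips can be modelled in plain Java. The sketch below is a simplified, hypothetical model of a pooled-style optimizer (class and method names are invented for illustration; real providers differ in detail): each sequence fetch yields the high end of a block, and the provider assigns the allocationSize values below it locally.

```java
// Hypothetical sketch of a pooled-style id allocator: each sequence fetch
// (one database round trip) yields the HIGH end of a block of ids, and the
// provider hands out allocationSize ids locally before fetching again.
public class PooledIdAllocator {

    private final int allocationSize;
    private long seqValue;      // stands in for the database sequence
    private long next = 1;
    private long hi = 0;        // current block is exhausted when next > hi

    public PooledIdAllocator(int allocationSize, long firstSeqValue) {
        this.allocationSize = allocationSize;
        this.seqValue = firstSeqValue - allocationSize;
    }

    // Simulates SELECT acc_id_seq.NEXTVAL FROM DUAL for a sequence
    // defined with INCREMENT BY allocationSize.
    private long nextSequenceValue() {
        seqValue += allocationSize;
        return seqValue;
    }

    public long nextId() {
        if (next > hi) {                      // block exhausted: round trip
            hi = nextSequenceValue();
            next = hi - allocationSize + 1;   // e.g. 100 -> ids 81..100
        }
        return next++;
    }

    public static void main(String[] args) {
        // First fetch returns 100, so ids 81..100 are assigned locally.
        PooledIdAllocator alloc = new PooledIdAllocator(20, 100);
        System.out.println(alloc.nextId());   // 81
        System.out.println(alloc.nextId());   // 82
    }
}
```

The 21st call exhausts the block, triggers a second sequence fetch (returning 120) and starts handing out 101 onwards.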

Optional Primary Key Generation

When persisting an object to the database, Hibernate or the JPA provider (which can also be Hibernate) will obtain the next number from the sequence and use this value in the INSERT statement.

We can see this if we enable SQL tracing with either Hibernate or Toplink.

Hibernate
Hibernate: select ACC_ID_SEQ.nextval from dual
Hibernate: insert into ACC (ACC_NAME, ACC_BALANCE, ACC_ID) values (?, ?, ?)
Toplink
(Thread[main,5,main])--SELECT acc_id_seq.NEXTVAL FROM DUAL
(Thread[main,5,main])--INSERT INTO ACC (acc_id, acc_name, acc_balance) VALUES (?, ?, ?)
bind => [2001, Red Triangle, 1500]
At this point, the retrieved sequence value is also used to populate the identifier in the Java object.

We can now see the importance of making the primary key generation conditional in the trigger.
IF :NEW.acc_id IS NULL THEN
  SELECT acc_id_seq.NEXTVAL INTO :NEW.acc_id FROM DUAL;
END IF;
If we exclude the IS NULL condition, the trigger will effectively overwrite the value supplied for acc_id in the INSERT statement with the next value from the sequence. This would result in a primary key value and an object identifier which are out of sync.