DB2 DBA: Coding Database Programs for DBAs


But we NEED to BE developers!

DBAs frequently (constantly?) get called on to develop “quick” database scripts and programs, never mind that we don’t have the time, the tools, or the training. From a quick little runstats script in ksh to a full-blown application to load data or “fix a few rows,” it seems that if it is in the database, it’s my job to fix it!

This presentation will show examples of several programming languages commonly used with DB2 LUW (SQL PL, ksh, perl/DBI, java/JDBC/JPA, UDF, XML) and discuss the pluses and minuses of each. Then I will share some tools, utilities, functions, and tips for coding database programs in each one.

Heavy emphasis will be placed on error handling, as this is one of the areas that is most often overlooked or underdeveloped.

Next, I will share some of my favorite utilities, tools, tips, and tricks to help the DBA get things done.

Finally, we will discuss a few of the different IDEs that are available to assist the DBA (developer wannabe). (I had to cut this topic for time – sorry!)

1

Show a similar, common DBA task in three different languages, and give you some “tips and tricks” for each.

Woodshop analogy:
• If I want to help you get into woodworking, I would:
• Show you some cool tools:
  • Miter saw
  • Band saw
  • Planer
  • Jointer
• All “cut” wood. All very cool. All useful. Good in different ways and in different cases.
• Show you a cool “chest of drawers” that you can build with them.
• That is what I am going to do with software tools.

2

As DBAs, we are constantly tasked with fixing data, loading tables, and running utilities. It seems like, “If it lives in a database table, it’s our responsibility to load, correct, and maintain it.” We need to have, and use, the proper tools to do this.

Many developers, with LOTS of tools. One DBA, with... ksh and the CLP.

3

This is the proper attitude for us to have, as long as we have a great set of tools to work with! However, UNLIKE Jeff Spicoli, we need to actually HAVE good tools and know how to use them.

4

Show a similar, common DBA task in three different languages, and give you some “tips and tricks” for each.

Woodshop analogy:
• If I want to help you get into woodworking, I would:
• Show you some cool tools:
  • Miter saw
  • Band saw
  • Planer
  • Jointer
• All “cut” wood. All very cool. All useful. Good in different ways and in different cases.
• Show you a cool “chest of drawers” that you can build with them.
• That is what I am going to do with software tools.

5

Show a similar, common DBA task in three different languages, and give you some “tips and tricks” for each. Many times we are going into a fight with no weapons (or at least the wrong ones).

Others?
Python
XML
UDF – might add an example in the future.

6

Some good manuals from the DB2 documentation.

7

Note:
typeset -u – uppercases the dbname
${1-sample} – default value for a parameter
Backticks ` ` run a command and capture its output
We are running db2 commands via the CLP

Issues:
1) No error checking. What if you don’t get something good back from db2? What if the connect fails?
2) No way to check return codes from “underneath” the tee command.
3) No return code on the exit.
4) A “race condition” exists between the SELECT and the DELETE. The time between them is > 0, so they may get different row counts, as rows may get old enough to be purged in between.
5) NO intermediate COMMITS!!!!! Will this blow up your logs? What if it doesn’t run for a while, then gets run? This is hard to fix in ksh / CLP.
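
A minimal sketch of the kind of naive purge script being critiqued (the table, column, and 60-day cutoff are hypothetical):

#!/bin/ksh
# Naive purge: illustrates every issue listed above. Do NOT use as-is.
typeset -u DBNAME=${1-sample}             # uppercase the dbname; default "sample"

db2 connect to $DBNAME | tee purge.log    # no check that the connect worked!

ROWS=`db2 -x "SELECT COUNT(*) FROM app.audit_log
              WHERE created_ts < CURRENT TIMESTAMP - 60 DAYS"`
print "About to purge $ROWS rows" | tee -a purge.log

# ...time passes, so more rows may age past the cutoff (the race condition)...
db2 "DELETE FROM app.audit_log
     WHERE created_ts < CURRENT TIMESTAMP - 60 DAYS" | tee -a purge.log
# One giant DELETE = one giant unit of work: hello, transaction log full.

db2 connect reset
exit                                      # no meaningful return code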

8

Note:
Trying to handle the “race” condition by using start_ts (which is the time at which I start purging rows).
Use count as the number of rows to delete.
Using an error handling function: sql_error_handler. It is part of my standard library – simply a case statement on the rc:
0 – OK
1 – no rows found
4 – warning
8 – error
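
A minimal sketch of such a handler, following the return-code mapping above (the message wording and exit policy are illustrative):

# sql_error_handler: case statement on the return code, per the notes above.
function sql_error_handler {
    typeset rc=$1 msg=$2
    case $rc in
        0) ;;                                      # OK - carry on
        1) print "INFO: no rows found: $msg" ;;    # usually not fatal
        4) print "WARNING: $msg (rc=$rc)" ;;       # log it and continue
        8) print -u2 "ERROR: $msg (rc=$rc)"        # fatal - bail out
           exit $rc ;;
        *) print -u2 "ERROR: unexpected rc=$rc: $msg"
           exit $rc ;;
    esac
}

db2 "DELETE FROM app.audit_log WHERE created_ts < '$start_ts'"   # hypothetical statement
sql_error_handler $? "purge delete failed"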

9

Note:
Looping until we delete enough rows. Committing every “chunk_sz” rows.
Using variable output to capture the sqlca info from “db2 -a”:
purge_row is the number of rows, from the sqlca.
sqlcode also comes from the sqlca.

Issues:
Must rerun the query to determine deletable rows over and over – no cursor. This could be really expensive!
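
A sketch of that loop (the table, predicate, and chunk size are hypothetical, and the sed/awk offsets into the “db2 -a” sqlca dump are illustrative and fragile, as the next slide warns):

# Chunked delete with intermediate commits, driven by the sqlca from "db2 -a".
chunk_sz=10000
purge_row=$chunk_sz
while [ "$purge_row" -ge "$chunk_sz" ]
do
    output=$(db2 -a "DELETE FROM (SELECT 1 FROM app.audit_log
                     WHERE created_ts < '$start_ts'
                     FETCH FIRST $chunk_sz ROWS ONLY) AS chunk")
    sqlcode=$(print -- "$output" | sed -n 's/.*sqlcode: *\(-\{0,1\}[0-9]\{1,\}\).*/\1/p')
    purge_row=$(print -- "$output" | awk '/sqlerrd/ {print $8; exit}')  # sqlerrd(3) = rows affected; offset is fragile!
    if [ "$sqlcode" -lt 0 ]; then
        print -u2 "Chunk delete failed, sqlcode=$sqlcode"
        exit 8
    fi
    db2 "COMMIT" > /dev/null
done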

10

This is an example of what “db2 -a” output looks like, so that you can see what I am “cut”-ting out. This is somewhat fragile, should DB2 ever change this output. Not prod ready, IMHO... not sufficiently advanced technology – I can tell it from magic.

11

The DB2 CLP is limited by the SQLCA – a 32K buffer size. A “panel” at IDUG in Philly had a suggestion (that I haven’t gotten to work yet): the WLM_SET_CLIENT_INFO procedure.
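
For reference, the call shape of that procedure (per its documented signature; the values here are placeholders):

# SYSPROC.WLM_SET_CLIENT_INFO(client_userid, client_wrkstnname,
#                             client_applname, client_acctstr, client_workload)
db2 "CALL SYSPROC.WLM_SET_CLIENT_INFO(NULL, NULL, 'my_script', NULL, NULL)"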

12

Need:

A library of tools / functions.

13

Note:
Use of a function in ksh.
Comments! They are GOOD!
Check to see if the log file exists. If not, see if I can touch the log file, and if so, whether I can write to it. If not, write out reasonable errors and die with an RC of 1.
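
A minimal sketch of such a check (the function name is hypothetical):

# check_logfile: make sure we can create and write the log before we start.
function check_logfile {
    typeset logfile=$1
    if [[ ! -f $logfile ]]; then
        if ! touch "$logfile" 2>/dev/null; then
            print -u2 "ERROR: cannot create log file $logfile"
            exit 1
        fi
    fi
    if [[ ! -w $logfile ]]; then
        print -u2 "ERROR: cannot write to log file $logfile"
        exit 1
    fi
}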

14

Note:
If both stdout (1) and stderr (2) are a tty, then I must be on a terminal. Basically, I am sending stdout and stderr to the log file; or, if I’m on a terminal, I send them through tee and push them to both the tty and the log file.
Use of print – more POSIX compliant vs. echo.
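
A sketch of a start_logging along those lines (uses ksh93 process substitution, and assumes the check_logfile sketch above):

# start_logging: from here on, everything goes to the log - and to the tty
# as well, if we are interactive.
function start_logging {
    typeset logfile=$1
    check_logfile "$logfile"
    exec 3>&1 4>&2                          # save original stdout/stderr for stop_logging
    if [[ -t 1 && -t 2 ]]; then
        exec > >(tee -a "$logfile") 2>&1    # terminal: tee to the tty and the log
    else
        exec >> "$logfile" 2>&1             # cron etc.: log file only
    fi
}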

15

This just shuts down the file descriptors using exec, and makes sure to wait for the children to actually return. Maybe it should check that logging has actually been started before it tries to stop it...
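
A matching stop_logging sketch (assuming start_logging saved the original descriptors on 3 and 4, as above):

# stop_logging: restore stdout/stderr and wait for the backgrounded tee to flush.
function stop_logging {
    exec 1>&3 2>&4 3>&- 4>&-
    wait
}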

16

Note:
“Source” in our .profile – a big deal if running from cron. The cron environment is VERY different from your login environment: none of YOUR “stuff” (.profile, .login) is pulled in. The cron environment is a “crippled” environment.
Pull in our function library.
Use of basename to get the program name.
Fully qualified program names – good practice, especially if running from cron ($PATH may not be your $PATH).
Function calls: the call to start_logging passes in a log file name (which is verified in the function); stop_logging takes no parameters.
Reading input from a file on a specific file descriptor – not commonly done in ksh. In this program, I also read from the terminal, so I need to use a separate “fd” for this input file.
Continue statement.
${ } – using the braces as delimiters reduces ambiguity – BEST practice!
LOGFILE=${RECIPEFILE%.*}.log – strips anything to the right of the last ‘.’ (the shortest matching suffix).
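
A sketch pulling those pieces together (the paths and the function library are hypothetical):

#!/bin/ksh
. $HOME/.profile                   # cron won't source this for you
. /home/dbadmin/lib/funcs.ksh      # our function library (hypothetical path)

PROG=$(basename "$0")
RECIPEFILE=${1:?Usage: $PROG recipefile}
LOGFILE=${RECIPEFILE%.*}.log       # recipe.txt -> recipe.log
start_logging "$LOGFILE"

exec 3< "$RECIPEFILE"              # open the input file on its own fd
while read -u3 line
do
    [[ $line = \#* ]] && continue  # skip comment lines
    read answer?"Run step \"$line\" (y/n)? "   # fd 0 is still the terminal
    [[ $answer = y ]] && print "Running: $line"
done
exec 3<&-                          # close the input fd
stop_logging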

17

Note:
It is good to have a heading with common stuff in it.
Options for ksh:
xtrace (-x) – a debugging trace
nounset (-u) – will die if you reference a parameter that isn’t set
errexit (-e) – can make error handling easier
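
For example, a typical heading might set them like this:

#!/bin/ksh
# Common script heading (a sketch):
set -o nounset     # -u: die if an unset variable is referenced
set -o errexit     # -e: exit on any command that fails
# set -o xtrace    # -x: uncomment for a debugging trace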

18

Note:
A co-worker did this trick all the time; I had never seen it. Concern: what if you don’t get exactly what you expect when you read? Can you catch all possible weirdness??? Buffer overflows and SQL injections are very possible in this kind of thing.

19

Note:
A co-worker did this trick all the time; I had never seen it. Concern: what if you don’t get exactly what you expect when you read? Can you catch all possible weirdness??? Buffer overflows and SQL injections are very possible in this kind of thing.

20

From my backup script: all this, just to subtract one from the log file name! This could be done MUCH more easily in perl.
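
A sketch of the idea (DB2 log files are named like S0000123.LOG; the previous one is S0000122.LOG):

# Derive the previous DB2 log file name in ksh - fiddlier than it sounds.
log=S0000123.LOG
n=${log#S}                        # strip the leading 'S'    -> 0000123.LOG
n=${n%.LOG}                       # strip the '.LOG' suffix  -> 0000123
n=${n##+(0)}                      # strip leading zeros so the math is decimal
prev=$(printf "S%07d.LOG" $(( ${n:-0} - 1 )))
print "$prev"                     # -> S0000122.LOG
# In perl this is a one-liner with sprintf.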

21

The DBI is a library that provides database routines that are agnostic about the underlying database. The various DBDs provide the routines that support DBI calls for a given database manager.

22

The DBI is a library that provides database routines that are agnostic about the underlying database. The various DBDs provide the routines that support DBI calls for a given database manager.

You may need to recompile perl on AIX to get performance enhancements, multi-threading, etc.

23

24

This is what a “typical” DBI program looks like:
use DBI
connect
prepare
execute
fetchrow_array (or fetchrow_hashref) in a loop
commit (or rollback)

25

Similar example but showing INSERT vs. SELECT.

26

I wanted a generic tool to do this VERY common task. Non-perl-capable DBAs can use it to do data updates.

27

DBI – you must think the “DBI way”. Don’t open a db2 cursor. As we saw in the ksh example, we only want to do the SELECT once, as it may be very costly. But we must have intermediate commits (as log size is finite!), so we are forced to do a cursor-style SELECT and then the updates in a loop.

28

First part of the .ini file.

29

30

Second half of the .ini file.

31

32

Error handling for issues in the .ini file could use some improvement. The db_connection functions could also be improved.

33

Rollback: I tried (and really wanted) to be able to actually run the “update” statement and DO the work in the database, and then, if the UPDATE_DATA flag is set to ‘0’ (false), just issue a rollback instead of a commit each time. This would have provided a much better idea (a worst-case scenario, maybe) of the time the updates would actually take to run. However, a rollback closes down the “cursor” in the DBI, ends the loop, etc. So, I just log what I “would have updated”. This can still be very useful for testing and debugging, as it prints out the values of the columns at the DEBUG log level.

34

35

36

37

See the DBI performance documentation for details on fetchrow_arrayref vs. fetchrow_hashref, etc. Perl has autovivification – Java does not.

38

#!/usr/bin/perl
###############################################################################################################
# Name: backfill.pl
#
# Desc: This script is an attempt at a generic 'Update Prod data' script.
#       It assumes a form:
#           SELECT all rows to be updated.
#           Loop thru them one by one and run an update stmt to do the work.
#       Both the SELECT and UPDATE sql are read from the ini file.
# Options: All specified in the config file:
#
#   update_data: 0 - shows what "would" be updated. No commits.
#                1 - Does the work.
#
#   commit_cnt:  # of rows between commits
#
#   fetch_rows:  0 - (or not defined): fetch all rows.
#                Otherwise: # of rows to fetch.
#                This adds a "FETCH FIRST # ROWS ONLY" to your SELECT sql.
#
#   isolation_level: CS - DEFAULT. Other choices are UR, RS, RR.
#
# Pending:
#
#   Some better error checking on reading the ini file.
#   Right now, if you get something wrong in the ini file (a typo or an option missing),
#   you just get back a cryptic error.
#   (In my defense - I pilfered this code from Gat... ) This should be improved in the libs.
#   (In HIS defense - He was nice enough to write and share these libs in the first place.)
#
#   I pulled the get_db_connection sub from the script/lib - so that I could set
#   AutoCommit => 0 vs 1. This should be made a param in the lib.
#
#   I use both the:
#       extract_cfg sub and
#       my $update_data = $cfg->val(CONTROL_PARMS => 'update_data');
#   syntax to pull cfg info from the ini files. This should probably be standardized.
#
#   Rollback - update_data behavior:
#   "Near as I can tell" - the DBI/DBD::DB2 will not allow a rollback without closing the handles.
#   Now: if update_data = 0, I skip executing the update altogether.
#   I would LOVE to actually execute the updates, then at commit_cnt issue a rollback.
#   This would provide SOME idea of how long the script would run.
#   When I issue a rollback instead of a commit, however, the "cursor" gets toasted and
#   fetch drops out of the while loop.
#
# Written by: Ken Shaffer
# Date Written: 2014-07-09
#
# Crontab: n/a
#
# Modified By   Date        CR / PR   Reason
# ____________  __________  _______   _________________________________________________________________________
# kshaffer      2014-06-10  CR20267   Backfill PRL from REQUEST.
###############################################################################################################
use lib '/home/dbadmin/tvpi01/script/lib';
use strict;
use warnings;
use DBI;
use Helper::Config;
use Helper::DB;
use Helper::Logger;
use Helper::MISC;
use Getopt::Std;
use POSIX 'strftime';
use MIME::Base64::Perl;

use constant DAY => 60 * 60 * 24;

our $VERSION = '1.0';

my %opt;
get_args(\%opt);
my $cfg = Helper::Config->new(-file => $opt{c}) or exit(1);
exit(0) if ! $cfg->val(GLOBAL => 'run_ok');
my $logger = set_logger($cfg->val(LOG => 'file'), $cfg->val(LOG => 'level'), $cfg->val(LOG => 'buffer_output'));
check_user($cfg, $logger);
my $lock = ensure_only_running_once($cfg, $logger);
my $dbh = db_connect($cfg, $logger);
check_hadr_role($cfg, $dbh, $logger);
get_candidates($cfg, $dbh, $logger);
$dbh->commit();      # Need this, here, to allow disconnect to work.
$dbh->disconnect();  # Without disconnect, we get a warning.
log_if_should($logger, "Backfill utility completed successfully. ", 'INFO');
log_if_should($logger, "", 'INFO');
log_if_should($logger, "", 'INFO');
exit(0);

######################
#  SUBS start here   #
######################
sub get_candidates {
    my ($cfg, $dbh, $logger) = @_;
    log_if_should($logger, 'Starting fetch of backfill candidates', 'INFO');
    my $update_data = $cfg->val(CONTROL_PARMS => 'update_data');
    log_if_should($logger, "update_data = '$update_data'. ", 'INFO');
    log_if_should($logger, "Looks like we WILL be updating data.\n", 'INFO') if $update_data;
    log_if_should($logger, "Looks like we will NOT be updating data.\n", 'INFO') if ! $update_data;
    my $commit_cnt = $cfg->val(CONTROL_PARMS => 'commit_cnt');
    log_if_should($logger, "commit_cnt = '$commit_cnt' for initial select.\n", 'INFO');
    log_if_should($logger, " Looks like we will be Committing every $commit_cnt updates.\n", 'INFO') if $commit_cnt;
    my $fetch_rows = $cfg->val(SELECT_CNTL => 'fetch_rows');
    log_if_should($logger, "fetch first '$fetch_rows' for select", 'INFO') if $fetch_rows;
    my $isolation_level = $cfg->val(SELECT_CNTL => 'isolation_level');
    log_if_should($logger, "Isolation level will be set to: '$isolation_level' for select", 'INFO') if $isolation_level;

    my %sql_cfg = extract_cfg($cfg, 'SQL_STMT', $logger);  # Pull the SELECT stmt from the ini file.
    my $select_sql = $sql_cfg{sel_sql};                    # Stuff the SELECT into a variable.
    $select_sql .= " FETCH FIRST $fetch_rows ROWS ONLY " if $fetch_rows;  # Add these, if the ini file tells us to.
    $select_sql .= " WITH $isolation_level" if $isolation_level;
    my $sel_sth = $dbh->prepare($select_sql);
    #$select_sql =~ tr/\r\n/ /;
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "Prepared '$select_sql' to find candidates for update.", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');
    #$dbh->{AutoCommit} = 0 if $update_data;  # turn autocommit off - not with new begin_work...

    # PREPARE our update stmt once before the loop.
    my $upd_sth = $dbh->prepare($sql_cfg{upd_sql});
    log_if_should($logger, "Update/Delete statement that we will be running is: $sql_cfg{upd_sql}.", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');

    # EXECUTE THE SELECT STMT.
    my $start_secs = time();
    log_if_should($logger, "Select started at time: [$start_secs].", 'DEBUG');
    $sel_sth->execute();  # Get the initial "cursor" of rows to update.
    my $stop_secs = time();
    log_if_should($logger, "Select finished at time: [$stop_secs].", 'DEBUG');
    my $elapsed_time = $stop_secs - $start_secs;
    log_if_should($logger, "Select took: [$elapsed_time] Seconds.", 'DEBUG');
    my $formatted_time = time_format($elapsed_time);
    log_if_should($logger, "Select took: [$formatted_time].", 'INFO');
    my $updated_cnt = 0;
    my $tot_updates = 0;
    $start_secs = time();

    while (my $row = $sel_sth->fetchrow_hashref) {  # fetch each row
        #my ($person_id, $pax_id_number, $airlineid, $airportid) = @{$row}{qw/PERSON_ID PAX_ID_NUMBER AIRLINEID AIRPORTID/};
        #$upd_sth->execute($pax_id_number, $airlineid, $airportid, $person_id);
        my @colnames = split /\s+/, $sql_cfg{set_parm_list};
        #log_if_should($logger, " COLS: [@colnames]", 'DEBUG');
        my @bindparms = @{$row}{@colnames};
        #log_if_should($logger, " PARMS: [@bindparms]", 'DEBUG');
        if ($update_data) {
            $upd_sth->execute(@bindparms);
            log_if_should($logger, " Updated with Params: [@bindparms].", 'DEBUG');
            $updated_cnt++;
            if ($updated_cnt >= $commit_cnt) {  # Have we hit the commit count?
                $dbh->commit();                 # Commit the last update.
                $stop_secs = time();
                $elapsed_time = $stop_secs - $start_secs;
                $formatted_time = time_format($elapsed_time);
                log_if_should($logger, "Updating [$commit_cnt] rows took: [$formatted_time].", 'INFO');
                if ($elapsed_time > 0) {
                    my $rate = $updated_cnt / $elapsed_time;
                    log_if_should($logger, "Rows per second: [$rate].", 'INFO');
                }
                $updated_cnt = 0;
                $start_secs = time();  # Reset the start time.
            }
        } else {
            log_if_should($logger, " Would have Updated with Params: [@bindparms].", 'DEBUG');
        }
        $tot_updates++;
    } # While

    if ($update_data) {
        $dbh->commit();  # Commit any leftovers.
        $stop_secs = time();
        $elapsed_time = $stop_secs - $start_secs;
        $formatted_time = time_format($elapsed_time);
        log_if_should($logger, "Updating [$updated_cnt] rows took: [$formatted_time].", 'INFO');
        if ($elapsed_time > 0) {
            my $rate = $updated_cnt / $elapsed_time;
            log_if_should($logger, "Rows per second: [$rate].", 'INFO');
        }
    }
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, " Updated: [$tot_updates] Rows.", 'INFO');
    log_if_should($logger, "", 'INFO');
    log_if_should($logger, "", 'INFO');
    return;
} # get_candidates

sub ensure_only_running_once {
    my ($cfg, $logger) = @_;
    my ($socket, $error) = run_only_once($cfg->val(GLOBAL => 'lock_port'));
    if ($error) {
        log_msg($logger, 'Process already running - aborting', 'ERROR');
        exit(1);
    }
    return $socket;
}

sub check_user {
    my ($cfg, $logger) = @_;
    my $src_user = getpwuid($>);
    my $tgt_user = $cfg->val(GLOBAL => 'user');
    if ($src_user ne $tgt_user) {
        log_msg($logger, "$0 must run as '$tgt_user' not '$src_user'", 'CRITICAL');
        exit(1);
    }
}

sub check_hadr_role {
    my ($cfg, $dbh, $logger) = @_;
    my ($role, $error) = hadr_role($dbh);
    if ($error) {
        log_if_should($logger, "Unable to determine HADR Role: $error", 'CRITICAL');
        exit(1);
    }
    if ($role && $role eq 'STANDBY') {
        log_if_should($logger, "Not running, as we are currently the HADR standby", 'DEBUG');
        exit(0);
    }
}

sub db_connect {
    my ($cfg, $logger) = @_;
    my ($dbh, $error) = get_db_connection($cfg);
    if ($error) {
        log_msg($logger, $error, 'CRITICAL');
        exit(1);
    }
    log_if_should($logger, 'Successfully connected to the database', 'DEBUG');
    return $dbh;
}

sub get_args {
    my ($opt) = @_;
    my $Usage = qq{Usage: $0 [-h] -c <config_file>
    -h : This (h)elp message
    -c : The (c)onfig file
} . "\n";
    getopts('hc:', $opt) or die $Usage;
    die $Usage if $opt->{h};
    die $Usage if ! $opt->{c};
}

# This sub gets a [SECTION] from the cfg file and passes back the hash of key, values.
# The SECTION is optional. Obviously, if the section doesn't exist, the functionality (INTER_COL, for instance) won't work.
sub extract_cfg {
    my ($cfg, $section, $logger) = @_;
    my %kv;
    if (! defined $section || ! $cfg->SectionExists($section)) {
        $logger
            ? log_if_should($logger, "Section [$section] is NOT a defined section in the config, so its functionality will be absent.", 'WARNING')
            : warn "Section [$section] is NOT a defined section in the config, so its functionality will be absent.\n";
        # Hokey way to put something in action_on_bad_inter_col if it's not in the cfg file.
        #if ($section eq 'INTER_COL') { $kv{'action_on_bad_inter_col'} = 'NA' };
        #return %kv;
    }
    for my $key ($cfg->Parameters($section)) {
        my $val = $cfg->val($section => $key);
        $kv{$key} = $val;
    }
    return %kv;
}

sub get_db_connection {
    my ($cfg) = @_;
    my %conn;
    $conn{$_} = $cfg->val(DATABASE => $_) for qw/db_name user password/;
    $conn{password} = decode_base64($conn{password});
    my ($dbh, $error);
    eval {
        $dbh = DBI->connect(
            "dbi:DB2:$conn{db_name}",
            $conn{user},
            $conn{password},
            {
                AutoCommit => 0, RaiseError => 1, PrintError => 0,
                db2_info_programname => ($cfg->val(DATABASE => 'prog_name') || 'nrr_purge')
            }
        );
    };
    $error = $DBI::errstr if $@ || ! $dbh;
    return ($dbh, $error);
}

#sub calc_purge_ts {
#    my ($cfg) = @_;
#    my ($epoch) = time;
#    $epoch -= $cfg->val(PURGE => 'days_back') * DAY;
#    return strftime('%Y-%m-%d-%H.%M.%S', gmtime($epoch));
#}

sub time_format {
    my ($time) = @_;
    my $days = int($time / 86400);
    $time -= ($days * 86400);
    my $hours = int($time / 3600);
    $time -= ($hours * 3600);
    my $mins = int($time / 60);
    my $secs = ($time % 60);
    $days  = $days  < 1 ? '' : $days  . ' days ';
    $hours = $hours < 1 ? '' : $hours . ' hours ';
    $mins  = $mins  < 1 ? '' : $mins  . ' mins ';
    $time = $days . $hours . $mins . $secs . ' secs';
    return $time;
}

39

This dumps out the settings in the ini file, for documentation. The select can be devilishly complex – joins, merges, etc.

40

This shows the UPDATE statement as prepared. You can see the parameter markers and the values that each was assigned for each row. If we had committed during the run, it would have logged the rate in rows / second. This is VERY useful on long performance test/prod runs: “Did it slow down?”

If you have DEBUG logging set in the ini file, you get the param values for each “update” statement. That might be too much for prod (an enormous log file, plus the time to write the log records); INFO would just show each commit and the rate info.

41

###############################################################################################################
# Config file for backfill.pl - This file defines the params that control the script.
# This file is for populating rows into sfpd_search from request.
# Written by: Ken Shaffer
# Date Written: 2015-01-21
#
# Modified By   Date        CR / PR   Reason
# ____________  __________  _______   _________________________________________________________________________
# kshaffer      2015-01-21  CRXXXX    Backfill MY_SEARCH from XXXX Table.
#                                     Couldn't just do ONE MERGE command, as it would have done all
#                                     of the millions of updates/inserts in one transaction.
#                                     Considered 1) Doing a custom backfill.pl that did an update and then an
#                                                   insert for each key. However, I'd have had to do another
#                                                   select to see if it existed or not, for each row.
#                                                2) Writing a udf with a cursor, and then either #1 above,
#                                                   or a merge statement.
#                                     This option seemed better and easier... "time will tell".
# kshaffer      2015-03-11  PR22685   Add created_ts > 'ts' to the initial select, so that we can control
#                                     what we run on. Also, increased the commit count from 10K to 100K.
###############################################################################################################
[GLOBAL]
run_ok = 1
user = ken
lock_port = 31652
run_if_standby = 0

[CONTROL_PARMS]
update_data = 1
#update_data: 0 - skip the update statement (just print what we WOULD do). 1 - do the updates.
commit_cnt = 100000
#commit_cnt: how often to commit.

[DATABASE]
db_name = kendb
prog_name = BF_my_search
user = ken
#PROD:
#password = xxxx
#Test:
password = xxxxx

[LOG]
file = ./BF_my_search.log
level = INFO
buffer_output = 0

[SQL_STMT]
# sel_sql is the STMT to select the "CURSOR" of rows to be updated.
# HERE: This grabs THE one row for each person with the greatest line_ts.
sel_sql =<<SELECTSQL
SELECT
    r.p_id as P_ID,
    r.u_id as U_ID,
    case when r.class in ('1','2','9') then 1
         else 0
    end as RECORD_TYPE,
    r.full_name as FULL_NAME,
    r.surname as SURNAME,
    r.givenname as GIVENNAME,
    r.middlename as MIDDLENAME,
    r.id_number as ID_NUMBER,
    r.lineid as LINEID,
    r.portid as PORTID,
    r.f_number as F_NUMBER,
    r.over_flag as OVER_FLAG,
    r.birthdate as BIRTHDATE,
    r.gender as GENDER,
    r.d_datetime as D_DATETIME,
    r.class as CLASS,
    r.line_ts as LINE_TS,
    r.created_ts as CREATED_TS,
    r.last_updated_ts as LAST_UPDATED_TS,
    r.scare as SCARE,
    r.incomplete_cd as INCOMPLETE_CD,
    r.port_number as PORT_NUMBER,
    r.port_country as PORT_COUNTRY,
    r.msg as MSG
FROM
    KEN.R_TABLE r
WHERE
    created_ts > '0001-01-01-00.00.00.000000' AND
    line_ts = (SELECT distinct max(line_ts) FROM ken.r_table r1 WHERE r1.p_id = r.p_id)
SELECTSQL
# This is currently unused - but might be needed for future requirements.
sel_col_list =
# upd_sql is the STMT to update each row by some key fetched in the "cursor" above.
# Here: This stuffs each row into a pseudo table and runs a merge statement with that row against the MY_SEARCH table.
# For each row: if that p_id exists, and the line_ts on this row is greater than the one in my_search,
# update the my_search row with the cols from the r_table row.
# Else, if the p_id doesn't exist, insert the row.
# Note:
# This will log (at DEBUG level) an "update" for each row in the select above. That doesn't mean that it UPDATES
# each row, just that backfill.pl ran a MERGE statement on it.
#
upd_sql =<<UPDSQL
MERGE INTO TVPSF.SFPD_SEARCH TARGET
USING ( VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) )
AS PARM ( P_ID, U_ID, RECORD_TYPE, FULL_NAME, SURNAME, GIVENNAME, MIDDLENAME,
          ID_NUMBER, LINEID, PORTID, F_NUMBER, OVER_FLAG, BIRTHDATE, GENDER,
          D_DATETIME, CLASS, LINE_TS, CREATED_TS, LAST_UPDATED_TS, SCARE,
          REASON_CD, PORT_NUMBER, PORT_COUNTRY, MSG )
ON ( TARGET.P_ID = PARM.P_ID )
WHEN MATCHED AND ( PARM.LINE_TS > TARGET.LINE_TS )
THEN UPDATE SET
    TARGET.P_ID          = PARM.P_ID,
    TARGET.PKEY_VALUE    = PARM.U_ID,
    TARGET.RECORD_TYPE   = PARM.RECORD_TYPE,
    TARGET.FULL_NAME     = PARM.FULL_NAME,
    TARGET.SURNAME       = PARM.SURNAME,
    TARGET.GIVENNAME     = PARM.GIVENNAME,
    TARGET.MIDDLENAME    = PARM.MIDDLENAME,
    TARGET.LOCATOR       = PARM.ID_NUMBER,
    TARGET.LINEID        = PARM.LINEID,
    TARGET.PORTID        = PARM.PORTID,
    TARGET.NUMBER        = PARM.F_NUMBER,
    TARGET.OVER_FLAG     = PARM.OVER_FLAG,
    TARGET.BIRTHDATE     = PARM.BIRTHDATE,
    TARGET.GENDER        = PARM.GENDER,
    TARGET.D_DATETIME    = PARM.D_DATETIME,
    TARGET.CLASS         = PARM.CLASS,
    TARGET.LINE_TS       = PARM.LINE_TS,
    TARGET.CREATED_TS    = PARM.CREATED_TS,
    TARGET.UPDATE_TS     = PARM.LAST_UPDATED_TS,
    TARGET.SCARE         = PARM.SCARE,
    TARGET.REASON_CD     = PARM.REASON_CD,
    TARGET.PORT_NUMBER   = PARM.PORT_NUMBER,
    TARGET.PORT_COUNTRY  = PARM.PORT_COUNTRY,
    TARGET.MSG           = PARM.MSG
WHEN NOT MATCHED THEN INSERT
    ( P_ID, PKEY_VALUE, RECORD_TYPE, FULL_NAME, SURNAME, GIVENNAME, MIDDLENAME,
      LOCATOR, LINEID, PORTID, NUMBER, OVER_FLAG, BIRTHDATE, GENDER, D_DATETIME,
      CLASS, LINE_TS, CREATED_TS, UPDATE_TS, SCARE, REASON_CD, PORT_NUMBER,
      PORT_COUNTRY, MSG )
    VALUES ( PARM.P_ID, PARM.U_ID, PARM.RECORD_TYPE, PARM.FULL_NAME, PARM.SURNAME,
             PARM.GIVENNAME, PARM.MIDDLENAME, PARM.ID_NUMBER, PARM.LINEID,
             PARM.PORTID, PARM.F_NUMBER, PARM.OVER_FLAG, PARM.BIRTHDATE,
             PARM.GENDER, PARM.D_DATETIME, PARM.CLASS, PARM.LINE_TS,
             PARM.CREATED_TS, PARM.LAST_UPDATED_TS, PARM.SCARE, PARM.REASON_CD,
             PARM.PORT_NUMBER, PARM.PORT_COUNTRY, PARM.MSG )
ELSE IGNORE
UPDSQL
# These get used to pass parms into the update stmt, in order, by their SELECT aliases.
set_parm_list = P_ID U_ID RECORD_TYPE FULL_NAME SURNAME GIVENNAME MIDDLENAME ID_NUMBER LINEID PORTID F_NUMBER OVER_FLAG BIRTHDATE GENDER D_DATETIME CLASS LINE_TS CREATED_TS LAST_UPDATED_TS SCARE INCOMPLETE_CD PORT_NUMBER PORT_COUNTRY MSG

[SELECT_CNTL]
fetch_rows = 0
#fetch_rows: 0 - means don't add a "fetch first $fetch_rows rows only" clause. Usually, this will be 0.
isolation_level = CS
#isolation_level: if set: one of UR, CS, RS or RR.

42

Why Java? The slide says it all: most development projects have at least SOME Java.

43

Look in the SQLLIB/samples/jdbc directory

44

Simple JDBC example

Connect

Execute statement

Loop thru Result Set

Close

45

The properties file is similar to the .ini file in perl. Not all of the functionality has been duplicated, yet.

46

47

48

49

50

51

52

53

Rollback closes the cursor and shuts down all the fun here, too.

54

import java.lang.*;
import java.sql.*;
import javax.sql.*;
import java.io.*;
import java.util.Properties;
import java.util.Hashtable;
import java.util.Enumeration;

class BackFill_ds
{
    public static void main(String argv[])
    {
        String propfile = "BackFill.properties";
        String sel_stmt;
        String upd_stmt;
        boolean updateData;
        FileInputStream in = null;
        Properties props = new Properties();
        Connection con = null;

        try
        {
            in = new FileInputStream(propfile);   // throws FileNotFoundException if missing
            props.load(in);
            // Read the properties file and dump it via an enumeration.
            Enumeration<?> e = props.propertyNames();
            while (e.hasMoreElements()) {
                String key = (String) e.nextElement();
                String value = props.getProperty(key);
                System.out.println("Key: [" + key + "] Value: [" + value + "]");
            } // while
        } catch (IOException ex) {
            ex.printStackTrace();
            return;   // no properties means nothing sensible to do
        } finally {
            if (in != null) {
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } // if
        } // finally

        try
        {
            com.ibm.db2.jcc.DB2SimpleDataSource ds =
                new com.ibm.db2.jcc.DB2SimpleDataSource();
            ds.setDriverType(4);
            ds.setServerName(props.getProperty("server", "localhost"));
            ds.setPortNumber(Integer.valueOf(props.getProperty("port", "50000")));
            ds.setDatabaseName(props.getProperty("alias", "sample"));
            ds.setUser(props.getProperty("userid"));
            ds.setPassword(props.getProperty("passwd"));
            ds.setClientProgramName(props.getProperty("progname", "KenApp"));

            int commitCnt_i = Integer.valueOf(props.getProperty("commitCount", "0"));
            sel_stmt = props.getProperty("sel_stmt");
            upd_stmt = props.getProperty("upd_stmt");
            updateData = Boolean.parseBoolean(props.getProperty("updateData", "false"));
            System.out.println("Boolean updateData = " + updateData);

            // Should: verify that the list is comma separated?
            String upd_parm_list = props.getProperty("upd_parm_list");
            String[] updCols = upd_parm_list.split(",");
            for (int j = 0; j < updCols.length; j++) {
                updCols[j] = updCols[j].toUpperCase().trim();
            }

            // Connect to the 'alias' database.
            con = ds.getConnection();

            // Call the method that does the work.
            execPreparedQueryWithParam(con, commitCnt_i, sel_stmt, upd_stmt, updCols, updateData);
            con.close();
        }
        catch (Exception e)
        {
            JdbcException jdbcExc = new JdbcException(e);
            jdbcExc.handle();
        }
    } // main

    static void execPreparedQueryWithParam(Connection con, int commitCnt, String sel_stmt,
                                           String upd_stmt, String[] updCols, boolean updateData)
    {
        try
        {
            con.setAutoCommit(false);

            // Prepare the SELECT and "UPDATE" queries.
            PreparedStatement sstmt = con.prepareStatement(sel_stmt);
            PreparedStatement ustmt = con.prepareStatement(upd_stmt);
            //sstmt.setInt(1, 51); // Only needed if I had a parameter marker in the select.

            ResultSet rs = sstmt.executeQuery();
            ResultSetMetaData rsmd = rs.getMetaData();
            int columnCount = rsmd.getColumnCount();
            Hashtable<String, String> colTypes = new Hashtable<String, String>();
            Hashtable<String, String> colValsString = new Hashtable<String, String>();
            Hashtable<String, Integer> colValsInt = new Hashtable<String, Integer>();

            // Get the Name and Type of each column selected and put them in a hash table.
            // The column count starts from 1.
            for (int i = 1; i < columnCount + 1; i++) {
                String colName = rsmd.getColumnName(i);
                int colType = rsmd.getColumnType(i);
                String colTypeName = rsmd.getColumnTypeName(i);
                System.out.println("Column Name: [" + colName + "]");
                System.out.println("Column Type: [" + colType + "]");
                System.out.println("Column TypeName: [" + colTypeName + "]");
                colTypes.put(colName, colTypeName.toUpperCase());
                System.out.println();
            }
            System.out.println();
            System.out.println();
            //System.out.println("The Type of DEPTNUMB is: " + colTypes.get("DEPTNUMB"));
            //System.out.println("The Type of LOCATION is: " + colTypes.get("LOCATION"));

            // For each update column in the list, get the type from the select list.
            String updTypes[] = new String[updCols.length];
            for (int j = 0; j < updCols.length; j++) {
                System.out.println("Update col is: " + updCols[j] + " J is: " + j);
                if (colTypes.containsKey(updCols[j])) {
                    System.out.println("The Type of " + updCols[j] + " is: " + colTypes.get(updCols[j]));
                    updTypes[j] = colTypes.get(updCols[j]);
                } else {
                    System.out.println("There is no entry in the table for the type of " + updCols[j]);
                    updTypes[j] = "";   // avoid a null in the switch below
                }
            }
            System.out.println();
            System.out.println(" Results:\n" +
                               " DEPTNUMB LOCATION\n" +
                               " -------- --------------");
            int r = 0;
            while (rs.next())
            {
                // Pull each column's value into the matching hash table, by type.
                for (int i = 1; i < columnCount + 1; i++) {
                    String colName = rsmd.getColumnName(i);
                    switch (colTypes.get(colName)) {
                        case "VARCHAR":
                            colValsString.put(colName, rs.getString(i));
                            System.out.println("Put ValsString " + i + " to " + rs.getString(i) + " as a String");
                            break;
                        case "SMALLINT":
                            colValsInt.put(colName, rs.getInt(i));
                            System.out.println("Put ValsInt " + i + " to " + rs.getInt(i) + " as an Int");
                            break;
                        default:
                            System.out.println("Select Type: [" + colTypes.get(colName) + "] Not Found.");
                            break;
                    }
                }
                // Bind each update parameter from the fetched row, by type.
                for (int j = 0; j < updCols.length; j++) {
                    switch (updTypes[j]) {
                        case "VARCHAR":
                            ustmt.setString(j + 1, colValsString.get(updCols[j]));
                            System.out.println("Set " + j + " to " + updCols[j] + " as a String");
                            break;
                        case "SMALLINT":
                            ustmt.setInt(j + 1, colValsInt.get(updCols[j]));
                            System.out.println("Set " + j + " to " + updCols[j] + " as an int");
                            break;
                        default:
                            System.out.println("Type: [" + updTypes[j] + "] Not Found.");
                            break;
                    }
                }
                System.out.println();
                if (updateData) {
                    System.out.println(" Updating...");
                    int rows = ustmt.executeUpdate();
                    System.out.printf("%d row(s) updated!\n", rows);
                    System.out.println();
                } else {
                    System.out.println("Would have updated...");
                    System.out.println();
                }
                if (++r == commitCnt) {
                    System.out.println("Committing at r=[" + r + "].");
                    con.commit();
                    r = 0;
                    System.out.println("Sleeping...");
                    Thread.sleep(30);
                    System.out.println("Awake now... ");
                } // if
            } // while
            rs.close();
            sstmt.close();
            ustmt.close();
            con.commit();
        } // try
        catch (Exception e)
        {
            JdbcException jdbcExc = new JdbcException(e, con);
            jdbcExc.handle();
        }
    } // execPreparedQueryWithParam
} // BackFill_ds

55

56

Java, Pascal, Ada and C require all variables to have a declared type, and support the use of explicit casts of arithmetic values to other arithmetic types. Java, C#, Ada and Pascal are sometimes said to be more strongly typed than C, a claim that is probably based on the fact that C supports more kinds of implicit conversions, and C also allows pointer values to be explicitly cast while Java and Pascal do not. Java itself may be considered more strongly typed than Pascal, as manners of evading the static type system in Java are controlled by the Java Virtual Machine's type system. C# is similar to Java in that respect, though it allows disabling dynamic type checking by explicitly putting code segments in an "unsafe context". Pascal's type system has been described as "too strong", because the size of an array or string is part of its type, making some programming tasks very difficult.[7][8]

Smalltalk, Perl, JavaScript, Ruby, Python, and Self are all "strongly typed" in the sense that typing errors are prevented at runtime and they do little implicit type conversion, but these languages make no use of static type checking: the compiler does not check or enforce type constraint rules. The term duck typing is now used to describe the dynamic typing paradigm used by the languages in this group.

57

What would a Star Trek-themed presentation be without Lt. Uhura?

58

If you have any questions about using any of the code that you have seen here, or have any other comments, I’d love to hear from you.

59
