Monday, June 15, 2015

Package cache




What is the package cache?

            DB2 LUW uses many different memory areas to cache critical information. These memory areas are very powerful tools in controlling database performance. The package cache is one such area. This is where DB2 stores static and dynamic queries, access plans, and information about the execution of those queries.
 What information is in the package cache?
            Like a gold mine, your package cache may seem like a dark hole with an entire mountain of information behind it, which you must sort through to find the nuggets that will make you rich. 

A few of the more interesting pieces of information available are:
Number of executions — the number of times a particular query has been executed
Execution time — the total time spent on all executions of a particular query
Lock wait time — the total time a particular query spent waiting on locks
Sort time — the total time a particular query spent organizing and ordering data
Rows read — the total number of rows a particular query has examined
Rows returned — the total number of rows a query has returned in result sets
CPU time — the total CPU time consumed by a particular query
All information in the package cache is cumulative since the last time a particular query was placed in the package cache.
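On DB2 9.7 and later, this information can be queried with the MON_GET_PKG_CACHE_STMT table function. A minimal sketch (column list trimmed for readability; the query ranks statements by total activity time):

```shell
db2 "select num_executions,
            total_act_time,
            lock_wait_time,
            total_section_sort_time,
            rows_read,
            rows_returned,
            total_cpu_time,
            varchar(stmt_text, 100) as stmt_text
     from table(mon_get_pkg_cache_stmt(null, null, null, -2)) as t
     order by total_act_time desc
     fetch first 10 rows only"
```

The `-2` argument asks for data from all database members; passing NULL for the first three arguments returns both static and dynamic statements.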
What information is NOT available in the package cache?

While the package cache contains a lot of useful information, it does not replace a SQL statement event monitor. 

The package cache cannot tell you:
- When a particular query was executed
- Some details on static SQL
- Specific values used in the execution of a statement, when that statement uses parameter markers
- Literal values used during execution

How long does information stay in the package cache?

     DB2 moves individual statements in and out of the package cache to keep the most frequently executed statements in memory. Consequently, a statement might be in the package cache for a while before it is overwritten.
You can purge the contents of the package cache with:

            FLUSH PACKAGE CACHE DYNAMIC

It is important to note that the execution of this statement can affect performance on a running system, since it will cause the access plans for all dynamic SQL to be re-generated.
Deactivating a database will also cause the package cache to be emptied.






Friday, May 8, 2015

Rebuild index



What is the difference between rebuilding indexes using the REORG command and the LOAD command?

There are two ways to rebuild indexes: 
1) REORG INDEXES ALL FOR TABLE tablename 
2) LOAD FROM /dev/null OF DEL INSERT INTO tablename INDEXING MODE REBUILD
 

What is the difference between the above two? 

Rebuilding indexes using LOAD is generally faster than the REORG INDEXES command. 

Observation 1: 
In the LOAD command, the sort phase runs in parallel. This can be verified with the db2pd -sort and -reorgs index options; the output indicates two sort memory areas allocated and working at the same time. 

Observation 2: 
In the REORG INDEXES ALL FOR TABLE command, the sort phases run one after another. The output of the db2pd -sort and -reorgs index options indicates one sort operation performed after another. 

> These observations were made with intra-partition parallelism disabled (the INTRA_PARALLEL configuration setting is NO), running shared sorts only. 
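The monitoring commands referenced above can be run while the rebuild is in flight. A minimal sketch, where MYDB and tablename are placeholder names to substitute with your own:

```shell
# Start the index rebuild in the background (LOAD variant shown here)
db2 "load from /dev/null of del insert into tablename indexing mode rebuild" &

# While the rebuild runs, watch the sort memory areas...
db2pd -db MYDB -sort

# ...and the index rebuild progress
db2pd -db MYDB -reorgs index
```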


Wednesday, April 15, 2015

LBAC overview



 Label Based Access Control

 >   SECADM authority is required
 >   security policies can be created, altered, and dropped (CREATE/ALTER/DROP SECURITY POLICY)

 > a security label component has:

           - a name
           - a type: TREE, ARRAY, or SET
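As a sketch, these pieces fit together as follows. The component, policy, and label names here are invented for illustration, and the statements assume SECADM authority:

```shell
# Define a security label component of type ARRAY (ordered, highest level first)
db2 "create security label component sec_level array ['TOP SECRET', 'SECRET', 'PUBLIC']"

# Create a security policy built from that component, using the standard DB2 rule set
db2 "create security policy data_policy components sec_level with db2lbacrules"

# Define a concrete security label under the policy
db2 "create security label data_policy.secret_label component sec_level 'SECRET'"
```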



Friday, March 6, 2015

How can you identify if auto-resize is enabled?



$db2 get snapshot for tablespaces on MYDB


For automatic storage, check “Using automatic storage” – if it’s “1”, the tablespace is enabled for automatic storage. For automatic resize, check “Auto-resize enabled” – if it’s “Yes”, auto-resize is enabled.
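On DB2 9.7 and later, the same information can be pulled with SQL instead of a snapshot. A sketch assuming the MON_GET_TABLESPACE table function is available (the `-2` argument requests all database members):

```shell
db2 "select varchar(tbsp_name, 20) as tbsp_name,
            tbsp_using_auto_storage as auto_storage,
            tbsp_auto_resize_enabled as auto_resize
     from table(mon_get_tablespace(null, -2)) as t"
```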


Monday, March 2, 2015

backup


Take an OFFLINE backup 
db2 connect to <dbname>
db2 quiesce database immediate force connections
db2 connect reset
db2 backup database <dbname> to <path> compress without prompting
after backup completes:
db2 connect to <dbname>
db2 unquiesce database
db2 connect reset
Take an ONLINE Backup
db2 backup db <dbname> to <path> online compress
or use a background process on a linux/unix machine
nohup db2 backup db <dbname> to <path> online compress &
List recent backups and where they are stored
=>db2 list history backup all for <dbname>
Check the integrity of a backup image
=>db2ckbkp <image name>
Restore from Incremental Backup Automatic
=>db2 restore db <dbname> incremental automatic taken at <timestamp>
If you need to restore an incremental backup manually this command will give you the required previous backups.
=>db2ckrst -d <dbname> -t <timestamp of image>
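The offline backup steps above can be wrapped in a small script. A sketch assuming a Linux/UNIX shell; the database name and backup path are placeholder values to substitute:

```shell
#!/bin/sh
DBNAME=sample          # substitute your database name
BACKUP_PATH=/db2backup # substitute your backup directory

db2 connect to $DBNAME
db2 quiesce database immediate force connections
db2 connect reset

# Offline backup; capture the exit code before unquiescing
db2 backup database $DBNAME to $BACKUP_PATH compress without prompting
RC=$?

db2 connect to $DBNAME
db2 unquiesce database
db2 connect reset

exit $RC
```

Unquiescing runs even if the backup fails, so the database is not left inaccessible; the script's exit code reflects the backup result.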




Question:  How can I delete all FULL backups of a specific DB2 database  older than X amount of days?
Answer:  To delete FULL backups older than X days, use this example as a basis and modify it to your requirements. This example applies if you’re using TSM as the backup mechanism.

$db2adutl delete FULL older than 14 days  db MYDB




Thursday, December 4, 2014

Replicating Data Load from HADR Primary to Standby server


         In order for a DB2 LOAD to be replicated from an HADR primary server to a standby server, DB2 must be able to access the load data on the standby.  This is done using a shared file system and the COPY YES option on the LOAD command at the primary server. 

         If it is not possible to establish a shared file system, the standby server can be deactivated ("db2 deactivate database <db>") before the load runs and activated ("db2 activate database <db>") after the load has completed on the primary server and the load file has been manually copied to the standby (e.g. via sftp or rcp).  It is important to note that it is NOT the load input file that is copied to the standby, but a specially formatted file which DB2 creates as part of the load process.  

Below is an example using the shared file system, which in this case is NFS.

     In this document, commands on the primary server are issued as db2inst1@linux, and commands on the standby server are issued as db2inst2@linux.

Step 1 – Setup Details

Test Setup:
Primary Server – db2hadr1
Primary Instance – db2inst1
Standby Server – db2hadr2
Standby Instance – db2inst2

NFS Details

db2inst1@linux:~> cat /etc/exports
/db2shared/     db2hadr2(rw,no_root_squash,sync)

db2inst2@linux:~> mount
<snip>
db2hadr1:/db2shared on /db2shared type nfs (rw,addr=192.168.217.151)

Step 2 – Create a test table that looks like the SAMPLE database EMPLOYEE table.  We will call it “EMPCOPY.”

db2inst1@linux:~> db2 create table empcopy like employee
DB20000I  The SQL command completed successfully.

Step 3 – Export some data from the EMPLOYEE table to load into our test table.

db2inst1@linux:~> db2 "export to /db2db/emp.del of del select * from employee"
SQL3104N  The Export utility is beginning to export data to file
"/db2db/emp.del".

SQL3105N  The Export utility has finished exporting "42" rows.
Number of rows exported: 42




Step 4 – LOAD the data utilizing the COPY YES option to copy the data to the shared file system.

db2inst1@linux:~> db2 "load from /db2db/emp.del of del replace into empcopy copy yes to /db2shared"
SQL3109N  The utility is beginning to load data from file "/db2db/emp.del".

SQL3500W  The utility is beginning the "LOAD" phase at time "01/08/2007
15:16:51.723262".

SQL3519W  Begin Load Consistency Point. Input record count = "0".
SQL3520W  Load Consistency Point was successful.
SQL3110N  The utility has completed processing.  "42" rows were read from the input file.
SQL3519W  Begin Load Consistency Point. Input record count = "42".
SQL3520W  Load Consistency Point was successful.
SQL3515W  The utility has finished the "LOAD" phase at time
"01/08/2007 15:16:51.792947".

Number of rows read         = 42
Number of rows skipped      = 0
Number of rows loaded       = 42
Number of rows rejected     = 0
Number of rows deleted      = 0
Number of rows committed    = 42

Step 5 – Check to see if our LOAD data was copied to the standby server.  Note that the timestamps from our load output and the copy file match and the tablespace ID (SAMPLE.x.db2inst1...) is for our tablespace.

Again, it is NOT the load input file that is copied to the standby, but a specially formatted file which DB2 creates as part of the load process. 

db2inst2@linux:~> ls -l /db2shared/
total 2137
-rw-r-----  1 db2inst1 db2grp1 274456 2007-01-08 15:16 SAMPLE.4.db2inst1.NODE0000.CATN0000.20070108151651.001




Step 6 – Look for messages in the db2diag.log on the standby server to verify that the load was executed.  The timestamps will match.

db2inst2@linux:~> less ~/sqllib/db2dump/db2diag.log

2007-01-08-15.26.38.393708-480 I862317G372        LEVEL: Warning
PID     : 3500                 TID  : 366115520   PROC : db2agnti (SAMPLE) 0
INSTANCE: db2inst2             NODE : 000         DB   : SAMPLE
APPHDL  : 0-97
FUNCTION: DB2 UDB, database utilities, sqludcpy, probe:544
DATA #1 : String, 64 bytes
Starting to restore a load copy.
DB2INST1.EMPCOPY.20070108151651

<snip>

2007-01-08-15.26.48.774310-480 I863391G333        LEVEL: Warning
PID     : 3500                 TID  : 366115520   PROC : db2agnti (SAMPLE) 0
INSTANCE: db2inst2             NODE : 000         DB   : SAMPLE
APPHDL  : 0-97
FUNCTION: DB2 UDB, database utilities, sqludcpy, probe:1134
MESSAGE : Load copy restore completed successfully.

Step 7 (Optional) – Just for fun, fail over, check the table and fail back.

db2inst2@linux:~> db2 takeover hadr on db sample
DB20000I  The TAKEOVER HADR ON DATABASE command completed successfully.
db2inst2@linux:~> db2 connect to sample

   Database Connection Information

 Database server        = DB2/LINUX 9.1.1
 SQL authorization ID   = DB2INST2
 Local database alias   = SAMPLE

db2inst2@linux:~> db2 "select count(*) from db2inst1.empcopy"

1
-----------
         42

  1 record(s) selected.

db2inst1@linux:/db2db> db2 takeover hadr on db sample

DB20000I  The TAKEOVER HADR ON DATABASE command completed successfully.




                                                                                                                                                             Written By Craig Maddux