Archive
event 10046 – tracing sessions
how to trace a session by
– alter session
– oradebug
– logon trigger
tracefiles will be placed in user_dump_dest (10g) or in diag_dest/db/inst/trace (11g)
Session Tracing
alter session set max_dump_file_size = unlimited;
alter session set tracefile_identifier='tr10046';
alter session set events '10046 trace name context forever, level 12';
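The same trace can also be switched on from a second session with oradebug, or automatically for one user via a logon trigger – a minimal sketch, where the OS process id 12345 and the user name SCOTT are only placeholders:
-- as sysdba, attach to the server process of the session you want to trace
oradebug setospid 12345
oradebug unlimit
oradebug event 10046 trace name context forever, level 12
oradebug tracefile_name
oradebug event 10046 trace name context off
-- or trace every new session of one user with a logon trigger
create or replace trigger trg_trace_scott
after logon on database
begin
  if user = 'SCOTT' then
    execute immediate
      'alter session set events ''10046 trace name context forever, level 12''';
  end if;
end;
/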
tkprof – analyze tracefiles
After creating some traces (e.g. via event 10046) you can analyze those traces with tkprof
tkprof <input tracefile> <output file> aggregate=no sys=no explain=username/password
sys=no hides the recursive calls
aggregate=no doesn't aggregate recurring statements into one summary
with sort=xxxx,yyyy,zzzz you can sort the output (see the example after the list of sort keys)
prscnt  number of times parse was called
prscpu  cpu time parsing
prsela  elapsed time parsing
prsdsk  number of disk reads during parse
prsqry  number of buffers for consistent read during parse
prscu   number of buffers for current read during parse
prsmis  number of misses in library cache during parse
execnt  number of times execute was called
execpu  cpu time spent executing
exeela  elapsed time executing
exedsk  number of disk reads during execute
exeqry  number of buffers for consistent read during execute
execu   number of buffers for current read during execute
exerow  number of rows processed during execute
exemis  number of library cache misses during execute
fchcnt  number of times fetch was called
fchcpu  cpu time spent fetching
fchela  elapsed time fetching
fchdsk  number of disk reads during fetch
fchqry  number of buffers for consistent read during fetch
fchcu   number of buffers for current read during fetch
fchrow  number of rows fetched
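For example, to list the statements with the highest elapsed execute and fetch times first (the trace file name is only a placeholder):
tkprof orcl_ora_12345.trc orcl_ora_12345.txt sort=exeela,fchela sys=no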
event 10053 – obtaining optimizer computations
to see how the oracle optimizer generates the execution plan, use event 10053
alter session set max_dump_file_size = unlimited;
alter session set tracefile_identifier='tr10053';
alter session set events '10053 trace name context forever, level 1';
the tracefile will be written to user_dump_dest (10.2) or to <diag_dest>/rdbms/<database>/<instance>/trace (11g)
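In 11g you can also ask the database directly for the trace file of the current session:
select value from v$diag_info where name = 'Default Trace File';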
bulk commit
For bulk operations it's helpful to commit frequently to avoid ora-1555 (snapshot too old) and to avoid holding locks for a long time.
with the following pl/sql block you can handle this more easily
DECLARE
  currentcursor NUMBER := 0;
  total NUMBER := 0;
  CURSOR delete_current_record IS
    SELECT rowid FROM BIGEMP;
BEGIN
  FOR rec IN delete_current_record LOOP
    DELETE FROM BIGEMP WHERE rowid = rec.rowid;
    total := total + 1;
    currentcursor := currentcursor + 1;
    IF (currentcursor >= 10000) THEN
      COMMIT;
      currentcursor := 0;
    END IF;
  END LOOP;
  COMMIT;
END;
/
ADR – Automatic Diagnostic Repository
Changing listener-log directory
to change the directory of the listener log (log.xml) in 11g
set in listener.ora
ADR_BASE_<listenername>=directory
e.g.
ADR_BASE_LISTENER=/var/opt/oracle
the information will be written to e.g.
/var/opt/oracle/diag/tnslsnr/server/listener/alert/log.xml
additional events
Cursor tracing
to find the hash_value of the parent cursor (used as the address in the event below)
select sql_text, hash_value
from v$sqlarea where sql_id = '<sql_id>';
turn on tracing:
alter session set events 'immediate trace name cursortrace level 99172, address <hash_value>';
turn off tracing
alter system set events 'immediate trace name cursortrace level 2147483648, address 1';
stored outlines – fixed execution plan
stored outlines are designed to force a specific sql-statement to use a fixed execution plan.
This execution plan will be used even if the table or the environment changes.
Using stored outlines depends on the hash_value (in 10g) / sql_id (in 11g).
In 11g a new feature was introduced – sql plan management.
This is a great extension to guarantee stable plans.
See more in sql plan management
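A minimal sketch of creating and using an outline, assuming a table EMP and a category name DEMO_CAT (both are placeholders):
create or replace outline emp_outline
for category demo_cat
on select * from emp where empno = 7839;

alter session set use_stored_outlines = demo_cat;

select name, category, used from user_outlines;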
unix scripts – hints
This section lists some useful unix commands and samples
- find files with specific content
find . -atime +30 -exec grep -il "abc" {} \;
- find files with special pattern and access-time
find . -name "*tr?" -atime +3
and delete those
find . -name "*tr?" -atime +2 -delete
- looping n-times
I=1
while test $I -le 100
do
sqlplus /nolog @script
I=`expr $I + 1`
done
- remove blank / empty lines from file
grep -v "^$" filename > newfilename
- convert UPPER to lower case
host=`hostname`
lhost=$(echo $host | tr '[:upper:]' '[:lower:]')
enable, disable options in 11.2
Starting with 11.2 the installation procedure changed.
There is no more custom installation – all options will be installed.
To check which options are active run
select * from v$option order by parameter;
Stop all instances using this ORACLE_HOME !
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk part_on ioracle
SQL> select * from v$option where upper(parameter) like '%PARTI%';
PARAMETER                                VALUE
---------------------------------------- ----------------
Partitioning                             TRUE
make -f ins_rdbms.mk part_off ioracle
SQL> select * from v$option where upper(parameter) like '%PARTI%';
PARAMETER                                VALUE
---------------------------------------- ----------------
Partitioning                             FALSE
These options can be disabled or enabled
Option                      Enable     Disable
Partitioning                part_on    part_off
Datamining                  dm_on      dm_off
OLAP                        olap_on    olap_off
Data Vault                  dv_on      dv_off
Real Application Testing    rat_on     rat_off
Is it possible to deinstall/remove a specific component from already installed Oracle Database Home using OUI? (Doc ID 888934.1)
reorganize tables online with dbms_redefinition
Sometimes it's necessary to change table structures or to rebuild tables.
Unfortunately you won't get a downtime to do so… it's business as usual 😉
With the package dbms_redefinition you can modify your tables online (with only a few short exclusive locks).
short summary (a pl/sql sketch follows the list):
1. check for redefinition
2. create an interim table (with your new definition)
3. duplicate the original table to the interim table
in the background an mlog table (if rowid is used) will be updated while both tables are "in use"
4. create dependent objects (e.g. indexes, views …)
5. resync tables
6. finish the redefinition
7. drop interim table
8. possible problems
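A minimal pl/sql sketch of steps 1 to 6, assuming the schema SCOTT, the original table BIGEMP, an interim table BIGEMP_INT and redefinition by primary key (all names are placeholders):
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- 1. check whether the table can be redefined (here: by primary key)
  DBMS_REDEFINITION.CAN_REDEF_TABLE(
    uname => 'SCOTT', tname => 'BIGEMP',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
  -- 3. start copying the original table into the interim table
  --    (step 2: SCOTT.BIGEMP_INT was created beforehand with the new structure)
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname => 'SCOTT', orig_table => 'BIGEMP', int_table => 'BIGEMP_INT',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
  -- 4. copy dependent objects (indexes, triggers, constraints, grants)
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname => 'SCOTT', orig_table => 'BIGEMP', int_table => 'BIGEMP_INT',
    copy_indexes => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    num_errors => l_errors);
  -- 5. resync changes made while both tables were in use
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIGEMP', 'BIGEMP_INT');
  -- 6. swap the tables (only a short exclusive lock is needed)
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIGEMP', 'BIGEMP_INT');
END;
/
-- 7. afterwards drop the interim table; on errors use DBMS_REDEFINITION.ABORT_REDEF_TABLE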