Monday, February 10, 2014

Oracle Database 12c: What happens during CDB startup if one of the PDBs has a media problem?

During Oracle Day 2014, which was held in Baku, Azerbaijan, my friend Mahir Quluzade told me that, according to his test, if one of the PDBs has a media problem and we try to start up the CDB, the CDB will not start and it is completely NOT possible to open it. Interesting. Although I had not tested it myself, I was sure this is by design and that there should be a workaround. I got some free time today and tested it:
SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE


SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/testdb/system01.dbf
/u01/app/oracle/oradata/testdb/sysaux01.dbf
/u01/app/oracle/oradata/testdb/undotbs01.dbf
/u01/app/oracle/oradata/testdb/pdbseed/system01.dbf
/u01/app/oracle/oradata/testdb/users01.dbf
/u01/app/oracle/oradata/testdb/pdbseed/sysaux01.dbf
/u01/app/oracle/oradata/testdb/pdb1/system01.dbf
/u01/app/oracle/oradata/testdb/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/testdb/pdb1/pdb1_users01.dbf
/u01/app/oracle/oradata/testdb/pdb2/system01.dbf
/u01/app/oracle/oradata/testdb/pdb2/sysaux01.dbf
/u01/app/oracle/oradata/testdb/pdb2/pdb2_users01.dbf

12 rows selected.
Now, removing the datafile of the USERS tablespace in PDB1:
SQL> !rm /u01/app/oracle/oradata/testdb/pdb1/pdb1_users01.dbf

SQL> shutdown immediate
ORA-03113: end-of-file on communication channel
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@12cdb ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 12.1.0.1.0 Production on Sat Feb 8 22:30:53 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 2137886720 bytes
Fixed Size                  2290416 bytes
Variable Size             603983120 bytes
Database Buffers         1526726656 bytes
Redo Buffers                4886528 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10: '/u01/app/oracle/oradata/testdb/pdb1/pdb1_users01.dbf'

So, the CDB does not open, but this is not a permanent situation: just switch into the PDB1 container and take that datafile offline:
SQL> alter session set container=PDB1;

Session altered.

SQL> alter database datafile '/u01/app/oracle/oradata/testdb/pdb1/pdb1_users01.dbf' offline;

Database altered.


SQL> alter database open;

Database altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED
PDB2                           MOUNTED


SQL> alter pluggable database all open;

Pluggable database altered.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           READ WRITE
Yes, it works. It would not be logical if we could not open the other PDBs in the same CDB just because one PDB has a problem. By the way, if you apply PSU1 on 12c, you will not hit this situation at all: the CDB will skip opening the problematic PDB and will open the others. If you do hit it, the offline datafile can later be restored and brought back online, as sketched below.
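
Of course, the datafile cannot stay offline forever. A minimal sketch of bringing it back, assuming a valid RMAN backup of the lost file exists (the backup itself is my assumption, not part of the original test):

RMAN> restore datafile 10;
RMAN> recover datafile 10;

SQL> alter session set container=PDB1;
SQL> alter database datafile '/u01/app/oracle/oradata/testdb/pdb1/pdb1_users01.dbf' online;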

Wednesday, July 24, 2013

Oracle 12 High Availability New Features Series: 1 - Partial Global/Local Indexes

Since its first introduction in Oracle 8.0 in 1997, Oracle has enhanced the functionality of Oracle Partitioning with every release, by either adding new partitioning techniques, enhancing scalability, or extending the manageability and maintenance capabilities. Oracle Database 12c is no different, offering enhanced composite partitioning strategies and vastly enhanced partition maintenance operations.

Oracle 12c now provides the ability to index a subset of partitions and to exclude the others.

Local and global indexes can now be created on a subset of the partitions of a table. Partial Global indexes provide more flexibility in index creation for partitioned tables. For example, index segments can be omitted for the most recent partitions to ensure maximum data ingest rates without impacting the overall data model and access for the partitioned object.

Partial Global Indexes save space and improve performance during loads and queries. This feature supports global indexes that include or index a certain subset of table partitions or subpartitions, and exclude the others. This operation is supported using a default table indexing property. When a table is created or altered, a default indexing property can be specified for the table or its partitions.

The CREATE TABLE syntax is extended to support specification of the indexing attributes clause, which is as follows:
[INDEXING { ON | OFF }]

The indexing property determines whether table partitions and subpartitions are included in partial indexes on the table. The default is INDEXING ON.

When you specify the INDEXING clause at the table level, in the table properties clause, you set the default indexing property for the table.

The INDEXING clause may also be specified at the partition and subpartition levels (if omitted, partitions/subpartitions inherit the default indexing property of the table).

The CREATE INDEX syntax is extended to support specification of the FULL or PARTIAL index.

Specify INDEXING FULL to create a full index. In this case the index is decoupled from the table properties. A full index includes all partitions in the underlying table, regardless of their indexing properties.

Specify INDEXING PARTIAL to create a partial index. A partial index includes only partitions in the underlying table with an indexing property of ON.

The default is FULL if neither FULL nor PARTIAL is specified, i.e. Oracle will index all partitions (similar to the behavior of Oracle Database releases prior to Oracle Database 12c).

CREATE TABLE myt1
(
c1 NUMBER(4),
c2 DATE,
c3 NUMBER(8)
)
INDEXING OFF
PARTITION BY RANGE (c2)
(
PARTITION p1 VALUES LESS THAN (TO_DATE('01-APR-2009','DD-MON-YYYY')) INDEXING ON,
PARTITION p2 VALUES LESS THAN (TO_DATE('01-JUL-2010','DD-MON-YYYY')) INDEXING OFF,
PARTITION p3 VALUES LESS THAN (TO_DATE('01-OCT-2011','DD-MON-YYYY')) INDEXING ON,
PARTITION p4 VALUES LESS THAN (TO_DATE('01-MAR-2012','DD-MON-YYYY')),
PARTITION p5 VALUES LESS THAN (TO_DATE('01-MAR-2013','DD-MON-YYYY'))
);

Partitions that do not have an INDEXING clause inherit the default indexing property of the table (INDEXING OFF).

So:
  • Partitions P1 and P3 are included in all partial global indexes.
  • Local index partitions (for indexes created PARTIAL) corresponding to these two table partitions are created usable by default.
  • The other partitions are excluded from all partial global indexes, and their local index partitions (for indexes created PARTIAL) are created unusable.

select partition_name,indexing from user_tab_partitions where table_name='MYT1';
PARTITION_NAME       INDEXING
-------------------- --------------------
P5                   OFF
P4                   OFF
P3                   ON
P2                   OFF
P1                   ON
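
For completeness, the local partial index LP_IDX queried below can be created with a statement along these lines (the indexed column, c1, is arbitrary and just my choice for illustration):

CREATE INDEX lp_idx ON myt1(c1) LOCAL INDEXING PARTIAL;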

select index_name, partition_name,STATUS from user_ind_partitions where index_name = 'LP_IDX';

INDEX_NAME                PARTITION_NAME       STATUS
------------------------- -------------------- --------------------
LP_IDX                    P2                   UNUSABLE
LP_IDX                    P3                   USABLE
LP_IDX                    P1                   USABLE
LP_IDX                    P4                   UNUSABLE
LP_IDX                    P5                   UNUSABLE

Note: Partitions P4 and P5 are not included in any partial global index because the default INDEXING property for the table is OFF and INDEXING ON/OFF has not been specified at the partition level for those partitions; therefore, partitions P4 and P5 inherit the table's default indexing property.

This statement will create a Partial Global index on the MYT1 table.

CREATE INDEX gp_idx ON myt1(c3) GLOBAL INDEXING PARTIAL;

select index_name,indexing from dba_indexes where index_name='GP_IDX';

INDEX_NAME                     INDEXING
------------------------------ --------------------
GP_IDX                         PARTIAL

The GP_IDX index in the example will be created to index only those partitions that have INDEXING ON, and exclude the remaining partitions.

Friday, January 18, 2013

EXADATA Certified Implementation Specialist...




Today I passed 1Z0-536, Oracle Exadata 11g Essentials. It means I have one more Oracle Certification Program status – Oracle Exadata 11g Certified Implementation Specialist!

My passing score was 86%.

Tips about exam:


  • There are 69 questions in total; the passing score is 67%
  • I would say the exam is heavily based on the official preparation course – Exadata and Database Machine Administration Workshop – so try to attend it. If you don't, it will be difficult...
  • Interestingly, most of the questions were related to IORM (I/O Resource Manager)
  • There were no exhibits on the exam
  • It seems most of the questions related to the X2 family, not X3...

Good luck!

Friday, June 1, 2012

Oracle 11g: SecureFiles


  Starting from Oracle 11g, there is a new feature called SecureFiles. Although it is a very useful feature, few people are aware of it or have implemented it. I don't know why ;)


 SecureFiles is a completely new architecture inside Oracle Database 11g for handling files and other unstructured data – Word documents, PDFs, images, video and so on. Traditionally, we stored unstructured data either in LOB/BLOB data types or in the OS file system with a reference kept in the database. Before Oracle 11g, storing unstructured data in the OS or in LOB data types carried many drawbacks: no smart backup mechanism, no storage saving, no transaction/read consistency algorithm, no fine-grained security control, no auditing and so on. In general, most of the database features could not be applied to it.


With SecureFiles, however, a set of advanced features becomes available; these features apply only to SecureFiles, not to older LOBs (BasicFiles). The new features in SecureFiles - Deduplication, Compression and Encryption - can be set up independently or in combination, as in the sketch that follows.
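
A quick illustration of combining the options in one LOB storage clause (my own example, not from the rest of this post; ENCRYPT additionally requires a configured TDE wallet/keystore):

CREATE TABLE secure_all_tab (
  id   NUMBER,
  doc  BLOB
)
LOB(doc) STORE AS SECUREFILE (
  DEDUPLICATE
  COMPRESS HIGH
  ENCRYPT USING 'AES256'
  TABLESPACE users
);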


Using SecureFiles:
First of all, there is a new initialization parameter that enables the DBA to control SecureFiles usage inside the database. From the doc: http://docs.oracle.com/cd/B28359_01/appdev.111/b28393/adlob_smart.htm#BABJFIBC
Parameter Name: db_securefile
Parameter Type: text
Allowable Values: { ALWAYS | FORCE | PERMITTED | NEVER | IGNORE }
Default Value: PERMITTED
Description: This parameter enables the database administrator to either allow SECUREFILE LOBs to be created (PERMITTED), disallow SECUREFILE LOBs from being created going forward (NEVER), force all LOBs created going forward to be SECUREFILE LOBs (FORCE), attempt to create SECUREFILE LOBs but fall back to BASICFILE LOBs (ALWAYS), or disallow SECUREFILE LOBs and ignore any errors that would otherwise be caused by forcing BASICFILE LOBs with SECUREFILE options (IGNORE). Please read more details about this parameter in the documentation.
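
You can check the current setting and, if needed, change it dynamically (just an illustration; pick the value that matches your own policy):

SQL> show parameter db_securefile

SQL> alter system set db_securefile = 'ALWAYS';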


Example:


Prior to Oracle 11g, we would have defined the table as follows:


CREATE TABLE basic_tab (
  id      NUMBER,
  myblob  BLOB
)
LOB(myblob) STORE AS (tablespace users);


In Oracle 11g, for backward compatibility, you can define it explicitly as a BasicFile, as below:


CREATE TABLE basic_tab (
  id      NUMBER,
  myblob  BLOB
)
LOB(myblob) STORE AS BASICFILE (tablespace users);




If you want to store the LOB as a SecureFile, all you have to do is place a clause "store as securefile" in the table creation, as shown below:


CREATE TABLE secure_tab (
  id      NUMBER,
  myblob  BLOB
)
LOB(myblob) STORE AS SECUREFILE (tablespace users);






SQL> CREATE TABLE basic_tab (
  2  id number,
  3  myblob BLOB
  4  )
  5  LOB(myblob) store as BASICFILE (tablespace users)
  6  /


Table created.


SQL> CREATE TABLE secure_tab(
  2  id number,
  3  myblob BLOB
  4  )
  5  LOB(myblob) store as SECUREFILE (tablespace users)
  6  /


Table created.


SQL> 




Note: The tablespace where you create the SecureFile must use Automatic Segment Space Management (ASSM); otherwise you will get the error below:


SQL> CREATE TABLESPACE mytbs1
  2  DATAFILE '/u01/app/oracle/oradata/ocmdb/mytbs01.dbf' SIZE 10M
  3  EXTENT MANAGEMENT LOCAL
  4  SEGMENT SPACE MANAGEMENT MANUAL;


Tablespace created.


SQL> CREATE TABLE secure_tab_ms(
  2  id number, 
  3  myblob BLOB
  4  )
  5  LOB(myblob) store as securefile (tablespace mytbs1);
CREATE TABLE secure_tab_ms(
*
ERROR at line 1:
ORA-43853: SECUREFILE lobs cannot be used in non-ASSM tablespace "MYTBS1"
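
In contrast, a tablespace created with Automatic Segment Space Management (the default for locally managed tablespaces; the names below are mine) accepts the SecureFile without complaint:

CREATE TABLESPACE mytbs2
DATAFILE '/u01/app/oracle/oradata/ocmdb/mytbs02.dbf' SIZE 10M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLE secure_tab_assm (
  id      NUMBER,
  myblob  BLOB
)
LOB(myblob) STORE AS SECUREFILE (tablespace mytbs2);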




Your applications do not need to be changed in order to load data into SecureFile LOBs; you work with them exactly the same way as with pre-11g LOBs (BasicFiles).




Deduplication:


In my opinion, deduplication is the most popular feature of SecureFiles. Imagine you have 10 records, each with a BLOB, and 6 of the BLOBs are identical. If it were possible to store that BLOB only once and keep only references to that copy in the other 5 records, it would reduce space consumption. This is easy with OS files but was not possible with Oracle Database 10g LOBs. With SecureFiles it is actually trivial, via a property called DEDUPLICATE. You can specify it during table creation or modify it later. Here, I will insert the same set of records into 2 tables: one with the DEDUPLICATE option and another with KEEP_DUPLICATES. Then we will see how much space is used in each case:


CREATE TABLE keep_duplicates_tab (
  id      NUMBER,
  myblob  CLOB
)
LOB(myblob) STORE AS SECUREFILE (KEEP_DUPLICATES tablespace users);


CREATE TABLE deduplicate_tab (
  id      NUMBER,
  myblob  CLOB
)
LOB(myblob) STORE AS SECUREFILE (DEDUPLICATE tablespace users);






DECLARE
  l_blob CLOB := RPAD('X', 10000, 'X');
BEGIN
  FOR i IN 1 .. 1000 LOOP
    INSERT INTO keep_duplicates_tab VALUES (i, l_blob);
  END LOOP;
  COMMIT;

  FOR i IN 1 .. 1000 LOOP
    INSERT INTO deduplicate_tab VALUES (i, l_blob);
  END LOOP;
  COMMIT;
END;
/




SQL> select l.table_name, l.segment_name, s.bytes  from dba_lobs l, user_segments s where l.table_name IN ('DEDUPLICATE_TAB', 'KEEP_DUPLICATES_TAB') AND S.SEGMENT_NAME = L.SEGMENT_NAME;


TABLE_NAME                     SEGMENT_NAME                        BYTES
------------------------------ ------------------------------ ----------
KEEP_DUPLICATES_TAB            SYS_LOB0000071365C00002$$        71303168
DEDUPLICATE_TAB                SYS_LOB0000071368C00002$$          327680




You can also reverse the deduplication process:


SQL> alter table deduplicate_tab modify lob(myblob) (keep_duplicates);


Table altered.


SQL> select l.table_name, l.segment_name, s.bytes  from dba_lobs l, user_segments s where l.table_name IN ('DEDUPLICATE_TAB', 'KEEP_DUPLICATES_TAB') AND S.SEGMENT_NAME = L.SEGMENT_NAME;


TABLE_NAME                     SEGMENT_NAME                        BYTES
------------------------------ ------------------------------ ----------
KEEP_DUPLICATES_TAB            SYS_LOB0000071365C00002$$        71303168
DEDUPLICATE_TAB                SYS_LOB0000071368C00002$$         9764864






LOB Compression




Another useful feature of SecureFiles is COMPRESS. This option enables compression of LOB contents at the table or partition level. The level of compression is indicated using the optional MEDIUM and HIGH keywords. If no compression level is specified, MEDIUM is used:






CREATE TABLE nocompress_tab (
  id      NUMBER,
  myblob  CLOB
)
LOB(myblob) STORE AS SECUREFILE nocompress_lob (NOCOMPRESS tablespace users);


CREATE TABLE compress_tab (
  id      NUMBER,
  myblob  CLOB
)
LOB(myblob) STORE AS SECUREFILE (COMPRESS HIGH tablespace users);


DECLARE
  l_blob CLOB := RPAD('X', 10000, 'X');
BEGIN
  FOR i IN 1 .. 1000 LOOP
    INSERT INTO nocompress_tab VALUES (i, l_blob);
  END LOOP;
  COMMIT;

  FOR i IN 1 .. 1000 LOOP
    INSERT INTO compress_tab VALUES (i, l_blob);
  END LOOP;
  COMMIT;
END;
/




SQL> select l.table_name, l.segment_name, s.bytes  from dba_lobs l, user_segments s where l.table_name IN ('COMPRESS_TAB', 'NOCOMPRESS_TAB') AND S.SEGMENT_NAME = L.SEGMENT_NAME;


TABLE_NAME                     SEGMENT_NAME                        BYTES
------------------------------ ------------------------------ ----------
NOCOMPRESS_TAB                 SYS_LOB0000071389C00002$$        71303168
COMPRESS_TAB                   SYS_LOB0000071385C00002$$          131072




You can also reverse the compression process:




ALTER TABLE compress_tab MODIFY LOB(myblob) (NOCOMPRESS);


SQL>  select l.table_name, l.segment_name, s.bytes  from dba_lobs l, user_segments s where l.table_name IN ('COMPRESS_TAB', 'NOCOMPRESS_TAB') AND S.SEGMENT_NAME = L.SEGMENT_NAME;


TABLE_NAME                     SEGMENT_NAME                        BYTES
------------------------------ ------------------------------ ----------
NOCOMPRESS_TAB                 SYS_LOB0000071389C00002$$        71303168
COMPRESS_TAB                   SYS_LOB0000071385C00002$$        76808192

Thursday, May 5, 2011

ORACLE Database Firewall

    Oracle Database Firewall provides next-generation technology for securing and protecting data in databases. Database Firewall uses a very different approach, which offers greater levels of automation, simplicity and security than traditional data security software. The core technology interprets the grammar of SQL and works by analyzing the meaning of the statements that database clients issue. This approach provides a significantly higher degree of protection than traditional first-generation database monitoring technology, since it is not dependent on the source of an attack or on recognizing the syntax of known security threats.
Oracle Database Firewall technology easily recognizes injected SQL and can block attempted attacks that fall outside an organization's security policy. The technology is so accurate in understanding the meaning of the SQL transaction that false positives and false negatives are a thing of the past.
Database Firewall is designed to work on the network, providing in-line security or out-of-band monitoring for the largest of enterprise deployments. Optional host-based agents provide low-impact local monitoring capabilities.


About the Oracle Database Firewall System Architecture
The typical Oracle Database Firewall architecture has the following main components:
    The database network, containing the database server and its clients: You are not required to install Oracle Database Firewall onto the database server or clients. However, if needed, you can install the Database Firewall Local Monitoring on the database server, which enables the Database Firewall to monitor SQL traffic originating from the users or processes that have direct access (for example, through the console) to the database computer.
    The Database Firewall: This is the server that runs the Oracle Database Firewall software. Each Database Firewall collects SQL data from SQL databases, and then sends this SQL data to the Database Firewall Management Server to be analyzed in reports. After the Database Firewall sends the SQL data to the Management Server, it deletes it locally. The SQL data is then stored in the Management Server.
    Database Firewall applications and other third-party applications: These applications perform system configuration, monitoring, administration, and reporting. If necessary, you can use a single computer to operate these applications. However, typically, there is a separate computer for each application, because applications are often used by different people or from different locations.
Examples of Database Firewall applications include the Oracle Database Firewall Administration Console and Oracle Database Analyzer.


You must use a Database Firewall Management Server to control one or more Database Firewall installations.



Documentation: http://download.oracle.com/docs/cd/E20465_01/doc/nav/portal_booklist.htm


In this tutorial, I use 4 Virtual machines:




ORADB11g - Test Oracle Database (on OELinux) which will be monitored by Database Firewall.

FWMS - Firewall Management Server (on OELinux)

DBFW - Standalone Database Firewall (on OELinux)

Analyzer - Windows XP client, which will be used for browser based applications like Administration Console.


Now, I will go through multiple parts, one for each topic: Installation, Integration, Monitoring, Auditing and so on.

Future parts will be added as soon as possible, covering topics like "User Role Auditing", "SQL Injection" and so on.


Part 1: Installation


Download MP4 of this video



 Part 2: Firewall Management Server Installation


Download MP4 of this video



Part 3: Firewall Management Server Initial Configuration


Download MP4 of this video


Part 4: Integration Standalone Firewall With Management Server


Download MP4 of this video


Part 5: Creating Enforcement Point for Monitoring Database



Download MP4 of this video

Part 6: Stored Procedure Auditing


Download MP4 of this video

Part 7: User Role Auditing


Download MP4 of this video


Next parts coming soon...













Wednesday, February 2, 2011

Oracle 11g: Easy way getting current session's trace file name

As you know, sometimes we need to trace a session in order to troubleshoot application performance and so on. Tracing a session generates a server-side file named like InstanceName_ora_SessionProcessID.trc.

To get these values and the location of the trace file, especially the process ID, we used to have to join V$PROCESS with V$SESSION, along the lines of the sketch below.
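
This is just a sketch of the kind of pre-11g query I mean (it assumes a dedicated server process and a Unix-style path separator; on Windows the separator would be '\'):

select p.value || '/' || lower(i.instance_name) || '_ora_' || pr.spid || '.trc' as trace_file
from v$parameter p, v$instance i, v$process pr, v$session s
where p.name   = 'user_dump_dest'
  and pr.addr  = s.paddr
  and s.audsid = sys_context('USERENV', 'SESSIONID');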

In Oracle 11g, however, it is possible to get the current session's trace file name without writing a custom SQL script:

SQL> desc v$diag_info
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 INST_ID                                            NUMBER
 NAME                                               VARCHAR2(64)
 VALUE                                              VARCHAR2(512)


SQL> select value from v$diag_info where name='Default Trace File';

VALUE
--------------------------------------------------------------------------------

c:\app\oracle\diag\rdbms\oradbwin\oradbwin\trace\oradbwin_ora_1820.trc


So, if you trace this session, check the above location for the trace file. Tracing itself can be switched on, for example, as in the sketch below.
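
A small sketch of enabling and disabling tracing for the current session with DBMS_MONITOR (one of several possible ways; the waits/binds settings are just an illustration):

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- run the statements you want traced, then:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;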

Friday, December 10, 2010

Oracle 11g new feature: new data dictionary view for listing users with default password - DBA_USERS_WITH_DEFPWD

As you know, after creating a new database, there are some database users whose passwords should be changed for security reasons.

Before 11g, we had to read the documentation to check which users are on this list.

However, starting from 11g, we can easily find those users using the DBA_USERS_WITH_DEFPWD view:

SQL> desc DBA_USERS_WITH_DEFPWD
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 USERNAME                                  NOT NULL VARCHAR2(30)


SQL> select username from DBA_USERS_WITH_DEFPWD;

USERNAME
------------------------------
DIP
MDSYS
XS$NULL
SPATIAL_WFS_ADMIN_USR
CTXSYS
OLAPSYS
OUTLN
OWBSYS
SPATIAL_CSW_ADMIN_USR
EXFSYS
ORACLE_OCM

USERNAME
------------------------------
SCOTT
MDDATA
ORDPLUGINS
ORDSYS
PM
APPQOSSYS
BI
IX
ORDDATA
XDB
SI_INFORMTN_SCHEMA

USERNAME
------------------------------
WMSYS

23 rows selected.


Now, let's change SCOTT's password and check whether SCOTT still appears in this list:

SQL> alter user scott identified by newpassword;

User altered.

SQL> select username from DBA_USERS_WITH_DEFPWD;

USERNAME
------------------------------
DIP
XS$NULL
MDSYS
SPATIAL_WFS_ADMIN_USR
OUTLN
CTXSYS
OLAPSYS
SPATIAL_CSW_ADMIN_USR
OWBSYS
ORACLE_OCM
EXFSYS

USERNAME
------------------------------
ORDSYS
ORDPLUGINS
MDDATA
PM
APPQOSSYS
XDB
ORDDATA
IX
BI
WMSYS
SI_INFORMTN_SCHEMA

22 rows selected.

Well, now SCOTT has disappeared from this list. If we change the password back to the default, SCOTT will appear again:

SQL> alter user scott identified by tiger;

User altered.

SQL> select username from DBA_USERS_WITH_DEFPWD;

USERNAME
------------------------------
DIP
XS$NULL
MDSYS
SPATIAL_WFS_ADMIN_USR
OUTLN
CTXSYS
OLAPSYS
SPATIAL_CSW_ADMIN_USR
OWBSYS
ORACLE_OCM
EXFSYS

USERNAME
------------------------------
SCOTT
ORDSYS
ORDPLUGINS
MDDATA
PM
APPQOSSYS
XDB
ORDDATA
IX
BI
WMSYS

USERNAME
------------------------------
SI_INFORMTN_SCHEMA

23 rows selected.

SQL>
Enjoy!

Friday, December 3, 2010

Oracle 11g new feature: PRIMARY_DB_UNIQUE_NAME column in V$DATABASE

You can now find the DB_UNIQUE_NAME of the primary database from the standby database by querying the new PRIMARY_DB_UNIQUE_NAME column in the V$DATABASE view.

Also, Oracle Data Guard release 11g ensures each database's DB_UNIQUE_NAME is different.

This means that, after upgrading to 11g, any databases with the same DB_UNIQUE_NAME will not be able to communicate with each other.
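
A quick check on the standby looks like this (just a sketch; as the documentation quoted at the end of this post notes, the column is NULL if the standby has not received any current redo since it was last started):

select db_unique_name, primary_db_unique_name, database_role, open_mode
from v$database;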

SQL> desc v$database
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 DBID                                               NUMBER
 NAME                                               VARCHAR2(9)
 CREATED                                            DATE
 RESETLOGS_CHANGE#                                  NUMBER
 RESETLOGS_TIME                                     DATE
 PRIOR_RESETLOGS_CHANGE#                            NUMBER
 PRIOR_RESETLOGS_TIME                               DATE
 LOG_MODE                                           VARCHAR2(12)
 CHECKPOINT_CHANGE#                                 NUMBER
 ARCHIVE_CHANGE#                                    NUMBER
 CONTROLFILE_TYPE                                   VARCHAR2(7)
 CONTROLFILE_CREATED                                DATE
 CONTROLFILE_SEQUENCE#                              NUMBER
 CONTROLFILE_CHANGE#                                NUMBER
 CONTROLFILE_TIME                                   DATE
 OPEN_RESETLOGS                                     VARCHAR2(11)
 VERSION_TIME                                       DATE
 OPEN_MODE                                          VARCHAR2(20)
 PROTECTION_MODE                                    VARCHAR2(20)
 PROTECTION_LEVEL                                   VARCHAR2(20)
 REMOTE_ARCHIVE                                     VARCHAR2(8)
 ACTIVATION#                                        NUMBER
 SWITCHOVER#                                        NUMBER
 DATABASE_ROLE                                      VARCHAR2(16)
 ARCHIVELOG_CHANGE#                                 NUMBER
 ARCHIVELOG_COMPRESSION                             VARCHAR2(8)
 SWITCHOVER_STATUS                                  VARCHAR2(20)
 DATAGUARD_BROKER                                   VARCHAR2(8)
 GUARD_STATUS                                       VARCHAR2(7)
 SUPPLEMENTAL_LOG_DATA_MIN                          VARCHAR2(8)
 SUPPLEMENTAL_LOG_DATA_PK                           VARCHAR2(3)
 SUPPLEMENTAL_LOG_DATA_UI                           VARCHAR2(3)
 FORCE_LOGGING                                      VARCHAR2(3)
 PLATFORM_ID                                        NUMBER
 PLATFORM_NAME                                      VARCHAR2(101)
 RECOVERY_TARGET_INCARNATION#                       NUMBER
 LAST_OPEN_INCARNATION#                             NUMBER
 CURRENT_SCN                                        NUMBER
 FLASHBACK_ON                                       VARCHAR2(18)
 SUPPLEMENTAL_LOG_DATA_FK                           VARCHAR2(3)
 SUPPLEMENTAL_LOG_DATA_ALL                          VARCHAR2(3)
 DB_UNIQUE_NAME                                     VARCHAR2(30)
 STANDBY_BECAME_PRIMARY_SCN                         NUMBER
 FS_FAILOVER_STATUS                                 VARCHAR2(22)
 FS_FAILOVER_CURRENT_TARGET                         VARCHAR2(30)
 FS_FAILOVER_THRESHOLD                              NUMBER
 FS_FAILOVER_OBSERVER_PRESENT                       VARCHAR2(7)
 FS_FAILOVER_OBSERVER_HOST                          VARCHAR2(512)
 CONTROLFILE_CONVERTED                              VARCHAR2(3)
 PRIMARY_DB_UNIQUE_NAME                             VARCHAR2(30)
 SUPPLEMENTAL_LOG_DATA_PL                           VARCHAR2(3)
 MIN_REQUIRED_CAPTURE_CHANGE#                       NUMBER

SQL>

Quote from the documentation (http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/dynviews_1086.htm#I1030047):


PRIMARY_DB_UNIQUE_NAME VARCHAR2(30): For any Standby database (Physical, Logical, or Reporting), this column will contain the DB_UNIQUE_NAME of the Primary database that this Standby last received current redo from.

If this standby has not received any current redo since last being started, then this column will be null.

For a Primary database that had previously been a Standby, this column will contain the DB_UNIQUE_NAME of the last Primary that this database received current redo from while acting as a Standby.

For a Primary database that has never been a Standby, this column will be null.

Thursday, November 25, 2010

Oracle 11g new feature in DataPump Export: Overwrite existing dump file

As you know, before 11g Data Pump Export would return an error if you specified a dump file name that already exists. Starting with 11g, the REUSE_DUMPFILES parameter allows you to override that behavior and reuse a dump file name.

Example:
SQL> create table myt1 as select * from all_objects where rownum<11;

Table created.

SQL> create directory mydir1 as 'c:\oradump';

Directory created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Pr
oduction
With the Partitioning option

C:\>expdp system/oracle dumpfile=myt1.dmp directory=mydir1 tables=myt1

Export: Release 11.2.0.1.0 - Production on Thu Nov 25 12:35:21 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning option
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=myt1.dmp dire
ctory=mydir1 tables=myt1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/PRE_TABLE_ACTION
. . exported "SYSTEM"."MYT1"                             11.43 KB      10 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  C:\ORADUMP\MYT1.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 12:35:41

Now that we have created the dump file for the first time, let's try the same command a second time:

C:\>expdp system/oracle dumpfile=myt1.dmp directory=mydir1 tables=myt1

Export: Release 11.2.0.1.0 - Production on Thu Nov 25 12:35:47 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Produc
tion
With the Partitioning option
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "c:\oradump\myt1.dmp"
ORA-27038: created file already exists
OSD-04010: option specified, file already exists

Well, that (ORA-27038: created file already exists) is expected. Using the new expdp parameter REUSE_DUMPFILES solves the issue:

C:\>expdp system/oracle dumpfile=myt1.dmp directory=mydir1 tables=myt1 reuse_dumpfiles=y

Export: Release 11.2.0.1.0 - Production on Thu Nov 25 12:36:08 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Produc
tion
With the Partitioning option
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=myt1.dmp dire
ctory=mydir1 tables=myt1 reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/PRE_TABLE_ACTION
. . exported "SYSTEM"."MYT1"                             11.43 KB      10 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  C:\ORADUMP\MYT1.DMP
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 12:36:25

C:\>


Thursday, July 29, 2010

A new thing I learned today: Truncating a table makes an unusable index valid

 Today I was rereading the 11gR2 new features regarding unusable indexes, and I came across the following in the documentation: 11gR2 doc

Unusable indexes

An unusable index is ignored by the optimizer and is not maintained by DML. One reason to make an index unusable is if you want to improve the performance of bulk loads. (Bulk loads go more quickly if the database does not need to maintain indexes when inserting rows.) Instead of dropping the index and later recreating it, which requires you to recall the exact parameters of the CREATE INDEX statement, you can make the index unusable, and then just rebuild it. You can create an index in the unusable state, or you can mark an existing index or index partition unusable. The database may mark an index unusable under certain circumstances, such as when there is a failure while building the index. When one partition of a partitioned index is marked unusable, the other partitions of the index remain valid.

An unusable index or index partition must be rebuilt, or dropped and re-created, before it can be used. Truncating a table makes an unusable index valid.

Truncating a table makes an unusable index valid? Interesting. Let's try:
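
A minimal sketch of the kind of test I have in mind (table and index names are mine):

create table trunc_test as select * from all_objects where rownum <= 100;
create index trunc_test_idx on trunc_test(object_id);
alter index trunc_test_idx unusable;
select status from user_indexes where index_name = 'TRUNC_TEST_IDX';   -- expected: UNUSABLE
truncate table trunc_test;
select status from user_indexes where index_name = 'TRUNC_TEST_IDX';   -- expected: VALID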

Tuesday, July 13, 2010

ORA-15063: ASM discovered an insufficient number of disks for diskgroup

After a sudden restart of the DB server, an interesting issue happened with the ASM instance today.

OS: Win2003 32 bit
Oracle: 10.2.0.1

SQL> startup
ASM instance started
...
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "RECO"
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"

Tuesday, June 22, 2010

ORA-08106: cannot create journal table ... and ORA-00600

Environment:

OS: Win2003 32 bit
Oracle: 10.2.0.1

Problem description: My colleague told me that he could not rebuild an index online. An index partition's datafile had been lost. When he tried to rebuild the index in online mode, Oracle raised the error ORA-08106: cannot create journal table XXX.SYS_JOURNAL_123473.

Solution:

bash-2.05$ oerr ora 08106
08106, 00000, "can not create journal table %s.%s"
// *Cause:  The online index builder could not create its journal table 
// *Action: Rename the conflicting table or rerun the SQL statement. There
// *        may be a concurrent online index rebuild on the same object.

It turned out he had already executed the same operation earlier; however, because of a network problem with the database server, that operation had been terminated.

As the error description suggests, I followed the steps below:

sql>drop table XXX.SYS_JOURNAL_123473 purge;

Table dropped.

Now, retrying the rebuild operation:

sql>alter index XXX.CALLS_ONCD_UK rebuild partition CDR200809_4 tablespace  TS_CALLS_IDX_CDR_200809_4 online;

I received an error like: ORA-00600: internal error code, arguments: [kkdlfjou_1], [], [], [], [], [], [], []

Interesting, what could be done? After investigating on Metalink, it seems I had done something wrong.

Removal of the journal table is considered a data dictionary patching exercise, so the standard recommendations for data dictionary patching apply:
1. The patching should be performed while the database is in restricted mode
2. The instance should be stopped/started (bounced) after the patching is completed


I had performed neither of the above steps.

Fortunately, in such a case restarting the database instance should solve the problem.

Indeed, after bouncing the database, I was able to rebuild the index partition online.

Thursday, June 3, 2010

Standby database: ORA-00600: internal error code, arguments: [3020]

Today I received an error on the standby database after adding a new datafile to the UNDO tablespace at the primary site.


This is Oracle SE 9.2.0.5 on a Sun box, used by a vendor application:

Monday, May 17, 2010

A real-world example of what using hidden parameters without Oracle Support's direct control can cause

As we know, there are many undocumented initialization parameters in Oracle Database, sometimes called "hidden parameters". These parameters are a critical tool for Oracle Support engineers in emergency situations affecting customers. At the same time, Oracle strongly recommends using these parameters only under Oracle Support's direct control and confirmation. Otherwise, the database may become corrupted and Oracle may not support it.

Here, I would like to describe a real-world example of what happened to me when I used a hidden parameter without informing Oracle Support and getting their confirmation.

Database: 10.2.0.4
OS: HP-UX
Database size: 2 TB
Business availability: 24*7

22:40 – The database server rebooted itself. During OS startup, Oracle tried to start up and open the database. The database mounted, but unfortunately there was a problem during crash recovery:

From alert log:

RECOVERY OF THREAD 1 STUCK AT BLOCK 902035 OF FILE 6
Sat Apr 10 23:01:23 2010
Aborting crash recovery due to error 1172

Wednesday, May 12, 2010

How to get list of hidden parameters in ORACLE?

Hidden parameters should be used only under Oracle Support's control. Otherwise you can be left with a corrupted database. In the near future I will post a real-world example of what using them without Oracle Support's confirmation caused.

However, I was interested in how many and which hidden parameters are available in Oracle. Below is the script:

SELECT
  a.ksppinm  "Parameter",
  b.ksppstvl "Session Value",
  c.ksppstvl "Instance Value",
  a.ksppdesc "Description"
FROM
  x$ksppi  a,
  x$ksppcv b,
  x$ksppsv c
WHERE
  a.indx = b.indx
  AND a.indx = c.indx
  AND a.ksppinm LIKE '/_%' escape '/';

On an HP-UX box running Oracle 10.2.0.4, it returned 1236(!) parameters. In 11gR2 on a Linux box, it returns 2057. You can also look at a single hidden parameter, as in the sketch below.
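
If you are only interested in one particular hidden parameter, a query along these lines works (the parameter name here, '_small_table_threshold', is just an illustration; run it as SYS, since the X$ tables are not visible to ordinary users):

SELECT a.ksppinm "Parameter", c.ksppstvl "Instance Value", a.ksppdesc "Description"
FROM x$ksppi a, x$ksppsv c
WHERE a.indx = c.indx
  AND a.ksppinm = '_small_table_threshold';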