Oracle Database 12c New Features
Hi Friends,
Here is a collection of important Oracle Database 12c new features (Oracle 12c topics):
- Online migration of an active data file
- Online table partition or sub-partition migration
- Invisible column
- Multiple indexes on the same column
- DDL logging
- Temporary undo ins and outs
- New backup user privilege
- How to execute SQL statement in RMAN
- Table level recovery in RMAN
- Restricting PGA size
1. Online rename and relocation of an active data file
Unlike in previous releases, renaming or relocating a data file in Oracle Database 12c R1 no longer requires a number of steps, i.e. putting the tablespace into READ ONLY mode followed by taking the data file offline. In 12c R1, a data file can be renamed or moved online simply using the ALTER DATABASE MOVE DATAFILE SQL statement. While the data file is being moved, end users can run queries, DML and DDL without interruption. Additionally, data files can be migrated between storage types, e.g. from non-ASM to ASM and vice versa.
A) Rename a data file:
ALTER DATABASE MOVE DATAFILE '/u00/data/users01.dbf' TO '/u00/data/users_01.dbf';
B) Migrate a data file from non-ASM to ASM:
ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '+DG_DATA';
C) Migrate a data file from one ASM disk group to another:
ALTER DATABASE MOVE DATAFILE '+DG_DATA/DBNAME/DATAFILE/users_01.dbf' TO '+DG_DATA_02';
D) Overwrite the data file with the same name, if it exists at the new location:
ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '/u00/data_new/users_01.dbf' REUSE;
E) Copy the file to a new location whilst retaining the old copy in the old location:
ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '/u00/data_new/users_01.dbf' KEEP;
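While a move is in progress, you can keep an eye on it from another session; online data file moves typically show up in V$SESSION_LONGOPS (the OPNAME filter below is an assumption, so adjust it to the operation name you actually see in the view):
SQL> SELECT sid, opname, sofar, totalwork
     FROM v$session_longops
     WHERE opname LIKE '%data file move%'
     AND sofar < totalwork;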
2. Online migration of table partition or sub-partition
Migrating a table partition or sub-partition to a different tablespace no longer requires a complex procedure in Oracle 12c R1. In a similar way to how a heap (non-partitioned) table could be moved online in previous releases, a table partition or sub-partition can now be moved to a different tablespace either online or offline. When the ONLINE clause is specified, all DML operations can continue without interruption on the partition or sub-partition involved. In contrast, no DML operations are allowed while the partition or sub-partition is moved offline.
Here are some working examples:
SQL> ALTER TABLE table_name MOVE PARTITION|SUBPARTITION partition_name TABLESPACE tablespace_name;
SQL> ALTER TABLE table_name MOVE PARTITION|SUBPARTITION partition_name TABLESPACE tablespace_name UPDATE INDEXES ONLINE;
The first example moves a table partition or sub-partition to a new tablespace offline. The second example moves a table partition or sub-partition online while maintaining any local and global indexes on the table; additionally, no DML operation is interrupted when the ONLINE clause is specified.
Important notes:
The UPDATE INDEXES clause prevents any local or global indexes on the table from becoming unusable.
The restrictions that apply to online table moves also apply here.
A locking mechanism is involved in completing the procedure; it may also cause performance degradation and generate a large amount of redo, depending on the size of the partition or sub-partition.
3. Invisible columns
In Oracle 11g R1, Oracle introduced a couple of good enhancements in the form of invisible indexes and virtual columns. Carrying that forward, the invisible column concept has been introduced in Oracle 12c R1. I still remember that in previous releases, to hide important columns from being displayed by generic queries, we used to create a view that hid the required columns, or apply some sort of security condition.
In 12c R1, you can now have an invisible column in a table. When a column is defined as invisible, it won't appear in generic queries (such as SELECT *) or in DESCRIBE output, unless the column is explicitly referenced in the SQL statement or condition. It is pretty easy to add an invisible column, or to switch a column between invisible and visible:
SQL> CREATE TABLE emp (eno number(6), ename varchar2(40), sal number(9) INVISIBLE);
SQL> ALTER TABLE emp MODIFY (sal visible);
You must explicitly refer to the invisible column by name in an INSERT statement in order to insert data into it. A virtual column or partition key column can also be defined as invisible. However, temporary tables, external tables and cluster tables do not support invisible columns.
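For instance, while the SAL column is still invisible (i.e. before the ALTER above makes it visible), the behaviour looks like this; the sample values are purely illustrative:
SQL> INSERT INTO emp (eno, ename, sal) VALUES (100, 'SCOTT', 9000);   -- invisible column must be named explicitly
SQL> SELECT * FROM emp;                  -- SAL is not displayed
SQL> SELECT eno, ename, sal FROM emp;    -- SAL is displayed when referenced explicitly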
4. Multiple indexes on the same column
Before Oracle 12c, you couldn't create multiple indexes on the same column or set of columns in any form. For example, if you have an index on column {a} or columns {a,b}, you can't create another index on the same column or set of columns in the same order. In 12c, you can have multiple indexes on the same column or set of columns as long as the index type is different. However, only one of the indexes is usable/visible at a given time. To test the invisible indexes, you need to set optimizer_use_invisible_indexes=true.
Here's an example:
SQL> CREATE INDEX emp_ind1 ON EMP(ENO,ENAME);
SQL> CREATE BITMAP INDEX emp_ind2 ON EMP(ENO,ENAME) INVISIBLE;
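To experiment with the invisible index, you can let the optimizer consider invisible indexes at the session level, or swap which of the two indexes is visible; a minimal sketch:
SQL> ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;   -- optimizer may now use invisible indexes
SQL> ALTER INDEX emp_ind1 INVISIBLE;                             -- hide the B-tree index
SQL> ALTER INDEX emp_ind2 VISIBLE;                               -- expose the bitmap index instead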
5. DDL logging
There was no direct option available to log DDL actions in previous releases. In 12c R1, you can now log DDL actions into xml and log files. This is very useful for knowing when a drop or create command was executed and by whom. The ENABLE_DDL_LOGGING initialization parameter must be configured in order to turn on this feature; the parameter can be set at the database or session level. When this parameter is enabled, all DDL commands are logged in an xml and a log file under the $ORACLE_BASE/diag/rdbms/DBNAME/log|ddl location. The xml file contains information such as the DDL command, IP address, timestamp etc. This helps to identify when a user or table was dropped, or when a DDL statement was triggered.
To enable DDL logging
SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING=TRUE;
The following DDL statements are likely to be recorded in the xml/log file:
CREATE|ALTER|DROP|TRUNCATE TABLE
DROP USER
CREATE|ALTER|DROP PACKAGE|FUNCTION|VIEW|SYNONYM|SEQUENCE
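If you want to double-check that the feature is switched on before relying on the logs, a quick query from SQL*Plus will do:
SQL> SHOW PARAMETER enable_ddl_logging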
6. Temporary Undo
Each Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO and TEMP, each used for a different purpose within the database. Before Oracle 12c R1, undo records generated by temporary tables were stored in the undo tablespace, much like the undo records of a regular/persistent table. With the temporary undo feature in 12c R1, the undo records of temporary tables can now be stored in the temporary tablespace instead of the undo tablespace. The prime benefits of temporary undo include reduced undo tablespace usage and less redo generation, as that undo information is no longer logged in the redo logs. You have the flexibility to enable the temporary undo option at either the session level or the database level.
Enabling temporary undo
To be able to use the new feature, the following needs to be set:
Compatibility parameter must be set to 12.0.0 or higher
Enable TEMP_UNDO_ENABLED initialization parameter
Since the temporary undo records are now stored in the temporary tablespace, ensure the temporary tablespace has sufficient space
For the session level, you can use: ALTER SESSION SET TEMP_UNDO_ENABLED=TRUE;
Query temporary undo information
The dictionary views listed below can be used to query information and statistics about temporary undo data:
V$TEMPUNDOSTAT
DBA_HIST_UNDOSTAT
V$UNDOSTAT
To disable the feature, you simply need to set the following:
SQL> ALTER SYSTEM|SESSION SET TEMP_UNDO_ENABLED=FALSE;
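Putting it all together, here is a minimal end-to-end sketch; the global temporary table and its data are purely hypothetical:
SQL> ALTER SESSION SET TEMP_UNDO_ENABLED=TRUE;
SQL> CREATE GLOBAL TEMPORARY TABLE gtt_orders (id NUMBER, amt NUMBER) ON COMMIT PRESERVE ROWS;
SQL> INSERT INTO gtt_orders VALUES (1, 100);
SQL> SELECT * FROM v$tempundostat;   -- temporary undo statistics for the recent intervals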
7. Backup specific user privilege
In 11g R2, the SYSASM privilege was introduced to perform ASM-specific operations. Similarly, a privilege specific to backup and recovery tasks, SYSBACKUP, has been introduced in 12c to execute backup and recovery commands in Recovery Manager (RMAN). You can therefore create a local user in the database and grant it the SYSBACKUP privilege to perform any backup and recovery related tasks in RMAN, without granting the SYSDBA privilege.
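For example, a minimal sketch of creating such a user (the user name and password are placeholders):
SQL> CREATE USER rman_backup IDENTIFIED BY StrongPwd1;
SQL> GRANT SYSBACKUP TO rman_backup;
You can then connect to RMAN with that user: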
$ ./rman target "username/password as SYSBACKUP"
8. How to execute SQL statement in RMAN
In 12c, you can execute SQL and PL/SQL commands in RMAN without the SQL prefix; they run directly from the RMAN prompt. For example:
RMAN> SELECT username,machine FROM v$session;
RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 121m;
9. Table or partition recovery in RMAN
Oracle database backups are mainly categorized into two types: logical and physical. Each backup type has its own pros and cons. In previous releases, it was not feasible to restore a table or partition from an existing physical backup; in order to restore a particular object, you had to have a logical backup. With 12c R1, you can recover a particular table or partition to a point in time or SCN from RMAN backups in the event of a table drop or truncate.
When a table or partition recovery is initiated via RMAN, the following actions are performed:
The backup sets required to recover the table/partition are identified
An auxiliary database is temporarily configured to a point in time in the process of recovering the table/partition
The required table/partitions are then exported to a dump file using Data Pump
Optionally, the table/partitions can be imported into the source database
Optionally, the table/partitions can be renamed during recovery
An example of a table point-in-time recovery via RMAN (ensure you already have a full database backup from earlier):
RMAN> connect target "username/password as SYSBACKUP";
RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP…'
AUXILIARY DESTINATION '/u01/tablerecovery'
DATAPUMP DESTINATION '/u01/dpump'
DUMP FILE 'tablename.dmp'
NOTABLEIMPORT -- this option avoids importing the table automatically.
REMAP TABLE 'username.tablename': 'username.new_table_name'; -- can rename table with this option.
Important notes:
Ensure sufficient free space is available under the /u01 filesystem for the auxiliary database and for the Data Pump file
A full database backup must exist, or at least backups of the SYSTEM-related tablespaces
The following limitations/restrictions apply to table/partition recovery in RMAN:
SYS user table/partition can’t be recovered
Tables/partitions stored under SYSAUX and SYSTEM tablespaces can’t be recovered
Recovery is not possible when the REMAP option is used to recover a table that contains NOT NULL constraints
10. Restricting PGA size
Before Oracle 12c R1, there was no option to limit and control the PGA size. Although you could set a target with the PGA_AGGREGATE_TARGET initialization parameter, Oracle could increase or reduce the size of the PGA dynamically based on the workload and requirements. In 12c, you can set a hard limit on the PGA by setting the new PGA_AGGREGATE_LIMIT parameter (automatic PGA management must be enabled), thereby avoiding excessive PGA usage.
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=2G;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT=0; --disables the hard limit
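Before picking a limit, it can help to get a rough idea of current PGA consumption; a simple sanity check against V$PGASTAT (not part of the feature itself, just a convenient query):
SQL> SELECT name, ROUND(value/1024/1024) AS mb
     FROM v$pgastat
     WHERE name IN ('total PGA allocated', 'maximum PGA allocated');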
Important notes:
When the PGA limit is exceeded, Oracle automatically terminates/aborts the sessions or processes holding the most untunable PGA memory.
Additions/Enhancements in ASM
Additions/Enhancements in Grid Infrastructure
Additions/Enhancements in Real Application Cluster (database)
1. Additions/Enhancements in Automatic Storage Management (ASM)
Flex ASM
In a typical Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases running on that node. There is a single point-of-failure threat with this setup: if the ASM instance on a node fails, all the databases and instances running on that node are impacted. To avoid this ASM instance single point of failure, Oracle 12c provides the Flex ASM feature. Flex ASM is an altogether different concept and architecture: only a small number of ASM instances need to run on a group of servers in the cluster. When an ASM instance fails on a node, Oracle Clusterware automatically starts a replacement ASM instance on a different node to maintain availability. In addition, this setup also provides ASM instance load-balancing capabilities for the instances running on the nodes. Another advantage of Flex ASM is that it can be configured on a separate node.
When you choose the Flex Cluster option as part of the cluster installation, the Flex ASM configuration is selected automatically, as it is required by a Flex Cluster. You can also have a traditional cluster over Flex ASM. When you decide to use Flex ASM, you must ensure the required networks are available. You can choose the Flex ASM storage option as part of the cluster installation, or use ASMCA to enable Flex ASM in a standard cluster environment.
The following commands show the current ASM mode:
$ ./asmcmd showclustermode
$ ./srvctl config asm
Alternatively, connect to an ASM instance and query the INSTANCE_TYPE parameter. If the value returned is ASMPROX, Flex ASM is configured.
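A quick way to run that check from a node (assuming your environment variables point at the local ASM instance):
$ sqlplus / as sysasm
SQL> SHOW PARAMETER instance_type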
Increased ASM storage limits
The ASM storage hard limits on the maximum number of ASM disk groups and on disk size have been drastically increased. In 12c R1, ASM supports 511 ASM disk groups, against 63 ASM disk groups in 11g R2. Also, an ASM disk can now be 32PB in size, against 20PB in 11g R2.
Tuning ASM rebalance operations
The new EXPLAIN WORK FOR statement in 12c measures the amount of work required for a given ASM rebalance operation and stores the result in the V$ASM_ESTIMATE dynamic view. Using that view, you can adjust the POWER LIMIT clause to improve the rebalancing operation. For example, if you want to measure the amount of work required for adding a new ASM disk before actually running the manual rebalance operation, you can use the following:
SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE;
SQL> EXPLAIN WORK SET STATEMENT_ID='ADD_DISK' FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE WHERE STATEMENT_ID = 'ADD_DISK';
You can adjust the POWER limit based on the output you get from the dynamic view to improve the rebalancing operations.
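Once you are happy with the estimate, you can run the rebalance explicitly with a power value that suits it and watch its progress; the power value below is just an illustration:
SQL> ALTER DISKGROUP DG_DATA REBALANCE POWER 8;
SQL> SELECT operation, state, power, est_minutes FROM v$asm_operation;   -- monitor the rebalance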
ASM Disk Scrubbing
The new ASM disk scrubbing operation, on an ASM disk group with normal or high redundancy, checks for logical data corruption on all ASM disks of that disk group and, if corruption is detected, repairs it automatically using the ASM mirror copies. Disk scrubbing can be performed on a disk group, a specified disk, or a file, and the impact is very minimal. The following examples demonstrate disk scrubbing:
SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;
SQL> ALTER DISKGROUP dg_data SCRUB FILE '+DG_DATA/MYDB/DATAFILE/filename.xxxx.xxxx'
REPAIR POWER AUTO;
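Scrubbing can also be targeted at a single disk in the disk group; a sketch along the same lines, assuming the DISK clause is available in your release (the disk name is hypothetical):
SQL> ALTER DISKGROUP dg_data SCRUB DISK DG_DATA_0001 REPAIR POWER HIGH;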
Active Session History (ASH) for ASM
The V$ACTIVE_SESSION_HISTORY dynamic view now provides active session sampling on ASM instances too. However, use of the Diagnostics Pack is subject to licensing.
2. Additions/Enhancements in Grid Infrastructure
Flex Clusters
Oracle 12c supports two types of cluster configuration at the time of Clusterware installation: Traditional Standard Cluster and Flex Cluster. In a traditional standard cluster, all nodes in the cluster are tightly integrated with each other, interact through a private network, and can access the storage directly. The Flex Cluster, on the other hand, introduces two types of nodes arranged in a Hub-and-Leaf architecture. The nodes in the Hub category are similar to those of a traditional standard cluster: they are interconnected through a private network and have direct read/write access to the storage. The Leaf nodes are different from the Hub nodes: they don't need direct access to the underlying storage; rather, they access the storage/data through the Hub nodes.
You can configure up to 64 Hub nodes, while Leaf nodes can be many more. In an Oracle Flex Cluster, you can have Hub nodes without any Leaf nodes configured, but Leaf nodes cannot exist without Hub nodes. Multiple Leaf nodes can be attached to a single Hub node. In an Oracle Flex Cluster, only the Hub nodes have direct access to the OCR and voting disks. When you plan large-scale cluster environments, this is a great feature to use: this sort of setup greatly reduces interconnect traffic and provides room to scale the cluster beyond a traditional standard cluster.
There are two ways to deploy the Flex Cluster:
While configuring a brand new cluster
Upgrade a standard cluster mode to Flex Cluster
If you are configuring a brand new cluster, you choose the type of cluster configuration during step 3; select the Configure a Flex Cluster option, and you will then have to categorize the Hub and Leaf nodes on step 6. Against each node, select the role, Hub or Leaf, and optionally a Virtual Hostname too.
The following steps are required to convert a standard cluster mode to Flex Cluster mode:
1. Get the current status of the cluster using the following command:
$ ./crsctl get cluster mode status
2. Run the following commands as the root user:
$ ./crsctl set cluster mode flex
$ ./crsctl stop crs
$ ./crsctl start crs -wait
3. Change the node role as per your design:
$ ./crsctl get node role config
$ ./crsctl set node role hub|leaf
$ ./crsctl stop crs
$ ./crsctl start crs -wait
Note the following:
You can’t revert back from Flex to Standard cluster mode
Cluster node mode change requires cluster stack stop/start
Ensure GNS is configured with a fixed VIP
OCR backup in ASM disk group
With 12c, the OCR can now be backed up to an ASM disk group. This simplifies access to the OCR backup files across all nodes. In the event of an OCR restore, you don't need to worry about which node the latest OCR backup is on; you can simply identify the latest backup stored in ASM from any node and perform the restore easily.
The following example demonstrates how to set the ASM disk group as OCR backup location:
$ ./ocrconfig -backuploc +DG_OCR
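After changing the location, you can confirm where the automatic OCR backups are kept with:
$ ./ocrconfig -showbackup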
IPv6 support
With Oracle 12c, Oracle now supports IPv4 and IPv6 network protocol configuration on the same network. You can configure the public network (public/VIP) on IPv4, IPv6, or a combination of the two. However, ensure you use the same IP protocol configuration across all nodes in the cluster.
3. Additions/Enhancements in RAC (database)
What-If command evaluation
Using the new what-if command evaluation (-eval) option with srvctl, you can now determine the impact of running a command. This addition to srvctl lets you simulate the command without actually executing it or making any changes to the current system. It is particularly useful when you want to make a change to an existing system and you're not sure of the outcome; the command reports the effect of making the change. The -eval option can also be used with the crsctl command.
For example, if you want to know what will happen if you stop a particular database, you can use the following example:
$ ./srvctl stop database -d MYDB -eval
$ ./crsctl eval modify resource -attr "value"
Miscellaneous srvctl improvements
There are a few new additions to the srvctl command. The following demonstrates the new addition to stop/start database/instance resources on the cluster:
srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT
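For example, a quick sketch of starting a database only as far as the MOUNT state and then checking it (the database name is a placeholder):
$ ./srvctl start database -d MYDB -startoption MOUNT
$ ./srvctl status database -d MYDB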