Oracle Clusterware uses the voting disk to determine which nodes are members of the cluster and which node is joining or leaving it. A node must be able to access more than half of the voting disks at any time.
Scenario: consider a two-node cluster with an even number of voting disks, say two. Suppose node 1 can access only voting disk 1 and node 2 can access only voting disk 2. Neither node can access more than half of the voting disks, so the clusterware cannot decide which node should survive. If instead there are three voting disks and a node can access at least two of them, that node has access to more than half and can remain in the cluster; this is why an odd number of voting disks is used. The clusterware uses these disks to check the heartbeat of the nodes.
A node not able to do so will be evicted from the cluster by another node that has access to more than half of the voting disks, to maintain the integrity of the cluster. If the voting disks themselves are lost, they can be recreated by starting CRS in exclusive mode on one node (see the sketch after this paragraph). Each node writes a disk heartbeat to the voting disks: the heartbeat counter increments on every write call, once per second, so the heartbeat of each node is recorded at its own offset in the voting disk.
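A minimal sketch of that restore flow, assuming the voting disks live in an Oracle ASM disk group (the disk group name +DATA is an illustrative placeholder, and all commands run as root):

# Stop Clusterware on all nodes, then start it in exclusive mode on one node
crsctl stop crs -f
crsctl start crs -excl
# Recreate the voting disks in the desired disk group
crsctl replace votedisk +DATA
# Return to normal operation and confirm the voting disks are back
crsctl stop crs
crsctl start crs
crsctl query css votedisk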
Healthy nodes continuously exchange network and disk heartbeats between the nodes; a break in either heartbeat indicates a possible error scenario. There are a few different scenarios possible with missing heartbeats: 1. The network heartbeat is successful, but the disk heartbeat is missed. 2. The disk heartbeat is successful, but the network heartbeat is missed. 3. Both heartbeats fail.
In addition to using the automatically created OCR backup files, you should also export OCR contents before and after making significant configuration changes, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, and upgrading, downgrading, or creating a database. Do this by using the ocrconfig -export command, which exports OCR content to a file.
The file format generated by ocrconfig -restore is incompatible with the file format generated by ocrconfig -export. The ocrconfig -export and ocrconfig -import commands are compatible with each other, as are the ocrconfig -manualbackup and ocrconfig -restore commands, but the two file formats are incompatible and must not be used interchangeably. When exporting OCR, Oracle recommends including "ocr", the cluster name, and the timestamp in the name string, as in the example below. Using the ocrconfig -export command also enables you to restore OCR using the -import option if your configuration changes cause errors.
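As an illustration of that naming convention (the backup directory and cluster name mycluster01 are placeholders, not values from the original text):

# Export OCR content to a file that embeds "ocr", the cluster name, and a timestamp (run as root)
ocrconfig -export /u01/app/backup/ocr_mycluster01_20240101.ocr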
In particular, if you have configuration problems that you cannot resolve, or if you are unable to restart Oracle Clusterware after such changes, then restore your configuration using the procedure for your platform. Oracle recommends that you use either automatic or manual backups, and the ocrconfig -restore command, instead of the ocrconfig -export and ocrconfig -import commands, to restore OCR for the following reasons:
Backups are created while the system is online, whereas you must shut down Oracle Clusterware on all nodes in the cluster to get a consistent snapshot with the ocrconfig -export command. You cannot inspect the contents of an export, and you can list backups with the ocrconfig -showbackup command, whereas you must keep track of all generated exports yourself.
Before importing an export file, check whether Oracle Clusterware is running; if it is, stop it on every node (as root on Linux and UNIX systems, or as a member of the Administrators group on Windows). If you are importing OCR to a cluster or network file system, parts of the procedure differ, so follow the platform-specific steps. If the original OCR location does not exist, then you must create an empty (0 byte) OCR location before you run the ocrconfig -import command. A sketch of the overall import flow follows below.
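The sketch below assumes the export file created in the earlier example; the path is illustrative, all commands run as root, and the exact steps for your platform are in the Oracle documentation:

# Stop Oracle Clusterware on every node before importing
crsctl stop crs
# Import the previously exported OCR content
ocrconfig -import /u01/app/backup/ocr_mycluster01_20240101.ocr
# Restart Oracle Clusterware on every node and verify the OCR across the cluster
crsctl start crs
cluvfy comp ocr -n all -verbose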
In Oracle Clusterware 11g release 2, each node in the cluster also keeps a local registry, the Oracle Local Registry (OLR). Multiple processes on each node have simultaneous read and write access to the OLR particular to the node on which they reside, regardless of whether Oracle Clusterware is running or fully functional. For the OLR, Oracle recommends that you use the -manualbackup and -restore commands and not the -import and -export commands. When exporting OLR, Oracle recommends including "olr", the host name, and the timestamp in the name string.
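For illustration, a manual OLR backup and an export following that naming convention might look like this (the backup path and host name rac1 are placeholders; ocrconfig operates on the OLR when given the -local flag, run as root):

# Take a manual backup of this node's OLR
ocrconfig -local -manualbackup
# Export the OLR with "olr", the host name, and a timestamp in the file name
ocrconfig -local -export /u01/app/backup/olr_rac1_20240101.olr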
When you upgrade Oracle Clusterware, it automatically runs the ocrconfig -upgrade command. To downgrade, follow the downgrade instructions for each component and also downgrade OCR using the ocrconfig -downgrade command.
This section covers managing voting disks in your cluster. Voting disk management requires a valid and working OCR, so before you add, delete, replace, or restore voting disks, run the ocrcheck command as root.
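For example (run as root; the output layout varies by version and platform):

# Confirm that the OCR is intact before changing the voting disk configuration
ocrcheck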
If you upgrade from a previous version of Oracle Clusterware to Oracle Clusterware 11g release 2 and want to store voting disks in an Oracle ASM disk group, you must set the disk group's ASM compatibility attribute (compatible.asm) to 11.2. Oracle ASM manages voting disks differently from other files that it stores. Once you configure voting disks on Oracle ASM, you can only make changes to the voting disks' configuration using the crsctl replace votedisk command. This is true even in cases where there are no working voting disks: even if crsctl query css votedisk reports zero voting disks in use, Oracle Clusterware remembers that Oracle ASM was in use, and the replace verb is still required. Only after you use the replace verb to move voting disks back to non-Oracle ASM storage do the verbs add css votedisk and delete css votedisk become usable again. The number of voting files you can store in a particular Oracle ASM disk group depends upon the redundancy of the disk group.
External redundancy: a disk group with external redundancy can store only one voting disk. Normal redundancy: a disk group with normal redundancy stores three voting disks. High redundancy: a disk group with high redundancy stores five voting disks. By default, Oracle ASM puts each voting disk in its own failure group within the disk group. A failure group is a subset of the disks in a disk group; failure groups define disks that share components, such that if one fails then other disks sharing the component might also fail.
Failure groups are used to determine which Oracle ASM disks to use for storing redundant data. For example, if two-way mirroring is specified for a file, then redundant copies of file extents must be stored in separate failure groups. If voting disks are stored on Oracle ASM with normal or high redundancy, and the storage hardware in one failure group suffers a failure, then if there is another disk available in a disk group in an unaffected failure group, Oracle ASM recovers the voting disk in the unaffected failure group.
A normal redundancy disk group must contain at least two failure groups, but if you are storing your voting disks on Oracle ASM, then a normal redundancy disk group must contain at least three failure groups. A high redundancy disk group must contain at least three failure groups. However, Oracle recommends using several failure groups; a small number of failure groups, or failure groups of uneven capacity, can create allocation problems that prevent full use of all of the available storage.
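As a sketch of how such a disk group might be created, three failure groups allow a normal redundancy disk group to hold its three voting files (the disk group name, disk paths, and compatible.asm value below are illustrative assumptions):

# Connect to the ASM instance and create a normal redundancy disk group
# with three failure groups (run as the grid infrastructure owner)
sqlplus -s / as sysasm <<'EOF'
CREATE DISKGROUP votedg NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asm-disk1'
  FAILGROUP fg2 DISK '/dev/asm-disk2'
  FAILGROUP fg3 DISK '/dev/asm-disk3'
  ATTRIBUTE 'compatible.asm' = '11.2';
EOF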
You must specify enough failure groups in each disk group to support the redundancy type for that disk group. Using the crsctl replace votedisk command, you can move a given set of voting disks from one Oracle ASM disk group into another, or onto a certified file system.
If you move voting disks from one Oracle ASM disk group to another, then you can change the number of voting disks by placing them in a disk group with a different redundancy level than the former disk group.
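For example (the disk group name and file system paths are illustrative):

# Move the voting disks into a different Oracle ASM disk group
crsctl replace votedisk +VOTEDG
# Or move them onto a certified cluster file system
crsctl replace votedisk /cluster_fs/vote1 /cluster_fs/vote2 /cluster_fs/vote3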
Question: I hear that a voting disk is important in RAC, but I don't understand the concept of a voting disk. Can you show an example of RAC voting disks? Is a voting disk the same as a quorum disk? How would you define a voting disk? Answer: The voting disk file is a file on a shared cluster file system or a shared raw device. A voting disk is akin to a quorum disk, which helps to avoid split-brain syndrome.