
[MOS] How to backup or restore OLR in 11.2/12c Grid Infrastructure

Published: 2016-12-12 21:42:18

How to backup or restore OLR in 11.2/12c Grid Infrastructure (Doc ID 1193643.1)

In this Document

Goal
Solution
  OLR location
  To backup
  To list backups
  To restore

APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.1.0 and later
Information in this document applies to any platform.

GOAL

Oracle Local Registry (OLR) was introduced in 11gR2/12c Grid Infrastructure. It contains the node-specific local configuration required by OHASD and is not shared between nodes; in other words, every node has its own OLR.

This note provides steps to backup or restore OLR.

SOLUTION

OLR location

The OLR location pointer file is '/etc/oracle/olr.loc' or '/var/opt/oracle/olr.loc' depending on platform. The default location after installing Oracle Clusterware is:

GI Cluster: <GI_HOME>/cdata/<hostname>.olr
GI Standalone (Oracle Restart): <GI_HOME>/cdata/localhost/<hostname>.olr
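To confirm where the OLR actually lives on a given node, the pointer file and ocrcheck can be consulted. A minimal sketch, assuming a Linux system; the olr.loc contents and the Grid home shown are illustrative examples, not fixed values:

# cat /etc/oracle/olr.loc
olrconfig_loc=/opt/app/oracle/grid/11.2.0.1/cdata/node1.olr
crs_home=/opt/app/oracle/grid/11.2.0.1

# <GI_HOME>/bin/ocrcheck -local

ocrcheck -local also reports the OLR version, the space used and whether the local registry integrity check succeeded.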

To backup

OLR is backed up once during GI configuration (installation or upgrade). In contrast to OCR, OLR is NOT backed up automatically again after GI is configured; if further backups are required, they must be taken manually. To take a manual backup of the OLR, use the following command:

# <GI_HOME>/bin/ocrconfig -local -manualbackup
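The manual backup is written to the node-local backup directory (by default <GI_HOME>/cdata/<hostname>), so it can be worth copying it to shared or external storage as well; the destination below is a hypothetical example. The OLR backup directory can also be relocated with ocrconfig -local -backuploc (verify the option with ocrconfig -help on your release):

# cp <GI_HOME>/cdata/<hostname>/backup_<timestamp>.olr /backup/olr/    <========= hypothetical destination
# <GI_HOME>/bin/ocrconfig -local -backuploc /backup/olr                <========= optional, changes the OLR backup directory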

To list backups

To list the backups currently available:

# <GI_HOME>/bin/ocrconfig -local -showbackup

node1 2010/12/14 14:33:20 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20101214_143320.olr
node1 2010/12/14 14:33:17 /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20101214_143317.olr

Clusterware maintains a history of the five most recent manual backups and will not update or delete a manual backup after it has been created.

$ ocrconfig -local -showbackup still shows the manual backups recorded in the registry even if the backup files themselves have been removed or archived from the OS file system with OS commands, as the following example shows:

# ocrconfig -local -showbackup
node1     2014/02/21 08:02:57     /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080257.olr
node1     2014/02/21 08:02:56     /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080256.olr
node1     2014/02/21 08:02:54     /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080254.olr
node1     2014/02/21 08:02:51     /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080251.olr
node1     2014/02/21 08:02:39     /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080239.olr

# ls -ltr /opt/app/oracle/grid/11.2.0.1/cdata/node1
total 38896
-rw-------   1 root     root     6635520 Feb 21 08:02 backup_20140221_080256.olr
-rw-------   1 root     root     6635520 Feb 21 08:02 backup_20140221_080257.olr
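To sanity-check the contents of a particular backup file before relying on it, ocrdump can read a backup directly; this is a sketch assuming the ocrdump release in use accepts the -stdout, -local and -backupfile options, and the file name is taken from the listing above:

# <GI_HOME>/bin/ocrdump -stdout -local -backupfile /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080257.olr | more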

To restore

Make sure the GI stack is completely down and ohasd.bin is not running; use the following command to confirm:

ps -ef | grep ohasd.bin

This should return no process. If ohasd.bin is still up and running, stop it on the local node:

# <GI_HOME>/bin/crsctl stop crs -f    <========= for GI Cluster
OR
# <GI_HOME>/bin/crsctl stop has       <========= for GI Standalone
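After stopping the stack, a quick way to double-check that nothing from the local GI stack is still running before attempting the restore (the process-name pattern below is only a convenience, not an exhaustive check):

# ps -ef | grep -E 'ohasd|crsd|ocssd|evmd' | grep -v grep
# <GI_HOME>/bin/crsctl check has      <========= should report that Oracle High Availability Services is not online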

Once it's down, restore with the following command: 

# <GI_HOME>/bin/ocrconfig -local -restore <olr_backup_file>
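For example, to restore from one of the manual backups shown earlier (the Grid home and backup file path are illustrative only):

# /opt/app/oracle/grid/11.2.0.1/bin/ocrconfig -local -restore /opt/app/oracle/grid/11.2.0.1/cdata/node1/backup_20140221_080257.olr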

If the command fails, create a dummy OLR file, set the correct ownership and permissions, then retry the restore command:

# cd <OLR_location>
# touch <hostname>.olr
# chmod 600 <hostname>.olr
# chown <grid_user>:<oinstall_group> <hostname>.olr
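As a concrete illustration, assuming a clustered GI home of /opt/app/oracle/grid/11.2.0.1, a node named node1 and a grid owner grid in group oinstall (all hypothetical values), the dummy OLR would be created like this:

# cd /opt/app/oracle/grid/11.2.0.1/cdata
# touch node1.olr
# chmod 600 node1.olr
# chown grid:oinstall node1.olr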

Once it's restored, GI can be brought up:

# <GI_HOME>/bin/crsctl start crs    <========= for GI Cluster
OR
$ <GI_HOME>/bin/crsctl start has    <========= for GI Standalone, this must be done as the grid user
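Once the stack is back up, the restored OLR and the stack itself can be verified with the standard checks below (run as root, or as the grid user for Oracle Restart):

# <GI_HOME>/bin/ocrcheck -local     <========= local registry integrity check should succeed
# <GI_HOME>/bin/crsctl check crs    <========= for GI Cluster
$ <GI_HOME>/bin/crsctl check has    <========= for GI Standalone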

About Me

...............................................................................................................................

● This article is a reproduction of a MOS note (Doc ID 1193643.1).


● All rights reserved. You are welcome to share this article; please keep the source attribution when reposting.

...............................................................................................................................


From the ITPUB blog, link: http://blog.itpub.net/26736162/viewspace-2130323/. If reposting, please credit the source; otherwise legal liability may be pursued.
